Breaking the glass box: achieving ‘explainability’ that actually explains

Tied to the growing popularity of machine learning (ML) tools is the need to explain their underlying rationale. But buzzwords like ‘glass box’ are steering the explainability conversation off course. Meanwhile, without proper investment in the tech innovations and governance methods needed to validate ML properly, it could proliferate throughout the financial industry without the necessary safeguards.

Maryam Akram ([email protected])

From black box to glass box

As a highly iterative process, often built on multi-layered neural networks, ML requires considerable computational and processing power. Thanks to recent technology developments, that power is now available, enabling users to crunch data sets of unprecedented size. As long as ML has a large enough set of good-quality data, it can significantly improve process automation and pattern identification – and consequently its use is booming. But there is a trade-off: the potential accuracy afforded by ML comes at the cost of ‘explainability’ – the ability to see inside the ‘black box’ that confounds many users of artificial intelligence (AI).

In the context of ML, explainability refers to the ease with which the decision-making processes within deep neural networks can be elucidated. Having a transparent understanding of the decision-making process within a model is imperative for financial institutions (FIs). All decisions a model makes should be defensible, and that defensibility is increasingly required by regulators.

The explainability issue is becoming inseparable from ML, and is having an impact – even stopping promising projects in their tracks. In November 2018, for example, BlackRock blocked the deployment of two AI models developed to forecast market volatility and redemption risk because it felt they lacked explainability. Unintelligible ML can not only cause FIs to fall foul of regulators, it can also lead to wasted investment if projects are blocked late in their development.

As the issue of explainability has intensified, the expression ‘glass box’ has emerged as a catch-all term for ‘intelligible’ technology. But trying to divide technology into intelligible/explainable or unintelligible/unexplainable creates a false dichotomy. Unfortunately, there is no guaranteed method for creating a ‘glass box’. 

Used as a buzzword, the term ‘glass box’ (and the concept behind it) does little to further the necessary investigative discourse around the need for ‘explainable AI’ (XAI). The distraction created by the hype around explainability prevents a proper discussion of the topic – which is a shame when there are real governance measures and technology innovations available now that can support the validation of ML.

Governance first: is ML the right fit?

To begin with, the issue of explainability can be circumvented by not implementing ML in the first place. As FIs and vendors invest in cutting-edge technology to engage with the ‘AI revolution’, they need to look beyond the hype and assess which tools are suitable for the task at hand. ML is not always the right candidate for the job – in some cases existing statistical techniques, such as a linear regression, are suitable without the additional explainability risk. In other cases – such as segmentation in anti-money laundering (AML) compliance – alternative tools, such as topological data analysis*, can perform the task with a higher degree of explainability. 
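To illustrate why a simple statistical technique can sidestep the explainability problem, the sketch below fits an ordinary least-squares regression to synthetic data: each fitted coefficient can be read off directly as ‘the prediction moves by this amount per unit of this input’. The feature names and data here are invented for illustration.

```python
# A linear regression is explainable by construction: each coefficient
# states how much the prediction moves per unit change in that input.
# Feature names and data are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # two synthetic risk factors
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Ordinary least squares via a least-squares solve (no ML library needed)
X1 = np.column_stack([X, np.ones(len(X))])          # append an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, c in zip(["factor_a", "factor_b", "intercept"], coef):
    print(f"{name}: {c:+.3f}")
```

The entire ‘explanation’ of this model is the three printed numbers – there is nothing hidden to elucidate, which is precisely the property deep neural networks lack.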

Technology: toward truly explainable AI

Technology innovation is another approach to the explainability issue. A variety of methods for explanation are in development – notably automatic rule extraction, linear proxy models and salience mapping – and together they comprise the growing area of XAI (see Figure 1). They face several common challenges, however, notably the trade-off between accuracy and interpretability. Generally, the more interpretable an explanation is, the less complete it is: detail and accuracy inevitably get ‘lost in translation’.
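To make one of these approaches concrete, the sketch below shows the idea behind a linear proxy model (in the spirit of LIME-style local surrogates): sample perturbations around a single input, query the opaque model on them, and fit a locally weighted linear model whose coefficients serve as the explanation. The black-box function, kernel width and features here are all invented for illustration, not taken from any specific product.

```python
# Linear proxy model: explain one prediction of an opaque model by fitting
# a weighted linear approximation in a small neighborhood of the input.
import numpy as np

def black_box(X):
    # Stand-in for an opaque model: nonlinear in both features
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])          # the single prediction we want to explain
rng = np.random.default_rng(1)

# 1. Sample perturbations around x0 (cf. 'perturbations' in Figure 1)
Z = x0 + rng.normal(scale=0.1, size=(500, 2))

# 2. Query the black box on the perturbed inputs
yz = black_box(Z)

# 3. Weight samples by proximity to x0, then solve weighted least squares
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.02)
Z1 = np.column_stack([Z, np.ones(len(Z))])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(Z1 * sw[:, None], yz * sw, rcond=None)

# Locally the true sensitivities are cos(0.5) for the first feature
# and 2 * 1.0 for the second; coef[:2] should approximate them.
print(coef[:2])
```

The printed slopes are the ‘explanation’: a statement of how each input drives this one prediction. The illustration also shows the trade-off described above – the linear proxy is faithful only near x0, so completeness has been sacrificed for intelligibility.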

Nevertheless, despite these challenges, innovative XAI technology can help FIs address the explainability issue. And although these solutions require considerably more investment than so-called ‘glass-box’ options, ultimately they will prove more effective.

Figure 1: Some possible approaches to XAI


Image created by Chartis Research, based on LH Gilpin, D Bau, BZ Yuan, A Bajwa, M Specter and L Kagal, “Explaining Explanations: An Overview of Interpretability of Machine Learning,” 2019.**

Avoid the hype and consider projects carefully

Ultimately, as the issue of explainability rises to prominence, FIs should be wary of ‘glass box’ hype. Amid the buzz around ML and AI, model explainability is essential to understanding how and why a machine reached a particular conclusion. Hyped-up terms such as ‘glass box’ are progressively hijacking the explainability discussion and driving it off course. By haphazardly characterizing technology as ‘glass box’, we also create unrealistic expectations of transparency.

As practical real-world applications of ML materialize in the financial industry, FIs should first consider whether ML is the most appropriate tool for specific use cases. From a technology standpoint, explanatory methods are still evolving, and it is vital that FIs stay up to date with the latest developments – and their potential challenges. As the BlackRock example shows, FIs do not want to end up in a position where models they have invested in and developed cannot practically be deployed. Nor do they want to reach a stage where retrospective action to make ML explainable is costly and inefficient.

Finally, and of equal importance, the issue of explainability will inevitably become a prominent feature on regulators’ radar. Simply categorizing technology as ‘glass box’ will not deflect their scrutiny. And without proper investment in the development of tech innovations and governance methods to validate ML, we run the risk of ML proliferating throughout the financial services industry without the necessary safeguards in place.

*A technique that uses the spatial relationships in data to build geometric representations of the underlying data set.

**Perturbations: disturbances that cause a system to modify its behavior.

Further reading

Model Validation Solutions, 2019

“Explaining Explanations: An Overview of Interpretability of Machine Learning,” by Gilpin et al., Massachusetts Institute of Technology, February 2019

Points of View are short articles in which members of the Chartis team express their opinions on relevant topics in the risk technology marketplace. Chartis is a trading name of Infopro Digital Services Limited, whose branded publications consist of the opinions of its research analysts and should not be construed as advice.

If you have any comments or queries on Chartis Points of View, you can email the individual author, or email Chartis at [email protected].