Regulators need a robust taxonomy of tools before tackling AI
Regulating AI is a challenge that must and will be faced. Central to effective regulation will be a robust, accurate taxonomy of the multiplicity of available AI techniques.
Iga Pilewska ([email protected])
The challenge of regulating AI in financial services
As the number of artificial intelligence (AI) projects in the financial sector continues to increase, there is a growing need to create a more robust legislative framework around them. This is for two main reasons:
- To encourage the development of workable and reliable solutions.
- To protect users and establish a clear line of responsibility for the ‘autonomous’ algorithms at the heart of many AI tools.
But while the tools commonly referred to as AI – such as machine learning (ML), natural language processing and robotic process automation – can be applied broadly across credit scoring, financial crime risk management, data management and trading, individual tools perform highly specific functions (Naïve Bayes for data filtering, for example, or neural networks for voice recognition).

One reason legislators will struggle to develop ‘holistic’ AI regulations covering all sectors of financial services is that individual sectors already adhere to specific rules that will automatically take precedence. Fiduciary requirements, for example, or laws designed to tackle financial crime, will override AI-specific regulations.
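The point about tool specificity can be made concrete. A Naïve Bayes data filter, of the kind mentioned above, is a narrow statistical classifier rather than a general-purpose ‘AI’; the sketch below shows a minimal multinomial Naïve Bayes text filter with Laplace smoothing. The training data, labels and class names are invented for illustration only.

```python
import math
from collections import Counter

# Toy training set: text snippets labeled 'spam' or 'ham' (illustrative only).
train = [
    ("win cash prize now", "spam"),
    ("cheap loans win big", "spam"),
    ("meeting agenda attached", "ham"),
    ("quarterly risk report attached", "ham"),
]

# Per-class word frequencies and class priors.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Return the class with the highest log-posterior, using add-one smoothing."""
    scores = {}
    for label in word_counts:
        total = sum(word_counts[label].values())
        # Log prior for the class.
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.split():
            # Laplace (add-one) smoothing over the vocabulary.
            p = (word_counts[label][word] + 1) / (total + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("win a cash prize"))  # → spam (under this toy data)
```

The narrowness is the point: the same model structure says nothing about, say, voice recognition, which is why a one-size-fits-all ‘AI rule’ maps poorly onto the underlying techniques.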
More generally, legislators face several issues in attempting to regulate AI:
- Firstly, they cannot ignore it. Having no AI regulation at all could lead to an increase in mistakes and biases, as well as the risk of malpractice, with attendant lengthy and complicated investigations.
- Secondly, they cannot go too far the other way. Highly detailed regulations will be time-consuming to implement; as a potentially unwanted consequence, hefty compliance costs could impede the development of AI tools. In a worst-case scenario such regulations could even unnerve potential investors and stifle innovation, which is not in the interests of industry players, legislators or governments.
- Thirdly, legislators face an issue over complexity: other regulations that target data could impact the data that is available for AI tools to ingest. This sort of indirect regulatory overhang could significantly increase financial firms’ compliance challenges.
But in the drive for transparency, regulation is inevitable
Despite the challenges, though, regulation of AI is inevitable. It will also help to accelerate the trend toward transparent and explainable AI tools. Regulators and chief risk officers (CROs) increasingly demand explainability from developers and users, so that they can understand why a system produced a given result, and this scrutiny already extends in part to AI. Demand is growing for clarity around the AI tools that support regulated activities such as stress testing, which in turn increases the focus on reporting and data lineage. According to a representative at BlackRock, the firm recently chose not to deploy two AI models because they were not explainable, even though they outperformed traditional modeling approaches.
Direct guidelines and policies on AI are already emerging (see Figure 1), although these tend to be broad initiatives that focus either on boosting investment in AI (as in the UK, China and India) or on establishing a guide for actual regulation (as in the US and the EU).
Figure 1: Current regulatory initiatives for AI
Source: Chartis Research
A team effort is needed
To develop accurate, specific regulatory frameworks, everyone involved in shaping AI projects in financial services – vendors, users and regulators – should be attempting to reach a consensus about the acceptable trade-offs (such as AI tools’ efficiency versus their reliability). So far, however, there is confusion in the AI landscape. Many companies claim to use AI when in fact they do not, while others deploy advanced AI algorithms without explanatory frameworks, causing misunderstanding and a lack of transparency.
Vendors in financial services should develop explainable AI tools, and collaborate with legislators (attending consultations and workgroups, for example) to help them move to ‘glass box’ AI and create the conditions in which AI can thrive. AI developers know where problems can arise and what successful projects require, and they should seek to establish a dialogue with supervisory bodies to communicate that information.
Regulators need a specific but flexible framework that can be applied to continuously evolving algorithms. Key to this will be properly defining AI. This will allow developers to advance the technology while minimizing its incorrect usage, and attract investors while simultaneously protecting AI users from unforeseen errors. To do this regulators will most likely need to develop principles-based regulation.
Vital to all these efforts is a taxonomy of statistical techniques in financial services, to guide and link regulators, developers and investors through the many AI techniques and their applications. This can enable them to develop high-performance tools while establishing the capabilities to spot and verify malpractice (such as violation of the General Data Protection Regulation [GDPR]).
Toward an AI taxonomy
Chartis believes that only after establishing a taxonomy and best practices for implementing AI tools can we reach a shared understanding of what AI actually is and how it should be used most effectively. Once these elements are established, industry players can move toward a framework in which to develop AI regulations.
Based on its view that AI is a collection of varied statistical techniques, such as evolutionary programming and ML, Chartis is currently developing a taxonomy for its STORM (Statistical Techniques, Optimization and Risk Management) research program. This project aims to unify ‘quantitative’ methods, including AI algorithms, to enable vendors and financial institutions (FIs) to establish best practice in areas such as derivatives pricing, simulation, portfolio optimization, and performance. Chartis believes that this can help vendors migrate to an improved operating environment that appreciates the many nuances of statistical techniques; and help FIs make the right choices among the systems, services and tools increasingly available to them.
In establishing a standardized AI framework, regulators can help FIs develop sound analytics environments. Different statistical techniques can have very different underlying mechanics, and each presents its own risks and challenges. Many AI methods also fall into different category buckets (ML or evolutionary programming, for example, can be classified as statistical techniques and optimization techniques).
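The overlapping-bucket problem described above can be sketched as a simple data structure. Note that the category names and assignments below are illustrative assumptions for this sketch, not the actual STORM taxonomy.

```python
# Hypothetical taxonomy fragment: each technique maps to the set of
# category buckets it belongs to. Names and assignments are invented
# for illustration and do not reflect the real STORM classification.
taxonomy = {
    "machine learning":         {"statistical technique", "optimization technique"},
    "evolutionary programming": {"statistical technique", "optimization technique"},
    "naive Bayes":              {"statistical technique"},
    "portfolio optimization":   {"optimization technique"},
}

def techniques_in(category):
    """All techniques classified under a given category bucket."""
    return sorted(t for t, cats in taxonomy.items() if category in cats)

# Techniques can legitimately appear in more than one bucket, which is
# precisely what makes single-category regulation awkward:
print(techniques_in("optimization technique"))
```

A regulator relying on a single-category view would miss that ML and evolutionary programming sit in both buckets; a shared taxonomy makes such overlaps explicit.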
What’s more, there are often significant challenges for new analytics, and a host of myths and miscommunication around AI itself may confuse things further. That is why a taxonomy of statistical processes, like STORM, is necessary for a shared understanding of what quantitative approaches termed ‘AI’ actually are, and how they should be used most effectively. And such a taxonomy could be a meaningful first step toward comprehensive top-down AI regulation.
Points of View are short articles in which members of the Chartis team express their opinions on relevant topics in the risk technology marketplace. Chartis is a trading name of Infopro Digital Services Limited, whose branded publications consist of the opinions of its research analysts and should not be construed as advice.
If you have any comments or queries on Chartis Points of View, you can email the individual author, or email Chartis at [email protected].