The State of AI in Risk Management

Spotlighting collaborative research between Chartis and our research partner Tata Consultancy Services (TCS), the report analyzes the adoption of AI tools by risk departments across financial services.

Among financial institutions (FIs), the term ‘artificial intelligence’ (AI) is no longer just a buzzword. AI has become an important tool with use cases in a variety of financial-services contexts. In this report, we explore the current state of AI in risk and compliance, examining several key themes:

  • The overall maturity of AI tools.
  • How AI maturity looks in different contexts (e.g., across different types of institution).
  • The ways in which AI tools are used across the risk and compliance value chain.

In this report we argue that the maturity of AI use varies considerably across FIs, both by type of institution and at business-line level. With few exceptions, we find that the financial industry is still playing ‘catch-up’ in AI terms. For many firms the experimental AI phase is ongoing, with practical use cases still emerging. Even in the many larger institutions with more experience of AI, today’s projects are likely to be the first in which AI is deployed at scale and across a broad range of use cases that span organizational silos.

The application of AI tools also varies considerably by use case. For example, AI is relatively widespread in the area of data management, where specific tools (such as machine learning [ML], natural language processing [NLP], and graph analytics [GA]) have proved particularly suited to certain applications. To deliver data-driven projects effectively, however, institutions must have access to the right data sources and the right expertise to manage them.

FIs in all market segments are making effective use of third-party AI applications; for example:

  • Exploiting alt-data in capital markets and investment management to map the terms of loans and bonds into structured databases (a simple illustrative sketch of this type of extraction follows this list). 
  • Exploiting alt-data and media data (both traditional and social media) to drive credit risk review triggers and remedial actions. 
  • Leveraging a variety of external data (alt-data, vendor enriched data sets, social media data, etc.) for client screening in financial crime risk management.
  • Leveraging historical data for regulatory risk analytics.
  • Using neural networks to prepare the data that feeds credit scoring models, or using supply chain, social media and other alt-data in credit analysis.
  • Embedding AI to map and classify customer and counterparty behavior for behavioral analytics modeling. In areas such as credit scoring, there are regulatory challenges to the direct use of customer profiling and behavioral analytics. However, more indirect uses – for example, in financial crime controls, behavioral analytics for asset and liability management (ALM) and balance sheet management, or the embedding of behavioral models in securities pricing and trading – have not faced comparable challenges.
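
As a purely illustrative sketch of the first bullet above (mapping free-text loan and bond terms into structured records), the Python snippet below uses simple pattern matching to extract a coupon, maturity date and notional. The field names and patterns are hypothetical assumptions made for this example; a production application would typically rely on trained NLP models rather than hand-written rules.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class BondTerms:
    """Structured record built from a free-text term description (illustrative fields only)."""
    coupon_pct: Optional[float]
    maturity: Optional[str]
    notional: Optional[float]

# Hypothetical patterns for this sketch; real term sheets vary far more widely.
COUPON_RE = re.compile(r"coupon of ([\d.]+)%", re.IGNORECASE)
MATURITY_RE = re.compile(r"maturing (?:on )?(\d{1,2} \w+ \d{4})", re.IGNORECASE)
NOTIONAL_RE = re.compile(r"notional of (?:USD|EUR|GBP)?\s?([\d,]+)", re.IGNORECASE)

def extract_terms(text: str) -> BondTerms:
    """Map an unstructured term description into a structured record."""
    coupon = COUPON_RE.search(text)
    maturity = MATURITY_RE.search(text)
    notional = NOTIONAL_RE.search(text)
    return BondTerms(
        coupon_pct=float(coupon.group(1)) if coupon else None,
        maturity=maturity.group(1) if maturity else None,
        notional=float(notional.group(1).replace(",", "")) if notional else None,
    )

if __name__ == "__main__":
    sample = ("5-year senior bond with a coupon of 4.25%, "
              "notional of USD 250,000,000, maturing on 15 June 2029.")
    print(extract_terms(sample))
```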

Indeed, segmentation and behavioral analytics are emerging as some of the strongest candidates for the practical, real-world application of AI in risk management – both are data-intensive, and both carry a relatively low risk of failure. A key theme emerging from our research is that AI applications work well where analytics require high-dimensional, multi-parameter classification or optimization against fuzzy or highly non-linear variables. This is especially true when they are used for internal analysis rather than for regulatory compliance or reporting purposes; where AI is applied to internal analysis, we found the depth and maturity of its usage to be typically higher.
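
To make the idea of multi-parameter segmentation concrete, the sketch below clusters counterparties on a handful of invented behavioral features using k-means (here via scikit-learn). The feature set, the number of segments and the library choice are all assumptions made for illustration; this is a minimal sketch of the type of analysis described above, not a description of any surveyed firm’s approach.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical behavioral features per counterparty:
# [avg. monthly transaction count, avg. ticket size, days since last activity, product-holding count]
rng = np.random.default_rng(42)
behavior = rng.normal(loc=[120, 5_000, 14, 3], scale=[40, 2_000, 10, 1], size=(500, 4))

# Standardize so that no single feature dominates the distance metric.
scaled = StandardScaler().fit_transform(behavior)

# Partition counterparties into behavioral segments; the segment count (4) is an
# arbitrary choice for this sketch and would normally be tuned and validated.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)

# Segment labels can then feed downstream analytics, e.g. segment-level monitoring
# or flagging counterparties that drift away from their own segment's profile.
for s in range(4):
    print(f"segment {s}: {np.sum(segments == s)} counterparties")
```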

At this stage in the development cycle of AI, we believe that the popularity of certain applications is dictated by two key factors:

  • What the application can do.
  • The level of regulatory incidence.

We believe it is likely that both drivers will provide the foundations for a broader and more complex set of applications in the future. 

To explore some of these issues in more detail, Chartis Research and Tata Consultancy Services (TCS) undertook a joint research project on adoption trends in the use of AI in risk management and regulatory compliance. This unique project has enabled us to develop an AI adoption roadmap for risk management, highlighting key approaches for the future success of AI projects.

Our study consisted of a quantitative survey of 101 industry participants, together with 65 targeted interviews with senior risk and compliance decision makers operating in this space. This body of research sheds new light on the adoption journey that many FIs are taking with AI, and the pitfalls and successes they are encountering along the way. Our research also includes a detailed view of successful AI strategies and areas of implementation, which readers will find in Section 5 of this report. 

1. ‘Regulatory incidence’ refers to the prevalence of regulatory oversight and sanctions. 
