Why is AI Explainability important for regulated industries? What are some of the challenges that such sectors may face if it is not in place? This article discusses some of these aspects.

Intelligent decision systems have become widely prevalent in recent years due to their ability to make faster decisions by learning systematic patterns in data. In the pursuit of ever more efficient solutions, increasingly complex models are introduced daily. Though these complex models effectively support the decision-making process, understanding their black-box structure has become a matter of concern.
To bring interpretability and a clear understanding to AI models, model developers should consider three crucial factors [1]:

  • Understandable data distribution
  • Understandable algorithmic functions
  • Interpretable prediction rules to check compliance with business regulations and standards

Explainable AI (XAI) is the concept introduced to bring these three factors to complex AI models used in real-time predictive modeling tasks.
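The third factor, interpretable prediction rules, can be illustrated with a minimal sketch: a toy linear credit-scoring model whose decision decomposes into per-feature contributions that can be checked against lending rules. The feature names, weights, and threshold below are hypothetical, for illustration only.

```python
# Hypothetical linear scoring model: each feature's contribution to the
# final score is visible, so the decision can be explained and audited.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
BIAS = 0.1
THRESHOLD = 0.0

def explain_decision(applicant: dict) -> dict:
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

result = explain_decision({"income": 1.2, "debt_ratio": 0.8, "years_employed": 3.0})
print(result)
```

A real deployment would of course use a fitted model and attribution methods such as SHAP rather than fixed weights; the point is that the mapping from inputs to the decision is inspectable.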

The main reason is to improve the trust and transparency of AI decisions.

In highly regulated industries such as banking, the decisions of machine learning models affect real-life entities. Given the sensitivity of these decisions and their far-reaching effects, it is especially important to understand how a black-box model derives a decision, ideally by tracing the step-by-step treatment the input data undergoes on the way to a decision that impacts real-world entities. According to van den Berg et al., the complex black-box models currently used in the financial sector are not easily interpretable for stakeholders [2]. That work therefore highlighted the need for an explainable framework that helps stakeholders in commercial institutions such as banks obtain clearance from regulators and policy makers for the use of such AI models.

To understand the need for XAI in the finance sector, a research study [2] examined three use cases in which AI was used in financial institutions in the Netherlands. The study collected data through semi-structured interviews with respondents from both banks and supervisory authorities, and discussed the necessity of XAI in use cases such as consumer credit, credit risk, and anti-money laundering.

  • Consumer credit – Regarding consumer credit, the use of simple AI models in combination with traditional models is a frequent practice. To ensure compliance with lending standards, banks must adhere to clear guidelines set by supervisory authorities that dictate what is and is not allowed when issuing loans to consumers. When the AI model’s results differ from those produced by traditional models, an XAI framework can be useful in explaining these deviations.
  • Credit Risk – Credit risk management involves assessing a bank’s internal risk and capital requirements, focusing specifically on its mortgage lending risk portfolio. Simple AI models can be used to determine the probability of default for each consumer, but only for internal risk calculations. Communication with consumers is unnecessary, and the primary stakeholders are LOD1 (the first line of defense) and supervisory authorities. While a more complex model may be more effective, financial institutions must comply with regulations like the CRR (Capital Requirements Regulation), which require the use of transparent models.
  • AML (Anti Money Laundering) – To comply with the AML and CTF (Counter Terrorism Financing) Act, banks often use an AI system in combination with a rule-based transaction monitoring system to detect fraudulent activity and minimize false positives. The use of a more complex AI model can significantly improve performance. The results of the AI model are then provided to AML investigators for further analysis. An XAI framework can be useful in managing any associated risks and deploying complex models in a controlled manner. Additionally, different stakeholders may have varying requirements for explainability, including technical explainability, which details how inputs relate to outputs, and process explainability, which covers the entire end-to-end system and data quality responsibility.
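The AML setup described above, a rule-based monitor combined with a model score whose outputs are handed to investigators, can be sketched as follows. The rule names, thresholds, and the scoring stub are hypothetical, for illustration only.

```python
# Sketch: combine a rule-based transaction monitor with a model score and
# surface both to AML investigators as the explanation of an alert.
RULES = {
    "large_cash_deposit": lambda tx: tx["amount"] > 10_000 and tx["type"] == "cash",
    "rapid_movement": lambda tx: tx["hops_24h"] >= 3,
}

def model_score(tx: dict) -> float:
    """Stand-in for a complex model; here a simple heuristic."""
    return min(1.0, tx["amount"] / 50_000 + 0.2 * tx["hops_24h"])

def triage(tx: dict, alert_threshold: float = 0.7) -> dict:
    fired = [name for name, rule in RULES.items() if rule(tx)]
    score = model_score(tx)
    return {
        "alert": bool(fired) or score >= alert_threshold,
        "rules_fired": fired,           # process explainability: which rules hit
        "model_score": round(score, 2), # technical explainability: model output
    }

out = triage({"amount": 12_000, "type": "cash", "hops_24h": 1})
print(out)
```

Returning both the fired rules and the model score in the alert record is one way to serve the different stakeholder needs mentioned above: investigators see why the alert fired, while auditors can trace the end-to-end process.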


Adopting XAI in regulated sectors also brings its own challenges:

  • The XAI framework should increase trust among stakeholders by providing explanations tailored to their needs. [4]
  • When the data drifts due to external conditions, the patterns learned by the prediction model change, so model explanations may not remain valid over time. This creates a need to re-validate and communicate the changes with every drift. [4]
  • Easily interpretable models could help attackers understand the nature of the features used in the model, paving the way for adversarial attacks that degrade model performance.
  • Explainability might also help intruders understand the inner workings of the prediction models, leading to leakage of proprietary procedures. By reverse engineering the rules, intruders might find loopholes to bypass the scrutiny imposed by banks and regulators.
  • To bring about a complete understanding of AI modeling processes at all levels, organizations would need to upskill their talent pool in the relevant modeling technologies. [3]
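The drift concern in the list above can be made concrete with a standard population-stability check: when the score distribution in production diverges from the training-time baseline, explanations should be re-validated. A minimal sketch, assuming binned score proportions; the 0.2 PSI threshold is a common rule of thumb, not a regulatory requirement.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.
    Both inputs are lists of bin proportions summing to 1."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at training time
current = [0.10, 0.20, 0.30, 0.40]   # distribution observed in production

drift = psi(baseline, current)
print(f"PSI = {drift:.3f}")
if drift > 0.2:  # common rule-of-thumb threshold for significant drift
    print("Drift detected: re-validate model and refresh explanations")
```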


Looking ahead, some promising directions include:

  • Building a sophisticated framework for validating explainability and code compliance
  • Creating more domain-specific explainable modeling tools
  • Using generative AI frameworks for interactive explainability of models and rules, enabling customized explanations for stakeholders based on their domain expertise and language preference

[1] https://neo4j.com/blog/ai-graph-technology-ai-explainability/
[2] van den Berg, M., & Kuiper, O. (2020). XAI in the Financial Sector: A Conceptual Framework for Explainable AI (XAI).
[3] https://www2.deloitte.com/us/en/insights/industry/financial-services/explainable-ai-in-banking.html
[4] de Bruijn, H., Warnier, M., & Janssen, M. (2022). The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making. Government Information Quarterly, 39(2), 101666.
