Artificial Intelligence in Banking:

From Challenger Models to the Validation of High-Risk Production Models

Artificial Intelligence in Banking – A Paradigm Shift

Artificial intelligence (AI) is increasingly revolutionizing the banking sector. While many institutions recognize the potential value, they face significant challenges: regulatory requirements, data protection, and integration into existing processes. Since the introduction of powerful AI models like ChatGPT, access to AI technologies has become more straightforward, and with it the opportunities for banks have grown.

From the automation of administrative processes to highly complex risk-management models, AI applications offer enormous potential. Particularly in model-based risk assessment and compliance, machine learning (ML) opens up entirely new possibilities.

In this article, we examine the perspectives of three institutions on the use of AI and ML in the banking sector. Our analysis is based on the following authoritative publications:

  • The Artificial Intelligence Act (AI Act) of the European Union (EU)
  • The discussion paper of the European Banking Authority (EBA)
  • The principles paper “Big Data and Artificial Intelligence” by the Federal Financial Supervisory Authority (BaFin)

At the end of this article, we develop a practical approach for introducing AI and ML methods into banks’ risk models and provide a clear recommendation for their implementation.

ML in Banking Regulation: The EBA Perspective

The European Banking Authority (EBA) recognizes machine learning (ML) as a valuable complement to internal ratings-based (IRB) models for credit risk assessment. ML offers the opportunity to improve existing approaches, increase predictive power, and simultaneously meet the requirements for robust governance and explainability. In its discussion paper (2021) and follow-up report (2023), the EBA highlights several key advantages:

1. ML as a Challenger Model

The use of ML as a challenger model provides banks with a low-risk environment in which to analyze the strengths and weaknesses of existing IRB models.

  • ML models enable automated testing routines that can continuously evaluate conventional models and identify weaknesses.
  • This approach not only builds confidence in existing models but also lays the foundation for a gradual introduction of ML into productive processes.
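The challenger setup described above can be sketched in a few lines: a classical scorecard-style model and an ML challenger are trained on the same data and compared on discriminatory power. The dataset, model choices, and metric below are illustrative assumptions, not a supervisory requirement.

```python
# Sketch: benchmarking an ML challenger against a classical champion model.
# Synthetic data and model choices are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Champion: a linear model standing in for a classical scorecard approach.
champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Challenger: a non-linear ML model run in parallel, not in production.
challenger = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

auc_champion = roc_auc_score(y_test, champion.predict_proba(X_test)[:, 1])
auc_challenger = roc_auc_score(y_test, challenger.predict_proba(X_test)[:, 1])
print(f"champion AUC:   {auc_champion:.3f}")
print(f"challenger AUC: {auc_challenger:.3f}")
```

In practice, such a comparison would be run routinely in validation, with persistent gaps between champion and challenger performance triggering a review of the production model.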

2. Optimization of Model Validation

ML models can address conventional weaknesses of traditional approaches through their higher flexibility and non-linear data processing:

  • They identify complex relationships in data that are difficult to capture with linear models.
  • By comparing ML and classical models, institutions can identify potential biases or misclassifications early and correct them systematically.
  • This capability for precise analysis strengthens not only model validation but also supervisory authorities’ confidence in model quality.

3. Automated Compliance

A key advantage of ML lies in the ability to fulfill regulatory requirements more efficiently:

  • ML models can largely automate the documentation of model decisions, saving time and resources.
  • They enable real-time monitoring of models to immediately detect potential violations of regulatory requirements or thresholds.
  • Particularly for reporting obligations in credit risk assessment, ML provides significant relief as data can be prepared faster and more accurately.
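One common building block of such real-time monitoring is a drift statistic on model inputs or scores, such as the Population Stability Index (PSI). The following is a minimal sketch; the alert threshold of 0.25 is a widespread rule of thumb, not a regulatory value.

```python
# Sketch: monitoring score drift with the Population Stability Index (PSI).
# The 0.25 alert threshold is a common rule of thumb, not a regulatory figure.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare the live distribution of a score against its reference."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # scores at model approval
stable    = rng.normal(0.0, 1.0, 10_000)   # current scores, no drift
shifted   = rng.normal(0.8, 1.0, 10_000)   # current scores, portfolio shift

psi_stable = psi(reference, stable)
psi_shifted = psi(reference, shifted)
print(f"PSI stable:  {psi_stable:.3f}")
print(f"PSI shifted: {psi_shifted:.3f}")
```

A breach of the chosen threshold would then feed into the escalation processes discussed later, rather than triggering automated model changes on its own.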

4. Challenges and Regulatory Requirements

Despite the numerous advantages, the EBA also emphasizes the challenges of using ML in credit risk management:

  • Explainability and Transparency: The results of ML models must be comprehensible to internal and external stakeholders. Methods such as Shapley values and LIME offer practical approaches here.
  • Governance and Control: The integration of ML requires robust governance structures to ensure that models are correctly applied and monitored.
  • Gradual Introduction: The EBA recommends a phased implementation of ML approaches, starting with challenger models, to minimize risks.
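To make the Shapley-value idea mentioned above concrete: for realistic models one would use an approximation library such as `shap`, but for a tiny model the values can be computed exactly by enumerating feature coalitions. The toy scorecard and baseline below are purely illustrative assumptions.

```python
# Sketch: exact Shapley values for a tiny model by enumerating feature
# coalitions. Production use relies on approximations (e.g. the `shap`
# package); this brute-force version only illustrates the principle.
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Attribute predict(x) - predict(baseline) to individual features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                # "Present" features take their actual value, absent ones
                # fall back to the baseline.
                with_i    = [x[j] if j in subset or j == i else baseline[j]
                             for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical toy scorecard: weighted sum of three risk drivers.
predict = lambda v: 0.4 * v[0] + 0.1 * v[1] + 0.5 * v[2]
x, baseline = [1.0, 2.0, 0.0], [0.0, 0.0, 1.0]
phi = shapley_values(predict, x, baseline)
# Efficiency property: contributions sum to the prediction difference.
assert abs(sum(phi) - (predict(x) - predict(baseline))) < 1e-9
```

The efficiency property checked at the end is what makes Shapley-based explanations attractive for supervisory dialogue: every model decision decomposes exactly into per-feature contributions.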

In summary, the EBA sees ML techniques not only as an opportunity to improve credit risk models but also as an instrument to sustainably increase the efficiency and accuracy of risk management processes. However, banks should ensure that the use of ML technologies is consistent with regulatory requirements, particularly regarding transparency, governance, and explainability. A structured and risk-based approach is essential here.

The BaFin Perspective: Principles for the Use of AI

BaFin’s principles paper on Big Data and AI outlines specific guidelines for the financial industry. The most important findings:

1. Management Responsibility

Management must develop a clear strategy for the use of AI. This includes:

  • IT competence in the leadership team
  • Risk analysis throughout the entire decision-making process
  • Establishment of a company-wide model risk management framework

2. Data Strategy and Governance

The quality and quantity of data are crucial for model quality. Companies must:

  • Implement procedures to ensure data quality
  • Establish mechanisms to avoid bias
  • Ensure data protection compliance in accordance with GDPR

3. Transparency and Validation

Algorithms must be reproducible, robust, and well-documented. Validation should be conducted regularly and independently to ensure model reliability.

4. Human Control

The principle of “human in the loop” remains central. Human experts must be involved in decision-critical processes to minimize risks.

Risk Models in the Context of the AI Act

With the AI Act, the European Union has created a clear regulatory framework for the use of AI. The phased introduction until 2027 brings comprehensive requirements for high-risk AI systems, which may include IRB models. The classification of risk models as high-risk AI systems under the AI Act presents banks with new regulatory requirements but simultaneously opens up opportunities for systematic, transparent, and secure model implementation.

Regulatory Requirements for Risk Models in the AI Act

The AI Act provides strict requirements for high-risk AI systems, which can be divided into the following key areas:

1. Quality Management and Documentation (Article 17)

  • Banks must establish an AI-specific quality management system that ensures models are robust, reliable, and auditable.
  • Detailed technical documentation of all risk models is required to make their structure, decision-making processes, and adjustments traceable.
  • Automated logging of model changes ensures transparency for internal and external auditors.
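Such automated logging can start very simply, for example as an append-only log in which each model change is recorded with a hash of its parameters, making silent modifications of a recorded version detectable. File name, fields, and hashing scheme below are illustrative assumptions.

```python
# Sketch: a minimal append-only audit log for model changes.
# File location, fields, and hashing scheme are illustrative assumptions.
import hashlib
import json
import os
import tempfile
from datetime import datetime, timezone

def log_model_change(log_path, model_id, params, reason):
    """Append one audit entry; the parameter hash allows later verification
    that a recorded model version has not been altered."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "param_hash": hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()).hexdigest(),
        "reason": reason,
    }
    with open(log_path, "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

log_path = os.path.join(tempfile.gettempdir(), "model_audit.jsonl")
entry = log_model_change(log_path, "pd_model_v2",
                         {"n_estimators": 300, "max_depth": 4},
                         "annual recalibration")
print(entry["param_hash"])
```

In a real setup this log would live in tamper-evident storage and be linked to the bank's change-management workflow rather than a local file.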

2. Data Quality and Bias Controls (Article 10)

  • Representative and error-free data are essential for model quality. Biased or incomplete datasets can lead to unreliable predictions and discriminatory results.
  • Banks must implement mechanisms to detect potential biases in training data early. This includes:
    • Statistical methods for bias detection
    • Regular review of data representativeness
    • Application of fairness metrics
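Two of the simplest fairness metrics named above can be computed directly from approval outcomes per protected group. The synthetic data and the four-fifths threshold below are illustrative assumptions, not legal standards.

```python
# Sketch: two simple bias checks on approvals across a protected group.
# Data and the 0.8 "four-fifths" threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 2_000)              # 0/1 protected attribute
# Hypothetical approval decisions, deliberately skewed against group 1.
approved = (rng.random(2_000) < np.where(group == 0, 0.70, 0.50)).astype(int)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()

# Demographic parity difference and disparate-impact ratio.
dp_diff = abs(rate_0 - rate_1)
di_ratio = min(rate_0, rate_1) / max(rate_0, rate_1)
print(f"approval rates: {rate_0:.2f} vs {rate_1:.2f}")
print(f"parity difference: {dp_diff:.2f}, impact ratio: {di_ratio:.2f}")
```

A finding like this would not automatically mean the model is discriminatory, but it would trigger the deeper review of data representativeness described above.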

3. Transparency and Explainability (Article 13)

  • High-risk AI systems used for credit risk assessment must be comprehensible.
  • Banks must make model decisions explainable – a central challenge with complex machine learning algorithms.
  • Methods such as SHAP values, LIME, or Partial Dependence Plots can be used to visualize the effect of individual factors on model decisions.
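Of the methods just listed, partial dependence is the easiest to compute by hand: sweep one feature over a grid while averaging the model's output over the portfolio. The toy data and model below are assumptions for illustration only.

```python
# Sketch: a hand-rolled partial-dependence computation showing the average
# effect of one feature on model output; toy data and model are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid):
    """Average model output while forcing one feature to each grid value."""
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value        # fix the feature across the portfolio
        pd_values.append(model.predict_proba(X_mod)[:, 1].mean())
    return np.array(pd_values)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pd_curve = partial_dependence(model, X, feature=0, grid=grid)
```

Plotting `pd_curve` against `grid` then visualizes the isolated effect of that factor on model decisions, which is exactly the kind of evidence Article 13 asks banks to be able to produce.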

4. Risk Management and Human Oversight (Article 9)

  • Continuous monitoring of model performance is mandatory to ensure that risk models continue to function reliably after implementation.
  • Clearly defined escalation processes must be established in case deviations or unexpected behaviors occur.
  • Human control remains an essential component of risk management. Automated decisions must not be made without human review.

In its discussion paper on the use of machine learning in IRB models, the European Banking Authority (EBA) emphasizes that AI-supported approaches must meet the same high requirements regarding governance, traceability, and validation as conventional modeling methods. In the follow-up report from August 2023, the EBA also examines the interactions between existing banking supervisory law and the proposed AI Act. It concludes that many of the requirements formulated in the AI Act are already covered by existing regulations, but suggests clarifying additions to avoid legal uncertainties and prevent unintended regulatory effects. For banks, this results in the task of integrating modern AI technologies into their existing model infrastructure in a compliant and seamless manner.

Conclusion: Seizing Opportunities, Mastering Challenges

The integration of AI into risk models offers banks both opportunities and challenges. The following aspects are particularly relevant:

  1. Structured Validation: AI-supported challenger models enable efficient review of existing risk models.
  2. Data Management: Integrated data governance is essential to avoid biases in training data.
  3. Risk Management: Automated monitoring complemented by human control mechanisms ensures robust risk management.

Clear documentation and explainability of models create trust with supervisory authorities and enable long-term integration of AI into banking processes.

Opportunities for Banks: Why Classification as a High-Risk AI System Also Has Advantages

Although the classification of risk models as high-risk AI systems initially means additional regulatory effort, it also offers banks clear advantages:

Legal Certainty: The clear requirements of the AI Act create a uniform regulatory framework that reduces uncertainties in model implementation.

Improved Model Quality: Through strict documentation and data quality requirements, banks benefit from more stable and reliable models.

Increased Acceptance by Regulatory Authorities: Transparent and explainable models facilitate communication with supervisory authorities and can accelerate the validation process.

Sustainable Integration of AI Technologies: Systematic compliance with AI Act requirements enables long-term viable use of machine learning in risk management processes.

How Banks Can Benefit from AI

Through targeted implementation of governance, monitoring, and explainability mechanisms, financial institutions can not only meet legal requirements but also improve the quality and efficiency of their risk models.

For the implementation of artificial intelligence and machine learning in banks, a three-phase strategy is recommended:

  1. ML as a Challenger Model in Validation – Early success with low risk
  2. Gradual Integration into Production Processes – Adaptation of governance structures
  3. AI as an Integral Part of Risk Management – Automated compliance & innovation

As a specialized management consultancy, we support banks in the regulatory-compliant implementation of ML models – from strategy development to sustainable integration into existing processes.

Would you like to learn more about AI-supported model validation? Contact us for a non-binding consultation.