
High-Risk AI · Annex III · EU AI Act
AI Act Compliance
for Credit Scoring Models

Credit scoring is classified as a high-risk AI use case under the EU AI Act (Annex III). StatDec's structured validation framework supports financial institutions in aligning their model validation and governance practices with evolving regulatory expectations.

Our approach builds on established model validation practices, extending them to provide assurance of:

  • Robust and well-performing models
  • Stability over time and across portfolio changes
  • Transparent and explainable model drivers
  • Consistent model behaviour across customer segments
  • Clear identification and assessment of differences in model outcomes

Annex III
High-Risk Classification
Aug 2026*
Compliance Deadline — see note
7
Validation Dimensions
3-in-1
AI Act · EBA · GDPR Art.22
Request a Readiness Assessment
Regulatory Context
Why credit scoring models are in scope

The EU AI Act classifies credit scoring systems under Annex III as high-risk AI systems — based on their purpose, not the underlying technology.

This means institutions must demonstrate that models are:

  • Appropriately governed throughout their lifecycle
  • Based on sound and representative data
  • Validated, monitored, and documented
  • Transparent and explainable in their decisioning

StatDec's framework addresses key requirements from:

  • EU AI Act (high-risk AI systems)
  • EBA/GL/2020/06 (loan origination and monitoring, including model governance)
  • GDPR Article 22 (automated decisioning & transparency)

Together, these support a consistent and efficient approach to model validation and governance.

Obligations
Key AI Act requirements for credit scoring models
Art. 9 Risk management across the model lifecycle
Art. 10 Data governance and representativeness
Art. 11 Technical documentation (Annex IV)
Art. 12 Record-keeping and event logging
Art. 13 Transparency and information for deployers
Art. 14 Human oversight mechanisms
Art. 15 Accuracy, robustness, and performance monitoring
⚠️ Prohibited practices (Art. 5)

These require careful assessment of variables and model design to avoid unintended reconstruction of protected characteristics.

Our Framework
A Structured Validation Framework Across the Model Lifecycle

StatDec's framework extends traditional model validation to assess how models behave in practice — across data, features, outputs, and decision outcomes.

01
Data Assessment

Evaluate training and validation datasets for representativeness, completeness, and potential sources of bias.

02
Feature Analysis

Review model inputs to ensure appropriate use, clear justification, and assessment of potential proxy effects.
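A proxy-effect screen can start with something as simple as a correlation check. The sketch below is purely illustrative and not StatDec's methodology: the data, the feature, and the threshold are assumptions. It flags a candidate input whose Pearson correlation with a protected-group indicator is high, which would warrant closer review; a real proxy analysis would also consider joint and non-linear effects.

```python
# Illustrative proxy check (hypothetical data): Pearson correlation
# between a candidate model input and a protected-group indicator.
import math

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

feature   = [12, 30, 7, 25, 3, 18]  # e.g. a tenure-style variable (made up)
protected = [0,  1,  0, 1,  0, 1]   # protected-group indicator (made up)

r = pearson(feature, protected)
# A high absolute correlation flags the variable for review; the cut-off
# here is an assumption, not a regulatory threshold.
flagged = abs(r) > 0.7
print(round(r, 3), flagged)
```

A check like this is only a first filter: an input can act as a proxy through combinations of variables even when each pairwise correlation is modest.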

03
Model Performance

Assess discriminatory power, calibration, and overall model performance.
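As a concrete, hypothetical illustration of discriminatory power, the sketch below computes AUC as a rank statistic (the probability that a randomly chosen defaulter receives a riskier score than a randomly chosen non-defaulter) and the Gini coefficient as 2*AUC - 1. The scores and default flags are toy data, not client figures.

```python
# Illustrative sketch, not StatDec tooling: AUC and Gini for a scoring
# model, from toy scores (higher = riskier) and observed default flags.

def auc(scores, defaults):
    """Share of (defaulter, non-defaulter) pairs ranked correctly by the
    score; ties count half."""
    pos = [s for s, d in zip(scores, defaults) if d == 1]  # defaulters
    neg = [s for s, d in zip(scores, defaults) if d == 0]  # non-defaulters
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores   = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]  # hypothetical risk scores
defaults = [1,   1,   0,    1,   0,   0]    # observed default flags

a = auc(scores, defaults)
gini = 2 * a - 1  # the Gini / accuracy ratio commonly reported alongside AUC
print(round(a, 3), round(gini, 3))
```

Calibration is assessed separately: a model can rank borrowers well (high AUC) while still over- or under-stating default probabilities in absolute terms.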

04
Consistency Across Segments

Evaluate whether model performance and risk estimation are consistent across customer segments.
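One way such a segment check can be framed, sketched here with entirely hypothetical segment labels and figures, is to compute the same performance statistic per segment and compare the spread against a tolerance.

```python
# Illustrative sketch: comparing a per-segment performance metric.
# The segment names, metric values, and tolerance are all assumptions.

segment_auc = {
    "existing_customers": 0.78,
    "new_to_bank":        0.71,
    "self_employed":      0.74,
}

spread = max(segment_auc.values()) - min(segment_auc.values())
tolerance = 0.05  # illustrative internal threshold, not a regulatory figure

consistent = spread <= tolerance
print(round(spread, 3), consistent)
```

A spread beyond tolerance does not by itself mean the model is defective; it identifies where deeper investigation of data coverage and risk estimation is needed.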

05
Outcome & Error Analysis

Analyse differences in approval rates, default rates, and error patterns across populations.
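The kind of comparison involved can be sketched with made-up data: approval rates, and default rates among approved applicants, computed per population and then compared. The groups, flags, and figures below are illustrative placeholders only.

```python
# Illustrative sketch with made-up data: outcome comparison across two
# populations. 'approved' and 'defaulted' are per-applicant 0/1 flags.

def outcome_rates(approved, defaulted):
    approval_rate = sum(approved) / len(approved)
    # Default outcomes are only observable among approved applicants.
    on_book = [d for a, d in zip(approved, defaulted) if a == 1]
    default_rate = sum(on_book) / len(on_book)
    return approval_rate, default_rate

group_a = outcome_rates([1, 1, 1, 0, 1, 0], [0, 0, 1, 0, 0, 0])
group_b = outcome_rates([1, 0, 0, 1, 0, 0], [0, 0, 0, 1, 0, 0])

# A gap in either rate between populations is a finding to investigate,
# not automatic evidence of a model defect.
approval_gap = group_a[0] - group_b[0]
print(group_a, group_b, round(approval_gap, 3))
```

Note the selection effect built into the second metric: because defaults are only observed for approved applicants, error analysis on declined populations requires additional techniques such as reject inference.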

06
Monitoring & Stability

Design monitoring approaches covering performance, stability, and behaviour over time.
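One widely used stability measure such monitoring can include is the Population Stability Index (PSI), which compares the score distribution at model development with the current portfolio. The sketch below uses hypothetical bucket shares; the 0.1 and 0.25 thresholds are a common industry rule of thumb, not a regulatory requirement.

```python
# Illustrative sketch: Population Stability Index over score bands.
import math

def psi(expected, actual):
    """PSI = sum over buckets of (actual - expected) * ln(actual / expected).
    Both inputs are bucket shares summing to 1; no bucket may be empty."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

dev_shares = [0.10, 0.20, 0.40, 0.20, 0.10]  # at development (hypothetical)
cur_shares = [0.08, 0.18, 0.38, 0.24, 0.12]  # current portfolio (hypothetical)

value = psi(dev_shares, cur_shares)
# Rule of thumb: < 0.1 stable, 0.1 to 0.25 monitor, > 0.25 significant shift.
print(round(value, 4))
```

In a monitoring framework the same statistic is typically tracked per reporting period and per key input variable, so that drift is caught at the driver level rather than only in the final score.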

07
Documentation & Governance

Produce structured documentation supporting transparency, audit readiness, and regulatory review.

What You Receive
Deliverables

Structured outputs designed for validation, governance, and regulatory review:

📋
AI Act Readiness Assessment

Gap analysis of your model inventory against the Act's high-risk obligations — scope, gaps, and priority actions.

📊
Model Behaviour & Consistency Analysis

Full validation across the 7 dimensions with documented metrics, findings, and recommendations per model.

📁
Technical Documentation (Annex IV)

Structured documentation aligned with Annex IV expectations, suitable for regulatory review.

🔍
Feature Review & Proxy Risk Assessment

Systematic review of model inputs for appropriateness, justification, and potential proxy effects.

⚙️
Human Oversight Framework Design

Design of oversight mechanisms aligned with Art. 14 requirements and operational workflows.

📅
Monitoring & Validation Framework

Ongoing monitoring plan covering performance, stability, and outcome review over time.

Key Dates
Compliance Timeline
August 2024 In Force

AI Act entered into force. The 24-month clock for high-risk AI obligations started.

February 2025 Active Now

Prohibited practices (Art. 5) fully applicable, with implications for how proxy variables are used. Already enforceable.

August 2026* Key Deadline

Full high-risk AI obligations apply. Credit scoring models must be fully compliant — Annex IV documentation, validation, human oversight, and monitoring frameworks all required. * See regulatory note below.

August 2027 GPAI Rules

General-purpose AI model obligations have applied to new models since August 2025; GPAI models already on the market before that date must comply by August 2027. Relevant where LLMs or foundation models are used in any part of the credit decisioning process.

💡 Planning consideration

Given typical model validation cycles of 3–6 months, institutions planning for the August 2026 deadline should consider initiating their assessment in Q1 2026 at the latest. A gap assessment now will clarify the scope and sequencing of work required.

Discuss your model validation requirements

Talk to StatDec about your credit model inventory. We can help assess scope, identify gaps, and design an appropriate validation and governance approach.

Get in Touch
🕒 Regulatory Notice — Implementation Timeline & Scope
* TIMELINE
Compliance deadline under review

The current statutory deadline for high-risk AI system obligations is 2 August 2026. The European Commission's Digital Omnibus package (November 2025), subsequently adopted by the European Parliament, proposes to extend this to 2 December 2027 for Annex III high-risk systems including credit scoring. This proposal requires approval by the Council of the EU and is not yet law. The August 2026 deadline remains technically in force until the amended regulation is published. This page will be updated as the legislative process concludes.

SCOPE
Models already live are not automatically exempt

Under Article 111(2), high-risk AI systems placed on the market before the compliance deadline are not immediately subject to the Act's obligations — unless and until they undergo a substantial modification. For credit scoring models, this includes redevelopment, retraining on new data, addition of new input variables, expansion to a new portfolio or population, and characteristic realignment. Any such change triggers full compliance obligations from that point.

In practice, credit scoring models are routinely modified as part of normal lifecycle management. Institutions should not assume that a model currently in production will remain outside the Act's scope indefinitely. Early preparation reduces the risk of a compliance gap arising at the point of the next model change.