EU AI Act Conformity Assessment Guide for High-Risk AI Systems
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) requires providers of high-risk AI systems to complete a rigorous conformity assessment before placing their products on the EU market. This guide explains the step-by-step process to assess and document compliance, including risk classification, technical documentation, conformity assessment procedures, CE marking, and post-market obligations. Following it will help you avoid penalties of up to €15 million or 3% of global annual turnover for non-compliance (Article 99(4)).
This guide applies to all companies developing or deploying AI systems classified as high-risk under Annex III of the AI Act, including AI used in critical infrastructure, education, employment, law enforcement, migration, and democratic processes. It also covers AI systems that are safety components of products subject to the existing EU harmonisation legislation listed in Annex I.
1. AI Risk Classification
Classifying your AI system correctly is the foundational step in conformity assessment. The AI Act defines four risk categories:
- Unacceptable risk (Prohibited AI practices): Includes social scoring by public authorities, real-time remote biometric identification in public spaces (with limited exceptions), and manipulation of vulnerable groups. These are banned from 2 February 2025.
- High-risk AI systems: Systems listed in Annex III, such as AI in critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice, and democratic processes.
- Limited risk AI: Systems such as chatbots and deepfake generators, which are subject only to transparency obligations (e.g., disclosing that content is AI-generated).
- Minimal risk AI: Systems such as spam filters and AI in video games with no specific obligations.
Only AI systems classified as high-risk are subject to the full conformity assessment and CE marking requirements detailed below.
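The four-tier structure above can be expressed as a simple lookup. The following Python sketch is purely illustrative: the use-case names and the mapping are assumptions for demonstration, and real classification requires legal analysis against Article 5 and Annex III.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"


# Illustrative examples only; not an authoritative classification.
EXAMPLE_TIERS = {
    "social_scoring_by_public_authority": RiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}


def requires_conformity_assessment(use_case: str) -> bool:
    """Only high-risk systems need the full Article 43 conformity assessment."""
    return EXAMPLE_TIERS.get(use_case) is RiskTier.HIGH
```

A recruitment-screening tool would return `True` here, while a spam filter would not; borderline cases should always go to counsel rather than a lookup table.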
2. Conformity Assessment Procedure for High-Risk AI Systems
The conformity assessment process is outlined in Article 43 of the AI Act and consists of the following mandatory steps:
- Classify the AI system against Annex III and check whether it is embedded in a product covered by the harmonisation legislation listed in Annex I.
- Implement a Quality Management System (QMS) covering design, development, testing, and post-market monitoring activities.
- Prepare comprehensive technical documentation as per Annex IV, including:
- General description of the AI system
- Design specifications and architecture
- Training, validation, and testing data descriptions
- Performance metrics and risk management documentation
- Conduct the conformity assessment:
- For most high-risk AI systems, a self-assessment based on internal control (Annex VI) suffices.
- For biometric systems listed in Annex III, point 1, a third-party notified body must perform the assessment (Annex VII) unless harmonised standards or common specifications have been applied in full.
- Register the AI system in the EU database for high-risk AI systems maintained by the European Commission (Article 71).
- Affix the CE marking and issue the EU Declaration of Conformity.
- Implement post-market monitoring to collect and analyse real-world performance data continuously.
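The route decision in the assessment step can be sketched as a small decision function. This is a simplified reading of Article 43(1), assuming the only notified-body trigger is an Annex III, point 1 biometric system without fully applied harmonised standards; the parameter names are illustrative, not terms from the Act.

```python
def assessment_route(is_annex_iii_point_1_biometrics: bool,
                     harmonised_standards_fully_applied: bool) -> str:
    """Choose the conformity assessment route per Article 43(1), simplified.

    Biometric systems (Annex III, point 1) may rely on internal control only
    when harmonised standards or common specifications are applied in full;
    otherwise a notified body must assess the system. Other Annex III
    systems follow the internal control procedure.
    """
    if is_annex_iii_point_1_biometrics and not harmonised_standards_fully_applied:
        return "notified body (Annex VII)"
    return "internal control (Annex VI)"
```

A biometric identification system with only partial standards coverage would be routed to a notified body; everything else in Annex III stays on the internal-control path.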
3. Deadlines and Enforcement Timeline
The AI Act’s phased implementation schedule is critical for compliance planning:
| Obligation | Effective Date | Details |
|---|---|---|
| Prohibited AI practices ban | 2 February 2025 | All AI systems posing unacceptable risk are banned from the EU market. |
| Obligations for General Purpose AI (GPAI) models | 2 August 2025 | Specific transparency and risk management rules apply. |
| Obligations for high-risk AI systems | 2 August 2026 | Full conformity assessment and CE marking requirements apply. |
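For compliance planning, the dates in the timeline can be tracked programmatically. A minimal sketch using Python's standard `datetime` module; the dictionary keys are illustrative labels, not terms from the Act.

```python
from datetime import date

# Key application dates from the phased timeline above.
DEADLINES = {
    "prohibited_practices_ban": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_obligations": date(2026, 8, 2),
}


def days_remaining(obligation: str, today: date) -> int:
    """Days until an obligation applies; negative once it is in force."""
    return (DEADLINES[obligation] - today).days
```

Feeding in the current date gives a simple countdown that can drive internal compliance dashboards or reminders.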
4. Penalties for Non-Compliance
The AI Act enforces strict penalties to ensure compliance:
| Violation Type | Maximum Penalty | Reference |
|---|---|---|
| Use or placing on the market of prohibited AI practices | Up to €35 million or 7% of global annual turnover, whichever is higher | Article 99(3) |
| Other violations (e.g., failure to conduct conformity assessment) | Up to €15 million or 3% of global annual turnover, whichever is higher | Article 99(4) |
Penalties are enforced by national market surveillance authorities, with EU-level coordination by the European Artificial Intelligence Board; the AI Office supervises general-purpose AI models.
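The "whichever is higher" rule in the table above means the exposure of large companies scales with turnover. A minimal sketch of that arithmetic (illustrative helper, not part of the Act):

```python
def max_fine(turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the fine under Article 99: the fixed cap or a
    percentage of global annual turnover, whichever is higher."""
    cap, pct = (35_000_000, 7) if prohibited_practice else (15_000_000, 3)
    return max(cap, turnover_eur * pct / 100)
```

For a company with €1 billion in global turnover, a prohibited-practice violation is capped by the 7% prong (€70 million), not the €35 million figure.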
5. Practical Compliance Checklist for High-Risk AI Providers
Use this checklist to ensure your AI system meets all conformity assessment requirements under the EU AI Act:
- Confirm AI system classification against Annex III (high-risk areas) and Annex I (harmonisation legislation).
- Establish and document a Quality Management System (QMS) covering all lifecycle phases.
- Compile technical documentation per Annex IV specifications.
- Determine conformity assessment route (self-assessment or notified body).
- Complete conformity assessment and retain evidence.
- Register the AI system in the EU database before placing it on the market or putting it into service.
- Affix CE marking and issue EU Declaration of Conformity.
- Implement continuous post-market monitoring and update documentation accordingly.
- Train relevant personnel on compliance obligations and reporting procedures.
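Teams tracking the checklist above may want a simple completion gate before launch. The following sketch is an illustrative tracker, with item names invented for this example; it is not an official compliance artefact.

```python
from dataclasses import dataclass, field


@dataclass
class ComplianceChecklist:
    """Minimal tracker mirroring the checklist items above (illustrative)."""
    items: dict = field(default_factory=lambda: {
        "classification_confirmed": False,
        "qms_documented": False,
        "technical_documentation_compiled": False,
        "assessment_route_determined": False,
        "conformity_assessment_completed": False,
        "eu_database_registration": False,
        "ce_marking_and_declaration": False,
        "post_market_monitoring_in_place": False,
        "personnel_trained": False,
    })

    def complete(self, item: str) -> None:
        """Mark a known checklist item as done; reject unknown items."""
        if item not in self.items:
            raise KeyError(item)
        self.items[item] = True

    def ready_for_market(self) -> bool:
        """True only when every checklist item has been completed."""
        return all(self.items.values())
```

Gating a release on `ready_for_market()` makes it harder to ship a high-risk system with, say, the EU database registration still pending.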
Key facts: Regulation (EU) 2024/1689 entered into force on 1 August 2024, with high-risk AI conformity obligations applying from 2 August 2026. Penalties for prohibited AI practices can reach €35 million or 7% of global annual turnover (Article 99).
Frequently Asked Questions about EU AI Act Conformity Assessment
Q1: Does the EU AI Act apply to AI systems developed outside the EU?
Yes. The AI Act applies to any AI system placed on the EU market or used within the EU, regardless of where it was developed. Non-EU providers must comply with conformity assessment and registration requirements to sell or deploy high-risk AI in the EU.
Q2: What defines a high-risk AI system under the EU AI Act?
High-risk AI systems are those listed in Annex III, including AI used in critical infrastructure, education, employment, law enforcement, migration, and democratic processes. AI systems that are safety components of products covered by the harmonisation legislation in Annex I are also classified as high-risk.
Q3: Is self-assessment sufficient for all high-risk AI systems?
No. While most high-risk AI systems can undergo a self-assessment based on internal control, biometric systems listed in Annex III, point 1 require assessment by a third-party notified body unless harmonised standards or common specifications have been applied in full.
Q4: What documentation must be prepared for conformity assessment?
Technical documentation must comply with Annex IV and include a general description, design and development details, training and testing data, performance metrics, and risk management reports.
Q5: When must the CE marking be affixed to high-risk AI systems?
The CE marking may be affixed only after the conformity assessment has been successfully completed and the system has been registered in the EU database; these obligations apply from 2 August 2026.
Q6: What are the consequences of non-compliance with the EU AI Act?
Non-compliance can lead to fines up to €15 million or 3% of global turnover for general violations, and up to €35 million or 7% of global turnover for prohibited AI practices, along with market withdrawal and reputational damage.
Start Your EU AI Act Conformity Assessment Now
Use our dedicated AI Act Conformity Assessment Tool to classify your AI system, generate required technical documentation templates, and guide you through the CE marking process. The tool provides step-by-step instructions, compliance checklists, and automated reporting to ensure you meet all Regulation (EU) 2024/1689 obligations before the 2 August 2026 deadline.