EU Artificial Intelligence Act (EU AI Act) Compliance Guide
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the first comprehensive legal framework specifically targeting AI technologies in the European Union. Approved by the European Parliament on 13 March 2024 and by the Council on 21 May 2024, it was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024, with most provisions applying from 2 August 2026. The regulation aims to ensure that AI systems placed on the EU market are safe, transparent, and respect fundamental rights as enshrined in the EU Charter of Fundamental Rights.
This guide provides a detailed overview of the regulation’s scope, key definitions, obligations for stakeholders, compliance timelines, penalties for non-compliance, and practical steps to achieve conformity with the EU AI Act.
Legal Basis and Scope
The legal basis for the EU AI Act is Article 114 of the Treaty on the Functioning of the European Union (TFEU), which allows harmonisation measures for the internal market, complemented by Article 16 TFEU on the protection of personal data. The regulation applies to:
- Providers placing AI systems on the EU market or putting them into service, regardless of whether they are established in the EU or third countries;
- Deployers of AI systems within the EU (earlier drafts used the term "users");
- Importers and distributors of AI systems;
- AI systems used or intended to be used in the EU, including those developed outside the EU but deployed within the EU market.
It covers AI systems as defined in Article 3(1) of the regulation: machine-based systems designed to operate with varying levels of autonomy that infer from their inputs how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments.
Key Definitions from Regulation (EU) 2024/1689
| Term | Definition (Plain English) | Article Reference |
|---|---|---|
| Artificial Intelligence System (AI system) | A machine-based system designed to operate with varying levels of autonomy that infers from its inputs how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments. | Article 3(1) |
| Provider | Any natural or legal person who develops an AI system (or has one developed) and places it on the EU market or puts it into service under their own name or trademark. | Article 3(3) |
| Deployer | Any natural or legal person using an AI system under their authority, except where the system is used in a personal, non-professional activity. Earlier drafts used the term "user". | Article 3(4) |
| High-Risk AI System | AI systems that pose significant risks to health, safety, or fundamental rights, subject to strict compliance requirements. | Article 6 |
| Conformity Assessment | Process to verify that an AI system meets the requirements of the regulation before being placed on the market. | Article 43 |
| Post-Market Monitoring | Ongoing surveillance of AI systems after deployment to detect and mitigate risks. | Article 72 |
Obligations Under the EU AI Act
The regulation imposes a risk-based approach with obligations differentiated by the AI system’s risk category:
1. Prohibited AI Practices
Article 5 prohibits AI systems that:
- Deploy subliminal techniques to materially distort behaviour causing harm;
- Exploit vulnerabilities of specific groups (e.g., children, disabled persons) to cause harm;
- Use real-time remote biometric identification in publicly accessible spaces for law enforcement, except in narrowly defined cases;
- Carry out social scoring that leads to detrimental or disproportionate treatment (the final text covers public and private actors alike).
Article 5 also prohibits further practices, including untargeted scraping of facial images to build recognition databases and emotion inference in workplaces and educational institutions (with narrow exceptions).
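As an illustration, an internal intake questionnaire can flag use cases that may touch Article 5 before legal review. The category keys and descriptions below are our own paraphrases for a hypothetical screening tool, not the Regulation's wording, and a real assessment always requires legal analysis of the actual system.

```python
# Illustrative Article 5 pre-screening: flags use cases for legal review.
# Keys and descriptions are paraphrases, not authoritative legal text.
ARTICLE_5_FLAGS = {
    "subliminal_manipulation": "Subliminal or manipulative techniques causing harm",
    "exploits_vulnerabilities": "Exploitation of vulnerabilities of specific groups",
    "social_scoring": "Social scoring leading to detrimental treatment",
    "realtime_biometric_id_le": "Real-time remote biometric ID in public spaces for law enforcement",
}

def screen_use_case(attributes: set[str]) -> list[str]:
    """Return the Article 5 categories a use case may touch, for legal review."""
    return [desc for key, desc in ARTICLE_5_FLAGS.items() if key in attributes]

# A recommendation engine with a social-scoring component gets flagged once:
hits = screen_use_case({"social_scoring", "recommendation_engine"})
```

Such a screen can only surface candidates; a match means "escalate to counsel", never an automated legal conclusion.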
2. High-Risk AI Systems
High-risk AI systems, listed in Annex III, include AI used in:
- Critical infrastructure (e.g., transport, energy);
- Education and vocational training affecting access to education or professional advancement;
- Employment, workers management, and access to self-employment;
- Essential private and public services (e.g., credit scoring, social benefits);
- Law enforcement and migration;
- Administration of justice and democratic processes.
Providers of high-risk AI systems must:
- Conduct a conformity assessment before placing the system on the market;
- Implement a risk management system to identify and mitigate risks;
- Ensure high levels of data quality and documentation;
- Maintain transparency and provide clear instructions to users;
- Establish a post-market monitoring system;
- Register the AI system in the EU database managed by the European Commission.
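The provider obligations above lend themselves to a simple internal tracking structure. The sketch below is a hypothetical checklist; the field names are our own shorthand for the obligations listed, not terms defined in the Regulation.

```python
from dataclasses import dataclass, fields

# Hypothetical compliance checklist mirroring the provider obligations above.
# Field names are our own labels, not statutory terms.
@dataclass
class HighRiskProviderChecklist:
    conformity_assessment_done: bool = False
    risk_management_system: bool = False
    data_governance_documented: bool = False
    instructions_for_use_provided: bool = False
    post_market_monitoring_plan: bool = False
    registered_in_eu_database: bool = False

    def outstanding(self) -> list[str]:
        """Names of obligations not yet completed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def ready_for_market(self) -> bool:
        """All obligations must be met before market placement."""
        return not self.outstanding()
```

A fresh checklist reports every obligation as outstanding, which matches the Act's logic: a high-risk system cannot be placed on the market until all items are closed.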
3. Limited Risk AI Systems
These systems are subject to transparency obligations under Article 50, such as informing people that they are interacting with an AI system (e.g., chatbots), labelling AI-generated or manipulated content ("deepfakes"), and notifying individuals when emotion recognition is used.
4. Minimal Risk AI Systems
These systems are largely unregulated under the EU AI Act, allowing free use without additional requirements (e.g., AI-enabled video games or spam filters).
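The four-tier structure above can be sketched as a lookup from a self-declared use-case label to a risk tier. The labels below are illustrative examples drawn from this guide; real classification requires checking Article 5, Article 6, and Annex III against the actual system, not a keyword.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified, illustrative mapping of use-case labels to risk tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "credit_scoring": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Unknown use cases default to MINIMAL here purely for illustration;
    # in practice each one needs an individual legal assessment.
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

The deliberately unsafe default (unknown means minimal) is exactly what a production tool should avoid; it is shown here to make the point that tier assignment is a legal judgment, not a dictionary lookup.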
Compliance Timeline
| Date | Milestone | Details |
|---|---|---|
| 12 July 2024 | Publication in Official Journal | Regulation (EU) 2024/1689 published (OJ L, 2024/1689). |
| 1 August 2024 | Entry into force | The regulation enters into force twenty days after publication. |
| 2 February 2025 | Prohibitions apply | Article 5 bans on unacceptable-risk practices, plus AI literacy obligations, become applicable. |
| 2 August 2025 | Governance and GPAI rules apply | Obligations for general-purpose AI models, governance structures, and penalty provisions take effect. |
| 2 August 2026 | General application | Most remaining provisions apply, including the Annex III high-risk requirements. |
| 2 August 2027 | Annex I high-risk systems | Extended transition ends for high-risk AI embedded in regulated products. |
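For deployment planning it can help to compute how much runway remains before a given milestone. The sketch below uses the Act's entry into force (1 August 2024) and its general application date (2 August 2026); the function itself is a generic date helper.

```python
from datetime import date

# Key dates under the EU AI Act.
ENTRY_INTO_FORCE = date(2024, 8, 1)
GENERAL_APPLICATION = date(2026, 8, 2)

def days_remaining(milestone: date, today: date) -> int:
    """Days from `today` until `milestone` (negative if already passed)."""
    return (milestone - today).days

# Full transition window from entry into force to general application:
window = days_remaining(GENERAL_APPLICATION, ENTRY_INTO_FORCE)  # 731 days
```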
Penalties and Enforcement
Non-compliance with the EU AI Act can lead to significant penalties enforced by national market surveillance authorities. Article 99 establishes a tiered penalty system based on the nature and severity of the infringement; within each tier, the applicable maximum is whichever amount is higher (for SMEs and start-ups, whichever is lower):

| Type of Infringement | Maximum Fine | Additional Enforcement Measures |
|---|---|---|
| Non-compliance with the prohibited practices in Article 5 | Up to €35 million or 7% of worldwide annual turnover | Market withdrawal, bans on placing the AI system on the market |
| Non-compliance with most other obligations (e.g., of providers, deployers, importers, distributors, or notified bodies) | Up to €15 million or 3% of worldwide annual turnover | Corrective actions, suspension of sales |
| Supplying incorrect, incomplete, or misleading information to notified bodies or competent authorities | Up to €7.5 million or 1% of worldwide annual turnover | Orders to provide information, corrective measures |
Enforcement is carried out by national competent authorities designated by each Member State, coordinated at EU level by the European Artificial Intelligence Board and, for general-purpose AI models, by the Commission's AI Office.
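The "whichever is higher" rule in the penalty tiers is simple arithmetic: the ceiling is the greater of the fixed cap and the turnover percentage. A minimal sketch, using the Article 99 figures for prohibited-practice breaches as the example inputs:

```python
# "Whichever is higher" fine ceiling, as used by the Act's penalty tiers.
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the greater of the fixed cap
    and the given share of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Prohibited-practice tier (€35M or 7%) for a company with €1bn turnover:
ceiling = max_fine(35_000_000, 0.07, 1_000_000_000)  # 70,000,000
```

For large companies the turnover-based figure usually dominates; below €500 million turnover, the fixed €35 million cap in this tier is the binding ceiling.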
Key Articles in Plain English
Article 3 – Definitions
This article defines all key terms used throughout the regulation, including what constitutes an AI system, provider, user, high-risk AI system, and other relevant concepts.
Article 5 – Prohibited AI Practices
Specifies AI practices that are banned outright due to unacceptable risks to safety or fundamental rights.
Article 6 – Classification of High-Risk AI Systems
Lists categories of AI systems considered high-risk and subject to strict requirements.
Article 16 – Obligations of Providers of High-Risk AI Systems
Details the responsibilities of providers, including risk management, data governance (set out in Article 10), technical documentation, and transparency.
Article 43 – Conformity Assessment
Describes the procedures providers must follow to verify their AI systems comply with the regulation before market placement.
Article 72 – Post-Market Monitoring
Requires providers to continuously monitor AI systems after deployment to identify and mitigate emerging risks.
Article 99 – Penalties
Sets out the fines and enforcement mechanisms for breaches of the regulation.
How to Achieve Compliance
Compliance with the EU AI Act requires a structured approach:
- Identify whether your AI system falls under the regulation’s scope and risk categories.
- Classify the AI system as high-risk, limited risk, or minimal risk according to Annex III and Article 6.
- Conduct a conformity assessment for high-risk AI systems, including risk management and technical documentation.
- Register high-risk AI systems in the EU AI system database before placing them on the market.
- Implement transparency measures for limited risk AI systems, such as user notifications.
- Establish a post-market monitoring plan to detect and address risks during the AI system’s lifecycle.
- Prepare for audits and inspections by national authorities by maintaining thorough documentation.
Failure to comply can result in severe penalties and market restrictions, making early and thorough compliance essential.
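The steps above can be condensed into a per-tier action plan. The action names below are our own shorthand for the obligations described in this guide, not statutory terms; prohibited systems have no compliance path, so they are intentionally absent from the mapping.

```python
# Sketch: map a risk tier (as a plain string) to the compliance actions
# outlined above. Action names are informal shorthand, not statutory terms.
REQUIRED_ACTIONS = {
    "high": [
        "conformity_assessment",
        "risk_management_system",
        "technical_documentation",
        "eu_database_registration",
        "post_market_monitoring",
    ],
    "limited": ["transparency_notice"],
    "minimal": [],
}

def compliance_plan(risk_tier: str) -> list[str]:
    """Return the action list for a tier; unknown or prohibited tiers raise."""
    try:
        return REQUIRED_ACTIONS[risk_tier]
    except KeyError:
        raise ValueError(f"No compliance path for tier: {risk_tier!r}")
```

Raising on unknown tiers (including "prohibited") mirrors the regulation's structure: there is no checklist that legalises an Article 5 practice.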
Regulation (EU) 2024/1689 was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024; prohibitions apply from 2 February 2025 and most other provisions from 2 August 2026. Penalties for non-compliance reach up to €35 million or 7% of worldwide annual turnover under Article 99.
Frequently Asked Questions
What types of AI systems are considered high-risk under the EU AI Act?
High-risk AI systems are those listed in Annex III of Regulation (EU) 2024/1689, including AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice systems. These systems require strict compliance measures such as conformity assessments and registration.
Does the EU AI Act apply to AI systems developed outside the EU?
Yes. The regulation applies to any AI system placed on the EU market or used within the EU, regardless of where it was developed. Providers outside the EU must appoint an authorised representative within the EU to ensure compliance.
What are the main obligations for providers of high-risk AI systems?
Providers must perform a conformity assessment, implement risk management systems, ensure data quality, maintain technical documentation, provide transparency to users, register the AI system in the EU database, and conduct post-market monitoring.
When does the EU AI Act become enforceable?
The regulation entered into force on 1 August 2024. Prohibitions on unacceptable-risk practices apply from 2 February 2025, obligations for general-purpose AI models from 2 August 2025, and most remaining provisions, including the high-risk requirements, from 2 August 2026 (2 August 2027 for high-risk AI embedded in regulated products). Providers should begin compliance preparations immediately to meet these deadlines.
What penalties can be imposed for non-compliance?
Penalties range from fines of up to €7.5 million or 1% of worldwide annual turnover for supplying incorrect information, up to €15 million or 3% for most other infringements, and up to €35 million or 7% for prohibited AI practices, in each case whichever amount is higher. Additional measures include market withdrawal and sales bans.
Are there any AI practices prohibited outright by the regulation?
Yes. Article 5 bans AI systems that manipulate behaviour through subliminal or deceptive techniques, exploit the vulnerabilities of specific groups, carry out social scoring leading to detrimental treatment, or perform real-time remote biometric identification in publicly accessible spaces for law enforcement outside narrowly defined exceptions.
How can I start the compliance process for my AI system?
Begin by classifying your AI system’s risk level, then conduct a conformity assessment if it is high-risk. Register the system in the EU database and implement the required transparency and monitoring measures, maintaining thorough documentation throughout.