
February 18, 2026 • 12 min read
What is the EU AI Act?
The EU AI Act establishes a detailed legal framework to ensure that AI systems deployed within the EU are safe, ethically sound, and respect fundamental rights such as privacy, non-discrimination, and individual autonomy. The framework is intentionally comprehensive: EU AI Act compliance requires addressing not only the technical aspects of AI but also its ethical and societal implications. The EU AI Act is the world’s first comprehensive AI law, and it sets stringent standards for AI systems.
1. Creation of a risk-based approach
- The Act introduces a risk-based categorization of AI systems to regulate them based on their potential impact on individuals and society.
- This approach ensures that high-risk AI applications are subject to rigorous scrutiny, while lower-risk applications can thrive with minimal regulatory burden, fostering innovation and trust in AI.
- The AI Act risk categories are: Unacceptable Risk (Prohibited), High Risk, Limited Risk, and Minimal Risk.
2. Strict and significant penalties for non-compliance
- The high financial penalties underscore the importance of compliance.
- For the most serious breaches, such as the use of prohibited AI practices, the maximum penalty is the greater of €35 million or 7% of the company’s worldwide annual turnover.
- For less severe violations, such as failing to meet transparency or documentation requirements for high-risk AI systems, fines can reach up to €15 million or 3% of the total worldwide annual turnover, whichever is higher.
3. Creation of distinct stakeholder groups with distinct responsibilities
- Providers
- Organizations that develop, supply, or market AI systems or models under their own brand.
- Responsible for ensuring that AI systems comply with the EU AI Act’s regulations, including safety, transparency, and documentation requirements.
- Required to ensure a sufficient level of AI literacy among their own staff.
- Deployers
- Entities that use AI systems within the EU for various applications.
- Must adhere to transparency obligations, manage AI-related risks, and ensure that users are informed when interacting with AI systems.
- Required to ensure a sufficient level of AI literacy among their own staff.
- Importers
- Organizations established in the EU that bring AI systems into the EU market from non-EU providers.
- Ensure that imported AI systems comply with the EU AI Act, verify provider obligations, maintain conformity documentation, and ensure proper labeling and instructions.
- Distributors
- Entities in the supply chain that make AI systems available on the EU market, excluding providers and importers.
- Verify the conformity of AI systems with EU regulations, maintain necessary documentation, keep records of AI systems, and report non-compliance or risks to national authorities.
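Taken together, the risk tiers and the stakeholder roles above determine which obligations apply to a given AI system. As a minimal, purely illustrative sketch (not legal guidance), the tier-to-obligation mapping might be recorded like this; the tier names come from the Act, while the function name and the obligation summaries are assumptions made for illustration:

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk categories."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # full compliance obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # voluntary codes of conduct only

# Rough summary of what each tier triggers (detailed in the sections below).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Banned from the EU market.",
    RiskTier.HIGH: ("Risk management, data governance, technical documentation, "
                    "human oversight, conformity assessment, and registration."),
    RiskTier.LIMITED: "Disclose to users that they are interacting with an AI system.",
    RiskTier.MINIMAL: "Voluntary codes of conduct; no mandatory requirements.",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```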

EU AI Act scope: Who does the AI Act apply to?
Under the EU AI Act, compliance is required for the following entities:
- Providers: Entities that develop AI systems or models and place them on the EU market under their own name or brand.
- Deployers (users): Entities using AI systems in the EU, particularly those utilizing high-risk AI systems.
- Importers: Entities that import AI systems into the EU market.
- Distributors: Entities that distribute or sell AI systems within the EU.
- Third-Party Evaluators (notified bodies): Entities that conduct conformity assessments of AI systems.
- Non-EU Providers: Providers established outside the EU if their AI systems are used in the EU market or affect people within the EU.
These stakeholders must adhere to the regulatory requirements based on the risk level of the AI systems they handle.

EU AI Act compliance requirements
The EU AI Act sets forth specific compliance requirements for AI systems based on their risk classification. Here’s a summary of the key compliance requirements for each risk category:
EU AI Act risk categories
Starting on February 2, 2025, providers and deployers of AI systems (no matter the risk level) must ensure that their staff or others using an AI system on their behalf have a sufficient level of AI literacy given their roles, technical knowledge, experience, education, training, the context in which the AI systems are used, and the subjects of that use. Although the exact nature of this requirement varies widely across organizations, the European AI Office has published the “Living Repository of AI Literacy Practices,” featuring examples from over a dozen organizations to “encourage learning and exchange.”
Unacceptable risk
Starting on February 2, 2025, AI systems in this category are banned from being placed on the market or put into service within the EU. Examples of prohibited practices include social scoring, biometric categorization of individuals based on sensitive characteristics, and cognitive or behavioral manipulation that exploits vulnerabilities such as age or disability.
High risk
- Risk Management: Implement a risk management system to identify, analyze, and mitigate risks throughout the AI system’s lifecycle.
- Data and Data Governance: Ensure that training, validation, and testing datasets are high-quality, relevant, representative, and free of biases.
- Technical Documentation: Maintain detailed documentation that provides information on the AI system, including its design, development, and operation.
- Transparency and Information Provision: Provide users with clear and understandable information about the AI system’s capabilities, limitations, and appropriate use.
- Human Oversight: Implement appropriate measures to enable effective human oversight to prevent or minimize risks.
- Accuracy, Robustness, and Cybersecurity: Ensure high levels of accuracy, robustness, and cybersecurity for the AI system.
- Conformity Assessment: Conduct conformity assessments before placing the AI system on the market, involving third-party evaluations for some systems.
- Registration: Register high-risk AI systems in the EU’s dedicated database before deployment.
Limited risk
Transparency Obligations: Ensure that users know they are interacting with an AI system (e.g., chatbots must disclose they are AI).
Minimal risk
Voluntary Codes of Conduct: Entities are encouraged to adhere to voluntary codes of conduct to promote best practices, though there are no mandatory requirements.
The high-risk category under the EU AI Act imposes the most compliance requirements on organizations developing, deploying, importing, or distributing AI in the EU. These requirements aim to ensure that AI systems deployed in the EU are safe, transparent, and respect fundamental rights.
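As a purely illustrative sketch of how a team might track the high-risk obligations internally, consider a simple checklist. The field names below are assumptions chosen for readability, not terms defined by the Act:

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    """Illustrative tracker for the high-risk obligations listed above (hypothetical field names)."""
    risk_management_system: bool = False        # lifecycle-wide risk identification and mitigation
    data_governance: bool = False               # quality and representativeness of datasets
    technical_documentation: bool = False       # design, development, and operation records
    transparency_information: bool = False      # clear user-facing capability/limitation info
    human_oversight: bool = False               # effective oversight measures in place
    accuracy_robustness_cybersecurity: bool = False
    conformity_assessment: bool = False         # completed before placing on the market
    eu_database_registration: bool = False      # registered before deployment

def open_gaps(checklist: HighRiskChecklist) -> list[str]:
    """Return the obligations that still lack evidence."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]

# Example: an assessment that has covered documentation and oversight so far.
status = HighRiskChecklist(technical_documentation=True, human_oversight=True)
print(open_gaps(status))
```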
EU AI Act penalties and fines
The EU AI Act outlines significant penalties for non-compliance to ensure adherence to its regulations. The penalties are designed to be stringent to deter violations and ensure accountability. Here are the key penalties:
1. For placing on the market, putting into service, or using AI systems that pose an unacceptable risk: Fines up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
2. For non-compliance with the requirements related to high-risk and limited-risk AI systems, such as data quality, technical documentation, transparency obligations, human oversight, and robustness: Fines up to €15 million or 3% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
3. For providing incorrect, incomplete, or misleading information to notified bodies and national competent authorities: Fines up to €7.5 million or 1% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
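Because each tier is expressed as "whichever is higher," the effective ceiling scales with company size. The sketch below is illustrative arithmetic only; the tier labels are informal names rather than terms from the Act, and actual fines are set by regulators case by case.

```python
# Illustrative arithmetic only: fine ceilings for the three tiers described above.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # €35M or 7% of worldwide turnover
    "high_risk_obligations": (15_000_000, 0.03),   # €15M or 3%
    "misleading_information": (7_500_000, 0.01),   # €7.5M or 1%
}

def max_fine_eur(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the ceiling: the greater of the fixed amount and the turnover percentage."""
    fixed_cap, pct_of_turnover = PENALTY_TIERS[tier]
    return max(fixed_cap, pct_of_turnover * worldwide_annual_turnover_eur)

# Example: a company with €2 billion in worldwide annual turnover.
print(max_fine_eur("prohibited_practices", 2_000_000_000))    # 140,000,000 -> 7% exceeds €35M
print(max_fine_eur("misleading_information", 2_000_000_000))  # 20,000,000 -> 1% exceeds €7.5M
```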

These penalties aim to ensure that all entities involved in the development, deployment, and use of AI systems within the EU comply with the strict requirements and standards set out by the AI Act, thereby promoting safe and trustworthy AI.
Enforcement timeline: When does the AI Act go into effect?

Enforcement of the Act is phased. The Act entered into force on August 1st, 2024, and its “general applicability” date is August 2nd, 2026, although certain provisions apply as early as February 2025.
- February 2nd, 2025 (after six months): The ban on prohibited or “unacceptable risk” AI applications and the requirement that providers and deployers ensure the “AI literacy” of their staff go into effect.
- August 2nd, 2025 (after 12 months): The requirements for general-purpose AI models (which include generative AI models) become applicable.
- August 2nd, 2026 (after 24 months): The AI Act becomes generally applicable, including obligations for high-risk systems defined in Annex III. This deadline is significant because Annex III encompasses a wide range of high-risk systems, including, but not limited to:
- HR Technology: Systems such as those used to source, screen, rank, and interview job candidates.
- Financial Services: Systems used for credit scoring, anti-money laundering (AML), and fraud detection.
- Insurance Underwriting: Systems such as those used for calculating premiums, assessing risks, and determining coverage.
- Education and Vocational Training: Systems determining access to educational opportunities, personalized learning plans, and performance assessments.
- Many more are outlined in Annex III.
These examples illustrate just a few of the many high-risk AI systems affected by the Act. The breadth of these regulations underscores the critical need for stringent compliance measures to ensure ethical and safe usage across various sectors.