Navigating New Regulations for AI in the EU

March 24, 2025

The EU AI Act is a transformative regulatory framework for artificial intelligence that paves the way for responsible innovation. By adopting a risk-based approach, it categorizes AI systems to ensure compliance while protecting essential rights. This legislation fosters ethical and transparent AI deployment and establishes a powerful model for global governance in the AI landscape.

Introduction to the EU AI Act

Purpose and Scope of the EU AI Act

The EU AI Act, proposed by the European Commission, establishes a legal framework to regulate artificial intelligence systems within the European Union. With AI becoming integral to various sectors, the Act aims to address associated risks while maximizing its benefits. Central to the regulation is a risk-based approach, which categorizes AI systems based on their potential societal impact, ensuring tailored governance that balances innovation and safety.

The EU AI Act emphasizes a risk-based framework to protect citizens, promote trustworthy AI, and foster innovation within the European Union.

What Are The Key Elements of the EU AI Act?

Prohibited AI Practices

The EU AI Act delineates specific AI practices that pose unacceptable risks, leading to their prohibition. These prohibitions are essential to safeguard fundamental rights, ensure public safety, and maintain trust in AI technologies.

  • Manipulative or deceptive AI systems: Systems that distort behavior and impair informed decision-making, leading individuals to make decisions they would not otherwise have made and causing significant harm. Such manipulative capabilities threaten personal autonomy and can have far-reaching societal implications.
  • Exploitation of vulnerabilities: AI systems that exploit vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior, causing significant harm. This measure aims to protect vulnerable populations from AI-driven exploitation, ensuring that technological advancements do not exacerbate existing inequalities or lead to discriminatory outcomes.
  • Social scoring systems: AI systems that evaluate or classify individuals based on their social behavior or personal characteristics, leading to detrimental or unfavorable treatment that is unjustified or disproportionate. This prohibition reflects concerns about the potential for AI to infringe on individual freedoms and privacy. For instance, China’s social credit system has been criticized for its extensive surveillance and control over citizens, raising alarms about similar systems being implemented elsewhere.
  • Predictive policing based on profiling: The Act prohibits AI systems that predict criminal behavior solely through profiling or personality assessment, excluding human-assisted evaluations based on verifiable facts. This measure addresses the ethical and legal concerns associated with predictive policing, which can lead to biased outcomes and infringe on individuals’ rights.
  • Untargeted facial recognition scraping: The Act also prohibits AI systems that create or expand facial recognition databases through untargeted scraping of images from the internet or CCTV footage, addressing privacy concerns and the potential for mass surveillance without consent. For example, Clearview AI faced legal challenges in the EU for scraping billions of images from social media without users’ consent, leading to significant fines and orders to delete the data.

AI Risk Categories

Under the EU AI Act, AI systems are categorized into four risk levels, ensuring governance proportional to their societal impact:

Risk Classifications:

  1. Unacceptable Risk: Certain AI applications, like social scoring by governments or manipulative systems that exploit vulnerabilities, are prohibited outright due to their threat to fundamental rights and public safety. These were discussed in the previous section.
  2. High-Risk: These include applications used in critical sectors, such as healthcare, transportation, and law enforcement. They must comply with strict regulations, including human oversight and risk assessments.
  3. Limited Risk: AI systems with minimal societal risks, such as AI-powered chatbots, require transparency measures. For instance, users must be informed when interacting with non-human agents.
  4. Minimal Risk: Everyday AI applications, like spam filters or productivity tools, face no regulatory requirements but are encouraged to follow best practices.

The EU AI Act employs a tiered, risk-based approach to regulate artificial intelligence, applying stricter rules and obligations to systems with a higher potential to cause harm. By assessing the severity of risks to safety, rights, and societal impact, the Act ensures that obligations, such as transparency, oversight, and risk mitigation, are proportionate to the harm level, balancing innovation with ethical responsibility, as the sketch below illustrates.
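To make the tiered logic concrete, here is a minimal, hypothetical sketch in Python. The use-case names and the tier mapping are illustrative assumptions, not taken from the Act’s annexes; real classification requires legal analysis of a system’s intended purpose.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, human oversight, risk management"
    LIMITED = "transparency obligations (e.g., disclose non-human agent)"
    MINIMAL = "no mandatory requirements; best practices encouraged"

# Hypothetical mapping from example use cases to the Act's four tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "manipulative_targeting": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case; default to HIGH
    so unknown systems are reviewed rather than waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("spam_filter", "credit_scoring", "social_scoring"):
        tier = classify(case)
        print(f"{case}: {tier.name} -> {tier.value}")
```

Defaulting unknown use cases to the high-risk tier mirrors a conservative compliance posture: unclassified systems get reviewed rather than assumed harmless.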

High-Risk AI Systems

The EU AI Act places significant focus on high-risk AI systems, which operate in sensitive domains. Examples include personalized treatment planning in healthcare, biometric identification systems, critical infrastructure management, and financial algorithms assessing creditworthiness.

An AI system is considered high-risk if:

  • It is intended for use in critical areas, such as healthcare, law enforcement, infrastructure, education, or employment, where its operation significantly affects public safety or individuals’ rights.
  • Its use has a significant impact on health, safety, or fundamental rights, assessed based on its purpose and potential to harm or restrict rights in its operational context.
  • The system’s classification is based on its potential to influence sensitive or high-stakes decisions that directly affect individuals or society.

To mitigate potential harm, the EU AI Act imposes additional requirements for high-risk systems:

  • Undergo Conformity Assessments: Providers must demonstrate compliance with safety, transparency, and technical standards before deployment.
  • Implement Human Oversight: Mechanisms are required to ensure human operators can intervene or override decisions when necessary.
  • Conduct Risk Assessments: Regular evaluations identify vulnerabilities, enhancing trustworthiness and functionality.

In short, the EU AI Act regulates high-risk AI systems, defined as those operating in sensitive domains with significant impacts on health, safety, or fundamental rights, by requiring conformity assessments, human oversight, and regular risk evaluations to mitigate harm and ensure accountability. The sketch below shows how these screening criteria and obligations might be expressed in code.
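The following hypothetical screening function applies the criteria listed above and returns the obligations a flagged system would carry. The field names and the conservative “any criterion suffices” reading are assumptions for the sketch; the Act’s actual classification rules (Annex III and related provisions) require legal review.

```python
from dataclasses import dataclass

# Illustrative subset of the critical areas named above.
CRITICAL_AREAS = {"healthcare", "law_enforcement", "infrastructure",
                  "education", "employment"}

@dataclass
class AISystem:
    name: str
    domain: str                        # sector in which the system operates
    affects_rights_or_safety: bool     # significant impact on health, safety, or rights
    influences_high_stakes_decisions: bool

def is_high_risk(system: AISystem) -> bool:
    """Conservative reading: any one screening criterion flags the system."""
    return (system.domain in CRITICAL_AREAS
            or system.affects_rights_or_safety
            or system.influences_high_stakes_decisions)

def required_obligations(system: AISystem) -> list[str]:
    """Obligations the Act attaches to high-risk systems, per the list above."""
    if not is_high_risk(system):
        return []
    return ["conformity assessment before deployment",
            "human oversight mechanisms",
            "regular risk assessments"]

system = AISystem("credit_scorer", "finance",
                  affects_rights_or_safety=True,
                  influences_high_stakes_decisions=True)
print(is_high_risk(system), required_obligations(system))
```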

What Are The Compliance Requirements Under the EU AI Act?

Transparency Obligations and Documentation

Transparency is at the core of the EU AI Act, requiring clear disclosures for various AI applications:

  • General-Purpose AI Models: Systems like generative AI must disclose training data sources and methodologies, ensuring their outputs are traceable and explainable.
  • High-Risk AI Systems: Providers must create technical documentation detailing system design, intended use, and potential risks. This information allows regulatory authorities to assess compliance effectively.

Transparency obligations ensure accountability, fostering trust between stakeholders and end-users while making AI systems explainable to regulatory authorities and the public.
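One way to operationalize these disclosures is to keep them as a structured record that can be validated before submission. The sketch below is a hypothetical schema, not a format prescribed by the Act; all field names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Hypothetical record of the disclosures described above."""
    system_name: str
    intended_use: str
    system_design: str                 # architecture and development process
    training_data_sources: list[str]   # disclosure expected for general-purpose models
    known_risks: list[str] = field(default_factory=list)

    def missing_fields(self) -> list[str]:
        """Flag empty disclosures before submitting to an authority."""
        return [name for name, value in vars(self).items() if not value]

doc = TechnicalDocumentation(
    system_name="resume-screener",
    intended_use="rank job applications for human review",
    system_design="",  # not yet written
    training_data_sources=["licensed HR dataset (hypothetical)"],
)
print(doc.missing_fields())  # ['system_design', 'known_risks']
```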

Monitoring and Reporting

The EU AI Act mandates continuous monitoring of deployed systems to ensure ongoing compliance. This includes:

  • Post-Market Monitoring: Providers must track system performance, documenting any deviations or emerging risks. Monitoring should also cover interactions with other systems and the risks posed by AI that continues to learn after deployment (sensitive law enforcement data is excluded), so that risks can be mitigated promptly and efficiently.
  • Incident Reporting: Providers and deployers are required to report serious incidents, such as system malfunctions or breaches of ethical guidelines, to the relevant authorities promptly.

In the United States, the U.S. Food and Drug Administration has implemented methods for effective post-market monitoring of AI-enabled medical devices, detecting input changes and monitoring output performance. This initiative aims to enhance the reliability of AI systems in clinical settings, benefiting both clinicians and patients by providing insights into device accuracy and performance as conditions and patient populations evolve.

Enhanced monitoring practices contribute to greater compliance with regulatory standards and industry guidelines, promoting the responsible deployment of AI across various sectors. This proactive approach not only helps mitigate risks associated with AI but also encourages innovation that prioritizes ethical considerations, ultimately leading to more trustworthy and beneficial AI applications for all stakeholders involved.
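A minimal post-market monitoring loop might track a deployed model’s accuracy against its pre-deployment baseline and flag drift for incident review. The rolling-window size, tolerance, and baseline below are illustrative assumptions, not thresholds from the Act or the FDA.

```python
import statistics

class PostMarketMonitor:
    """Track a deployed model's accuracy and flag deviations
    that may warrant documentation or an incident report."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.window: list[float] = []

    def record(self, batch_accuracy: float) -> None:
        self.window.append(batch_accuracy)
        self.window = self.window[-30:]  # keep a rolling window of recent batches

    def deviation_detected(self) -> bool:
        if len(self.window) < 5:         # wait for enough data to judge
            return False
        return statistics.mean(self.window) < self.baseline - self.tolerance

monitor = PostMarketMonitor(baseline_accuracy=0.92)
for acc in (0.91, 0.90, 0.85, 0.84, 0.83):
    monitor.record(acc)
if monitor.deviation_detected():
    print("Performance drift detected: document and assess for incident reporting")
```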

Risk Management

AI Risk Management Steps:

  1. Risk Assessments: Identifying vulnerabilities in AI systems before deployment to ensure they meet safety and fairness standards.
  2. Human Oversight: Establishing mechanisms to monitor AI decision-making processes, allowing for human intervention in critical situations.
  3. Safety Protocols: Providers must implement fail-safe mechanisms to address system errors or unintended consequences.

The EU AI Act enforces comprehensive risk management protocols, including assessments, oversight, and safety measures, to mitigate potential harms. It is therefore important to implement a strategic approach to managing AI risk; a minimal oversight wrapper is sketched below.
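The hypothetical wrapper below illustrates steps 2 and 3: low-confidence or invalid model outputs are routed to a human operator rather than acted on automatically. The confidence threshold is an assumed value for the sketch; appropriate thresholds depend on the system’s risk profile.

```python
from typing import Callable

CONFIDENCE_FLOOR = 0.80  # hypothetical threshold below which a human decides

def decide_with_oversight(model_score: float,
                          automated_action: Callable[[], str],
                          human_review: Callable[[], str]) -> str:
    """Fail-safe wrapper: invalid or uncertain model outputs are
    escalated to a human operator instead of acting automatically."""
    if not 0.0 <= model_score <= 1.0:   # safety protocol: invalid output
        return human_review()
    if model_score < CONFIDENCE_FLOOR:  # human oversight: uncertain case
        return human_review()
    return automated_action()           # routine case proceeds

result = decide_with_oversight(
    model_score=0.65,
    automated_action=lambda: "approved automatically",
    human_review=lambda: "escalated to human reviewer",
)
print(result)  # escalated to human reviewer
```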

Roles and Responsibilities in the AI Lifecycle

Providers and Deployers

A provider refers to any individual or organization that develops an AI system or general-purpose AI model. These can include developers, organizations, or even public authorities that either create AI systems themselves or commission their development. Once these systems are launched under a provider’s name or trademark, whether as a paid service or free offering, the provider becomes accountable for ensuring compliance with the EU AI Act.

A deployer is defined as any individual or entity that operates or uses an AI system within its scope of authority. This definition excludes cases where the AI system is utilized for personal, non-professional purposes, ensuring the Act applies primarily to professional and institutional uses of AI technologies.

Providers and deployers hold key responsibilities under the EU AI Act:

  • Providers: Ensure compliance with conformity assessments and technical documentation requirements, and support post-market monitoring.
  • Deployers: Implement proper safeguards while operating AI systems and report any incidents or irregularities.

Other Stakeholders

The EU AI Act recognizes the roles of various stakeholders, including:

  • Distributors: Must ensure AI systems are labelled correctly and meet compliance requirements.
  • Importers: Verify the conformity of non-EU AI systems with EU regulations before bringing them into the market.
  • Market Surveillance Authorities: Monitor the deployment of AI systems, investigating violations and ensuring corrective actions.

Providers and deployers are jointly responsible for maintaining compliance and ensuring ethical AI usage. Notably, operating an AI system with the ability to modify the underlying model’s configuration, or controlling the prompts it receives, is a critical element in how the regulation assigns responsibility.

Practical Implications for AI Development and Deployment

Data Governance and Privacy

The EU AI Act reinforces the ethical handling of data, aligning closely with the General Data Protection Regulation (GDPR). Specific measures of the Act include:

  • Data Security: Requiring encryption and anonymization techniques to safeguard personal information.
  • Transparency in Data Use: Ensuring clear communication with users about how their data is utilized.

The Act fosters responsible AI practices, protecting both individual privacy and organizational integrity. Data governance under the EU AI Act aligns with GDPR, ensuring secure and ethical AI-driven data handling.
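As a small illustration of the data-security point, the sketch below pseudonymizes a personal identifier with a keyed hash before storage. Note the caveat in the comment: keyed hashing is pseudonymization, not full anonymization, so GDPR obligations still apply; the key handling shown is deliberately simplified.

```python
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # in practice, manage and store this key separately

def pseudonymize(identifier: str) -> str:
    """Replace a personal identifier with a keyed hash (HMAC-SHA256).
    Caveat: this is pseudonymization, not anonymization; anyone holding
    the key can still link records, so GDPR duties remain."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user": pseudonymize("jane.doe@example.com"), "score": 710}
print(record)
```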

Compliance Costs

The EU AI Act introduces comprehensive regulations that present notable compliance challenges and financial implications for start-ups and small and medium-sized enterprises (SMEs), particularly concerning technical documentation and monitoring requirements.

Compliance Challenges:

  1. Technical Documentation: The Act mandates that providers of high-risk AI systems prepare extensive technical documentation to demonstrate compliance with regulatory standards. This documentation must include detailed information on the system’s design, development processes, and risk management strategies. For start-ups and SMEs, compiling such comprehensive documentation can be resource-intensive, requiring specialized expertise and significant time investment.
  2. Post-Market Monitoring: The EU AI Act requires continuous monitoring of AI systems after they have been deployed to ensure ongoing compliance and to identify any emerging risks. Implementing effective post-market surveillance mechanisms can be particularly challenging for smaller enterprises due to limited resources and the need for specialized monitoring tools and processes, according to PwC Germany.

Financial Impact:

  1. Implementation Costs: Adhering to the EU AI Act’s requirements entails significant financial investments. Start-ups and SMEs may face substantial costs related to developing and maintaining the necessary technical documentation, establishing robust monitoring systems, and ensuring compliance with quality management standards. These expenses can be disproportionately burdensome for smaller companies with limited budgets.
  2. Operational Strain: The need to allocate resources towards compliance activities can divert attention from core business operations and innovation. For start-ups and SMEs, this diversion can hinder growth and competitiveness, as they may lack the capacity to manage both regulatory compliance and business development simultaneously.

While the EU AI Act aims to ensure the safe and ethical deployment of AI systems, the associated compliance requirements pose significant challenges and financial strains for start-ups and SMEs. Addressing these challenges necessitates strategic planning, potential investment in compliance infrastructure, and leveraging your own AI solutions to navigate the complex regulatory landscape more efficiently.

What Are The Penalties for Non-Compliance With The EU AI Act?

Role of the AI Office and Competent Authorities

The European Artificial Intelligence Office (AI Office) is a specialized entity established by the European Commission to oversee the implementation and enforcement of the EU AI Act. Its primary responsibilities include:

  • Conducting audits to verify compliance.
  • Investigating reported incidents and taking corrective actions.
  • Issuing fines and sanctions for violations.

The AI Office plays a central role in enforcing the EU AI Act: it supports consistent implementation across member states, enforces the rules through audits, investigations, and corrective actions, fosters the development of trustworthy AI systems, and facilitates international cooperation on AI governance.

Penalties and Sanctions

To deter unethical practices and promote a culture of accountability, non-compliance with the EU AI Act results in severe penalties, including:

  • Fines: Financial penalties of up to €35 million or 7% of annual global turnover, whichever is higher, for the most serious violations.
  • Operational Restrictions: Prohibitions on the deployment of non-compliant systems.
  • Market Exclusion: Permanent bans for providers repeatedly failing to meet regulatory standards.

According to Article 99 of the Act, the compliance rules are the same for start-ups and SMEs, but their fines are capped at the lower of the stated amount or percentage. Fines take into account the nature, gravity, and duration of the infringement and its consequences, the purpose of the AI system, and, where appropriate, the number of affected persons and the level of damage they suffered. The sketch below expresses this cap logic.
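The cap logic can be expressed in a few lines. This sketch assumes the Article 99 structure for the most serious infringements (the higher of a flat amount or a turnover percentage, and the lower of the two for SMEs); actual fines are set case by case by the authorities.

```python
def max_fine(worldwide_turnover_eur: float, is_sme: bool = False,
             flat_cap_eur: float = 35_000_000, pct_cap: float = 0.07) -> float:
    """Article 99-style cap for the most serious infringements:
    the higher of the flat amount or the turnover percentage;
    for SMEs and start-ups, the lower of the two."""
    turnover_cap = pct_cap * worldwide_turnover_eur
    return (min if is_sme else max)(flat_cap_eur, turnover_cap)

print(max_fine(2_000_000_000))            # large firm: 140,000,000.0
print(max_fine(10_000_000, is_sme=True))  # SME: 700,000.0
```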

Future Implications and Evolution of AI

Potential Revisions and Extensions

As AI technologies evolve, the EU AI Act is expected to adapt; this built-in flexibility for revision keeps the Act relevant and effective in a rapidly advancing field. Organizations like the Center for AI and Digital Policy (CAIDP) continuously offer feedback on the development and refinement of AI compliance by working with the European Commission, European Parliament, and European Council.

The European Union’s AI Act has extended its regulatory framework to encompass open-source AI models, recognizing their growing influence in the global AI landscape. This inclusion ensures that open-source models adhere to standards of safety, transparency, and accountability, aligning with the EU’s commitment to ethical AI development. The challenge with open-source deployments lies in monitoring their decentralized development. This regulatory approach is particularly pertinent given the release of advanced open-source models such as China’s DeepSeek-R1, which aim to compete with Western counterparts.

Other potential future implications include:

  • Deepfake Regulation: Expanding oversight to counter disinformation and misuse of synthetic media.
  • Remote Biometric Identification: Establishing stricter controls to prevent abuses in surveillance technologies.

AuditBoard AI is available to assist with navigating the complexities and changes of AI compliance. It offers tailored solutions to meet your regulatory needs, including customized content and policy language aligned with your organization’s risk posture, as well as intelligent recommendations that draw on your organization’s knowledge base to help bridge compliance between different regulations, saving you time and money.

The EU AI Act will evolve to address emerging technologies, such as open-source AI and deepfakes, ensuring comprehensive governance.

Influence on Global AI Governance

The EU AI Act has the potential to shape global AI governance. Its comprehensive framework serves as a model for other jurisdictions, encouraging:

  • Cross-Border Collaboration: Harmonizing international AI standards.
  • Ethical AI Practices: Promoting global accountability and trust in AI systems.

The EU AI Act sets a global benchmark for ethical AI governance, encouraging other nations to adopt similar regulations and highlighting the importance of cross-border cooperation to address the shared challenges of AI development, deployment, and oversight in an interconnected world.
