AI, IA, and R U OK? How Internal Auditing Can Help Organisations Get Ready for the EU AI Act
The European Union has a reputation for issuing landmark legislation on fast-moving and often difficult issues that other jurisdictions quickly emulate. Think data protection (GDPR), corporate sustainability reporting (CSRD), and crypto markets (MiCA). And now, once again, the EU is showing its global leadership by introducing the Artificial Intelligence Act.
In the UK, the government’s AI white paper expresses a desire for an agile and innovation-friendly response to AI, but Parliament is sure to be looking at the EU legislation very carefully. Regardless of what happens at home, organisations wishing to place their AI systems on the EU market will be expected to comply with the AI Act. Recently, the EY organization collaborated with AuditBoard for a panel discussion on this topic.
According to Hasan Ali at EY, a balanced approach to AI is critical. “Risk professionals whose roles revolve around risk assessment and risk management are uniquely equipped to partner with technical experts to demystify and harness the potential of AI within our organisation.”
To better understand the impact of AI on the work of risk management and assurance (and vice versa), EY prioritises four key questions:
- What sets artificial intelligence apart from other emerging risks?
- What’s driving legislation?
- How will the EU AI Act impact my organisation?
- What can risk and assurance professionals do to help?
What sets artificial intelligence apart from other emerging risks?
According to a recent EY study, 65% of CEOs see AI as a force for good and 45% are planning significant investments in it. While nearly two-thirds anticipate a negative impact on jobs, there is a widespread view that this will be counterbalanced by the creation of new roles.
What is distinctive about AI is its ability to adapt coupled with its autonomy to make decisions. While this can bring many benefits, it also raises the potential for AI to get it wrong. Legal cases have found organisations liable for the views expressed, advice given, and decisions made or assisted by AI – their customer service chatbots and candidate selection processes, for example. And when we attach AI to machinery and vehicles, the potential for physical danger becomes very real.
What’s driving legislation?
AI risks can arise from its design and performance: ‘hallucinations’, where AI produces false or misleading outputs, and systematically biased decisions emerging from seemingly neutral algorithms. AI can also amplify cybersecurity, privacy, and third-party risks and introduce new and unexpected exposures with significant ethical, operational, financial, and reputational consequences. Most experts readily acknowledge the need for some form of control.
There are also fundamental issues at stake for public security, data protection, fair treatment, and access. AI has all the classic characteristics of an emerging risk — novelty, volatility, velocity, uncertainty, and the potential for major disruption. Some of the gravest dystopian projections for a world with AI sound like science fiction, but at this stage, we simply don’t know how it will evolve and what the long-term repercussions will be.
What’s in the EU AI Act?
The EU legislation is aimed at ensuring better transparency and accountability in the form of responsible AI. Its purpose is “to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI” while boosting innovation. AI applications with the highest level of risk — such as ‘real-time’ remote biometric identification, social scoring, and predictive policing — are forbidden, and transgressors may face heavy fines of up to 7% of global turnover. Other high-risk applications — like those used in law enforcement, critical infrastructure, vocational training, or evaluating people’s access to essential services (e.g. creditworthiness or access to public healthcare) — require strict governance to ensure effective oversight, control, documentation, and transparency. The obligations take effect in stages:
- February 2025: prohibitions on the highest-risk applications.
- August 2025: responsibilities around general-purpose AI models.
- August 2026: obligations for most high-risk AI systems.
- August 2027: high-risk systems that form a safety component of a product covered by separate EU legislation (e.g. aircraft, medical devices).
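To make the tiered timeline concrete, here is a minimal illustrative sketch in Python. The tier names and examples are simplifications of the summary above, not legal categories to rely on; classifying a real system under the Act requires legal analysis.

```python
from enum import Enum

# Illustrative only: tiers and dates condensed from the timeline above.
class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g. social scoring, predictive policing
    HIGH = "high-risk"          # e.g. creditworthiness evaluation
    GPAI = "general-purpose"    # general-purpose AI models
    MINIMAL = "minimal"         # no new obligations under the Act

APPLIES_FROM = {
    RiskTier.PROHIBITED: "February 2025",
    RiskTier.GPAI: "August 2025",
    RiskTier.HIGH: "August 2026",  # August 2027 for safety components of separately regulated products
}

def applies_from(tier: RiskTier) -> str | None:
    """Return when obligations for a tier start to apply, per the staggered timeline."""
    return APPLIES_FROM.get(tier)

print(applies_from(RiskTier.HIGH))  # -> August 2026
```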
How can we help?
The purpose of the legislation is the protection of the public. The role of risk management and internal auditing is to help create and protect organisational value through insight and advice. We are not here to stop the use of AI but to enable its responsible and successful application.
EY emphasises that there are plenty of important contributions internal auditors can make in supporting management and the board:
- Start by agreeing on a definition of AI. Until we know what AI is and can recognise it, we cannot manage it. The AI Act defines it like this:
A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
- Coordinate or lead a mapping of AI across the organisation, including use by third-party vendors. It is quite possible AI has already been adopted without everyone realising it (a minimal inventory sketch follows this list).
- Advise senior management on appropriate roles, responsibilities, structures, and resources.
- Support the development of organisational policies, procedures, standards, and guidelines.
- Review compliance with current and forthcoming requirements and carry out a gap analysis.
- Conduct testing and analysis.
- Provide assurance on inputs (data), processes, outputs, and outcomes.
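As a starting point for the mapping exercise above, here is a minimal sketch of what an AI inventory record might look like. The field names and the example entry are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for mapping AI use across the organisation;
# field names are illustrative, not a prescribed standard.
@dataclass
class AISystemRecord:
    name: str                        # e.g. "candidate screening model"
    owner: str                       # accountable business owner
    purpose: str                     # what the system predicts, recommends, or decides
    vendor: str | None = None        # third-party supplier, if any
    risk_tier: str = "unclassified"  # to be assessed against the Act's tiers
    controls: list[str] = field(default_factory=list)  # e.g. human review, logging

inventory = [
    AISystemRecord(
        name="Customer service chatbot",
        owner="Head of Customer Operations",
        purpose="Answers customer queries and offers account guidance",
        vendor="ExampleVendor Ltd",  # hypothetical vendor name
    ),
]

# A first audit pass: flag anything not yet risk-classified.
print([r.name for r in inventory if r.risk_tier == "unclassified"])
```

Even a simple register like this gives assurance work a concrete population to sample from.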
Our top tips are:
- Take the initiative – don’t wait to be asked.
- Be curious and ask questions.
- Advocate for an organisation-wide AI strategy with continuous monitoring and lifecycle controls.
- Ensure that risk management and internal auditing evolve with the increasing use of AI.
- Learn what you can but bring in experts as required.
Science fiction writer Isaac Asimov wrote extensively about an imagined world of intelligent machines regulated by principles hard-wired into their programming – the so-called laws of robotics – the most fundamental of which was this: “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
Until we can be certain of the risks arising from AI, we must play our part by fostering good governance and nurturing natural intelligence among decision-makers.
The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.
Hasan is the Founder and Leader of the EY Risk Innovation Hub, a national initiative that helps EY clients leverage new and existing technology to deliver efficiency, new insights, and true digital transformation. He has been instrumental in developing several of the organization’s technology assets and embedding them across a range of FTSE 250 and global clients.