
October 1, 2025 • 6 min read
How AI provides essential infrastructure for auditors

David Hill
Imagine building a skyscraper without foundations—possible for a few floors, but dangerously unstable as you go higher. That is the reality for audit functions approaching artificial intelligence (AI) as a side project or a trend to watch. “AI is no longer a novelty; used well, it is an infrastructure layer for visibility, speed, and foresight. Used poorly, or ignored altogether, it introduces blind spots, bias, data leakage, and misplaced confidence” (Ernst & Young).
Internal audit should treat AI as an essential capability that strengthens risk identification, sharpens control design, and elevates assurance, while putting robust governance around the specific risks AI creates.
Strategic risk framed by noise and hype
Two false choices slow progress: ‘AI will replace auditors’ versus ‘AI is overhyped.’ Both distract from the strategic question: how will we maintain visibility and relevance in a world where risks emerge faster than manual methods can detect? Treating AI as optional tooling invites drift, reporting lags, reactive assurance, and diluted influence with those charged with governance.
Capabilities we cannot afford to ignore
Artificial intelligence is reshaping the internal audit landscape by enabling a more proactive, connected, and dynamic approach to assurance. Through continuous scanning of transactions, logs, and communications, AI provides early warning signals by surfacing anomalies and trend shifts far earlier than traditional periodic testing. Its predictive capabilities go beyond retrospective analysis, offering forward-looking insights into potential fraud exposure, compliance slippage, and operational strain, empowering management to intervene before risks materialise.
By linking data across processes, business units, and third parties, AI reveals systemic control gaps and cross-cutting themes that siloed reviews often overlook, creating a more connected risk picture. Dynamic controls powered by AI can adjust thresholds in real time, automatically flag exceptions, and deliver near-real-time dashboards to executives and audit committees, enhancing transparency and responsiveness.
Moreover, when monitored indicators spike, such as in procurement or access management, AI enables strategic recalibration by updating risk scores and audit plans swiftly, ensuring timely intervention and smarter allocation of audit resources.
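To make the monitoring-and-recalibration loop above concrete, here is a minimal sketch in Python. The function names, the rolling z-score test, the 0–100 risk scale, and the weighting are illustrative assumptions, not any standard or vendor method:

```python
from statistics import mean, stdev

def flag_anomalies(values, window=30, z_threshold=3.0):
    """Flag points whose z-score against a trailing window exceeds the threshold."""
    flags = []
    for i, v in enumerate(values):
        history = values[max(0, i - window):i]
        if len(history) < 5:  # not enough history to judge yet
            flags.append(False)
            continue
        mu, sigma = mean(history), stdev(history)
        z = (v - mu) / sigma if sigma > 0 else 0.0
        flags.append(abs(z) > z_threshold)
    return flags

def recalibrate_risk_score(base_score, anomaly_rate, weight=50):
    """Raise a 0-100 risk score in proportion to the observed anomaly rate (assumed scale)."""
    return min(100, base_score + weight * anomaly_rate)

# Steady procurement spend with one spike: the spike is flagged,
# and the process risk score rises accordingly.
spend = [100, 102, 98, 101, 99, 103, 97, 100, 500, 101]
flags = flag_anomalies(spend)
rate = sum(flags) / len(flags)
score = recalibrate_risk_score(60, rate)
```

In practice the inputs would be transaction feeds or access logs and the score update would feed the audit plan, but the shape of the loop—detect, measure the exception rate, adjust the risk rating—is the same.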
AI-specific risks—and internal audit’s role
AI also introduces material risks:
- bias and fairness concerns,
- explainability gaps,
- model drift,
- data privacy and security exposures,
- over-reliance on outputs without human judgement.
Internal audit’s role is to help establish and evaluate governance that keeps AI ethical, secure, and accountable, covering policy, design controls, testing, monitoring, documentation, and incident response.
This aligns with modern standards' expectations that the Chief Audit Executive ensures the function is appropriately resourced and enabled by technology and that technology limitations and risks are transparently communicated to those charged with governance.
Guarding credibility and influence
Credibility follows capability. Teams that cannot provide timely, data-led insight risk being seen as backward-looking. Conversely, audit functions that adopt AI with clear guardrails enhance their position as trusted advisers, able to quantify exposure, spot patterns early, and recommend targeted action.
AI is now part of the assurance infrastructure. Without it, audit risks mistaking noise for signal; with it, we gain foresight without losing judgement.
Finding a way forward
Approach AI adoption as a risk-based programme, not a single tool purchase. Start where risk and data overlap, prove value quickly, and expand with control. Practical enablers include:
- A charter and policy that define acceptable use, roles, and human-in-the-loop requirements.
- A model inventory with ownership, purpose, data lineage, and change controls.
- Design-time controls: privacy-by-design, permissioning, segregation of duties, and prompt/input safeguards.
- Testing and monitoring: performance, drift, bias, and robustness checks with thresholds and escalation paths.
- Documentation and traceability for key decisions, training data, and material outputs.
- Metrics and indicators (KPIs/KRIs) that track value, timeliness, exceptions, and residual risk.
- Targeted pilots in high-value areas (e.g., procurement analytics, access management, continuous control testing) with clear exit criteria.
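The monitoring and KPI/KRI enablers above can be encoded as simple threshold tables with escalation paths. The metric names, limits, and owners in this sketch are purely illustrative assumptions:

```python
# Hypothetical KRI definitions: names, thresholds, and owners are
# illustrative only, not drawn from any specific framework.
KRI_THRESHOLDS = {
    "model_drift_psi":    {"warn": 0.10, "escalate": 0.25, "owner": "Model Risk"},
    "exception_rate":     {"warn": 0.02, "escalate": 0.05, "owner": "Audit Ops"},
    "unreviewed_outputs": {"warn": 0.05, "escalate": 0.15, "owner": "CAE"},
}

def evaluate_kris(observed):
    """Return (kri, status, owner) for each observed metric breaching a threshold."""
    alerts = []
    for name, value in observed.items():
        limits = KRI_THRESHOLDS.get(name)
        if limits is None:
            continue  # unknown metric: no defined threshold, so no alert
        if value >= limits["escalate"]:
            alerts.append((name, "ESCALATE", limits["owner"]))
        elif value >= limits["warn"]:
            alerts.append((name, "WARN", limits["owner"]))
    return alerts

alerts = evaluate_kris({"model_drift_psi": 0.30,
                        "exception_rate": 0.03,
                        "unreviewed_outputs": 0.01})
```

The point of the table is governance, not code: every AI metric has an owner, a warning level, and an escalation level agreed in advance, so a spike triggers a defined response rather than an ad hoc debate.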
Call to action: By the end of the decade, the most effective audit functions will combine human expertise with AI-enabled analysis and real-time monitoring. Workflows will be leaner, insight more predictive, and reporting more relevant, without compromising independence or quality.
Treat AI as essential audit infrastructure. Protect the organisation from both the risks AI can create and the risks you will miss without it. Start small, start safely—but start now.
About the author

David Hill is the former CEO of SWAP Internal Audit Services based in the UK. David has nearly 40 years of audit experience, and is a former member of the Global Guidance Committee. Connect with David on LinkedIn.
