
August 7, 2025 • 9 min read
New report on AI governance: A ‘call to action’ for internal auditors

Richard Chambers
As someone who has observed the evolution of internal audit over several decades, I can say with certainty: few emerging risks have captured the profession’s attention like artificial intelligence (AI). But awareness alone is not enough. If we are to remain relevant and impactful in the years ahead, we must move swiftly from watching the rise of AI to actively providing assurance on its governance.
A new research report commissioned by AuditBoard, “From blueprint to reality: Execute effective AI governance in a volatile landscape,” offers both a wake-up call and a roadmap. The findings should prompt every internal auditor to ask: Am I doing enough to provide assurance on one of the most consequential risks of our time?
AI governance is no longer optional
According to the report, 82% of organizations have deployed AI tools across key functions. From predictive models to generative systems, AI is transforming business operations at a breathtaking pace. At the same time, 86% of respondents said their organizations are aware of AI regulations and frameworks—ranging from the EU AI Act to the voluntary U.S. NIST AI Risk Management Framework.
But here’s the disconnect: while most organizations know governance is necessary, very few are executing on it. Only 25% of respondents said their AI governance programs are fully implemented. That means three out of four organizations are flying into an AI-driven future without a clear flight plan—or a co-pilot.
As internal auditors, we must recognize this for what it is: an urgent opportunity to serve our organizations.
The policy-practice gap: Where governance falls apart
The AuditBoard report spotlights a troubling trend I’ve seen far too many times: the gap between drafting policy and executing it. Many organizations are drafting responsible AI use statements and risk principles. That’s commendable—but insufficient.
Policies alone don’t manage risks. Embedding those policies into business workflows, decision-making routines, and approval processes is where true governance lives. Internal audit must assess not only whether an AI governance policy exists, but whether it's been translated into action.
It’s not enough to ask, "Do we have a policy?" The better question is, "Can we demonstrate that it works?"
Why internal auditors must step up
AI governance is more than a compliance exercise. It's a trust imperative. As internal auditors, we are uniquely positioned to evaluate whether the organization’s AI use aligns with its values, objectives, and obligations. Our independence and core competencies—risk assessment, controls evaluation, critical thinking, and professional skepticism—are exactly what’s needed to navigate this complex terrain.
We must begin asking critical questions about our organizations:
- Is there a current inventory of AI systems in use?
- Who owns governance over AI use across departments?
- Are third-party models evaluated and documented?
- Is "shadow AI" (unauthorized employee use of AI tools) monitored and managed?
- Do governance controls account for bias, transparency, and explainability?
In the past, we’ve followed the financial risks, the operational risks, and the cyber risks. Now, we must follow the algorithmic risks.
Governance is a cultural—and structural—challenge
One of the most important insights from the AuditBoard report is that the barriers to effective AI governance are not technological. They are cultural and structural.
Respondents cited three primary obstacles:
- A lack of clear ownership
- Insufficient internal expertise
- Limited resources
Only 15% cited lack of tools as the primary issue. This echoes a lesson I’ve seen play out time and again: tools don’t solve problems—people and processes do.
Internal auditors must evaluate how AI governance responsibilities are distributed and whether accountability mechanisms exist. Too often, we see what the report calls “distributed responsibility without distributed accountability.” That’s not governance—it’s organizational risk in disguise.
False confidence and the illusion of visibility
Here’s another insight from the research that should alarm us: while 90% of organizations say they have visibility into their use of AI—including shadow and third-party systems—only 67% conduct formal risk assessments.
That’s a classic case of overconfidence masking exposure.
As internal auditors, we know that assumptions are not evidence. Confidence without controls is a liability. We must challenge the narrative that “we have this under control” by asking for documentation, testing controls, and verifying oversight. Because when it comes to AI, what we don’t know can absolutely hurt us—and our organization.
Recommendations for internal auditors
The AuditBoard report closes with five strategic actions to help organizations move from policy to practice. These same recommendations offer a blueprint for internal auditors seeking to add value in this evolving space:
- Translate Policy Into Practice: Provide assurance on whether AI governance policies are embedded in actual business operations—not just sitting in a policy portal.
- Build Cross-Functional Governance: Assess the effectiveness of AI governance councils or committees. Are they inclusive of risk, legal, compliance, and technical leaders?
- Automate with Purpose: Don’t be impressed by shiny dashboards alone. Confirm that automation efforts rest on solid foundational controls.
- Evaluate Communication and Training: Is there consistent messaging about responsible AI use across the enterprise? Do employees understand the risks?
- Promote Agility and Responsiveness: Governance must evolve as AI evolves. Encourage real-time updates, continuous monitoring, and adaptive controls.
As internal auditors, we should be auditing these very elements—not just during annual reviews, but in real time.
Internal audit: From observer to enabler
Let’s be clear: as internal auditors, we don’t need to be data scientists. But we must understand where AI is used, what risks it introduces, and how it’s governed.
Our value lies not in technical expertise, but in asking the right questions, providing independent assurance, and promoting a culture of accountability. We don’t need to build the models—we need to make sure they’re built, deployed, and monitored responsibly.
The organizations that will thrive in the AI era are not the ones who adopt the fastest. They’re the ones who govern the best. Internal audit has a pivotal role to play in that governance.
The time to lead is now
AuditBoard’s research should serve as a clarion call for the internal audit profession. AI is already embedded in core business functions. Governance is lagging. The risks are growing.
This is not a time for internal auditors to stand on the sidelines. It’s a time to engage boldly, to lean into our advisory role, and to help organizations embed governance into the DNA of their AI strategies.
The most critical assurance we may be asked to provide in the future will be on the effectiveness of AI governance. That future is closer than we think—and the imperative to act is now.
About the author

Richard Chambers, CIA, CRMA, CFE, CGAP, is the CEO of Richard F. Chambers & Associates, a global advisory firm for internal audit professionals, and also serves as Senior Advisor, Risk and Audit at AuditBoard. Previously, he served for over a decade as the president and CEO of The Institute of Internal Auditors (IIA). Connect with Richard on LinkedIn.
