
February 23, 2026 • 8 min read
Boards are struggling with AI oversight. How internal auditors can help

Richard Chambers
Artificial intelligence now sits at the center of strategy. It is starting to shape products, drive decisions, and automate judgment. Boards know this. Yet many admit they are not ready to oversee the risks that come with it.
Recent research confirms the gap. Boards understand that AI matters, but they lack confidence in their knowledge and ability to provide oversight. That gap creates opportunities for internal audit.
Boards know the stakes. They lack clarity.
McKinsey’s recent analysis on board governance of AI paints a sobering picture. Many directors acknowledge AI’s strategic importance. Few believe their boards have the expertise, structures, or information needed to govern it well. Oversight often falls between committees. Responsibility remains unclear. Risk discussions lag adoption.
Deloitte, writing in The Wall Street Journal Risk and Compliance Journal, reached similar conclusions. Their research shows boards struggle to balance innovation with risk. Directors worry about moving too slowly. They worry even more about moving blindly. AI risk discussions tend to focus on high-level principles, not operational reality.
PwC’s Governance Insights Center adds yet another dimension. Boards lack reliable reporting on AI use across the enterprise. Many do not know where AI is deployed, what data it uses, or how models are monitored over time. Without visibility, oversight becomes symbolic.
The message across these reports is consistent about what board members want.
- They want to oversee AI responsibly.
- They lack actionable insight.
- They need trusted internal perspectives.
That is where internal audit comes in.
Audit committees are looking for help.
Audit committees already carry responsibility for risk oversight, controls, compliance, and assurance. AI touches all four.
But AI introduces new risks that warrant board oversight.
- Model bias and ethical failures.
- Data privacy and security breakdowns.
- Regulatory exposure.
- Overreliance on automated decisions.
- Weak governance over third-party AI tools.
AI also introduces opportunities that boards need to recognize.
- Faster insights.
- Improved controls.
- Continuous monitoring.
- Better fraud detection.
Boards cannot realize the opportunities without managing the risks. Most audit committee members know this. They often turn to internal audit as their first line of independent insight.
As internal auditors, we are well-positioned. We understand processes. We understand controls. We understand risks. We also understand how technology changes behavior.
That combination matters.
5 ways internal auditors can support board oversight of AI
As internal auditors, we should not wait for boards to ask for our insights. This is a moment to lead.
Here are five practical ways I believe internal audit can help boards strengthen AI oversight.
1. Map where AI is used across the organization.
Many boards lack a basic inventory of AI use. We can fix that.
Start with a simple question. Where does the organization use AI today?
- Core systems.
- Finance and accounting tools.
- HR screening and evaluation.
- Customer analytics.
- Fraud detection.
- Third-party platforms.
Document the answers. Identify owners. Clarify purpose. Note data sources.
This inventory will give boards visibility. It will create a baseline for governance. It will also reveal shadow AI that was developed without formal oversight.
Boards cannot govern what they cannot see.
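To make step one concrete, the inventory can start as one structured record per use case. Here is a minimal sketch in Python; the field names and the shadow-AI rule (no owner or no formal approval) are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    # One record per AI deployment; fields mirror the questions above:
    # owner, purpose, data sources, third-party status, formal approval.
    name: str
    owner: str                      # accountable business owner ("" if unknown)
    purpose: str
    data_sources: list = field(default_factory=list)
    third_party: bool = False
    formally_approved: bool = False

def shadow_ai(inventory):
    """Flag use cases lacking an owner or formal approval."""
    return [u.name for u in inventory
            if not u.owner or not u.formally_approved]

inventory = [
    AIUseCase("Invoice coding model", "Controller", "GL classification",
              ["ERP invoices"], formally_approved=True),
    AIUseCase("Resume screener", "", "HR shortlisting",
              ["Applicant data"], third_party=True),
]
print(shadow_ai(inventory))  # → ['Resume screener']
```

Even a spreadsheet with these columns gives the board its baseline; the point is a single enterprise-wide list with named owners.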
2. Assess AI governance design and maturity.
Boards often approve AI principles. Few know whether those principles work in practice.
Internal audit can assess governance design.
- Who owns AI risk.
- How AI decisions are approved.
- How ethics and bias are addressed.
- How accountability is enforced.
- How third-party AI is governed.
Compare current practices to leading frameworks referenced by McKinsey, Deloitte, and PwC. Highlight gaps. Highlight strengths.
This assessment shifts the conversation from aspiration to execution.
Boards will value that shift.
3. Evaluate controls over data, models, and outputs.
AI risk does not start with algorithms. It starts with data.
Internal audit should evaluate controls across the AI lifecycle.
- Data quality and integrity.
- Access controls over training data.
- Model development standards.
- Change management.
- Output validation.
- Monitoring for drift and bias.
This work fits squarely within our traditional skill sets. It also addresses the concerns directors raise most often. They want assurance that AI outputs are reliable, explainable, and monitored.
This is familiar territory for internal auditors. The tools may be new. The discipline is not.
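One control from the list above, monitoring for drift, can be made concrete with a standard statistic such as the Population Stability Index (PSI), which compares the distribution of model scores in production against the distribution at approval. Below is a minimal sketch assuming equal-width bins over the baseline range; the thresholds in the docstring are common rules of thumb, not prescriptions:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample of model
    scores and a current production sample.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant shift worth investigating."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = hi  # guard against float rounding at the top edge

    def share(sample, i):
        left, right = edges[i], edges[i + 1]
        last = i == bins - 1
        n = sum(1 for x in sample
                if left <= x < right or (last and x == right))
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum((share(actual, i) - share(expected, i))
               * math.log(share(actual, i) / share(expected, i))
               for i in range(bins))

baseline = [i / 100 for i in range(100)]             # scores at model approval
shifted = [min(0.99, s + 0.30) for s in baseline]    # production scores drifting upward
print(round(psi(baseline, shifted), 2))
```

In this sketch, a large PSI on production scores is the trigger for a targeted review; identical distributions score near zero.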
4. Translate AI risk into business and regulatory terms.
Boards struggle with technical language. We can bridge the gap.
Translate AI risk into terms directors understand.
- Financial misstatement risk.
- Compliance exposure.
- Reputational impact.
- Operational disruption.
- Strategic failure.
Tie AI risks to enterprise risk statements. Align them with regulatory expectations that continue to evolve globally.
When boards see AI risk in business terms, oversight improves. Decision-making improves. Confidence follows.
5. Provide continuous insight, not one-time assurance.
AI evolves quickly. Annual reviews are not enough.
As I have been arguing for years, we must rethink our approach to incorporate:
- Continuous risk monitoring.
- Periodic governance check-ins.
- Targeted reviews of high-risk use cases.
- Ongoing reporting to audit committees.
This aligns with what boards want. It also aligns with how AI behaves.
Static oversight fails in dynamic environments.
This is a defining moment. Is internal audit poised to fill the AI oversight gap?
Boards are not failing at AI oversight due to lack of interest. They are struggling due to lack of clarity, information, and practical guidance.
We can fill that gap by bringing independence, an enterprise-wide perspective, and our hard-earned credibility with audit committees.
We must not wait to be asked. We should raise the issues and offer a roadmap that begins with visibility and builds toward assurance. We must remain engaged as AI evolves.
Boards need help. Audit committees are listening. This is our moment to step forward and lead.
AI will reshape organizations. It will also reshape expectations of internal audit.
Be the resource your board needs.
About the author

Richard Chambers, CIA, CRMA, CFE, CGAP, is the CEO of Richard F. Chambers & Associates, a global advisory firm for internal audit professionals, and also serves as Senior Advisor, Risk and Audit at AuditBoard. Previously, he served for over a decade as the president and CEO of The Institute of Internal Auditors (IIA). Connect with Richard on LinkedIn.