Closing Internal Audit’s AI Gap Requires Facing Our Challenges Head-On
February 26, 2025

When strategic risks threaten our organizations and profession, internal auditors focus hard on the road ahead. We don’t just stop and shrug — we get to work. So why are most internal auditors still failing to take bold, decisive action to use AI?
The evidence is jarring. A mere 4% of CAEs in AuditBoard’s 2025 Focus on the Future survey report substantial progress implementing AI in any area of internal audit, and only 18% of The Institute of Internal Auditors’ (IIA’s) Vision 2035 survey respondents are using AI within internal audit. At the same time, internal auditors increasingly see failure to use AI as a key strategic risk. The inability to leverage AI to drive greater internal audit efficiency and productivity ranked #1 in my 2024 year-end survey on the strategic risks facing internal audit.
If AI poses an existential threat to the profession, it is likely only to internal auditors who don’t embrace it, so failure is not an option. Adoption must be a priority despite the acknowledged factors slowing it down.

Fortunately, the most common challenges are already clear, and surmounting them primarily requires committing time — not money. As the graphic shows, most 2025 Focus on the Future respondents point to lack of understanding or expertise, data privacy and security concerns, and limited access to quality data. I spoke with Anton Dam, AuditBoard’s VP of Engineering for Data, AI, and ML, to collect actionable guidance on what to do and where to begin in each area.
1. Lack of AI Understanding and Expertise
How can we get smarter about understanding and leveraging AI’s capabilities in internal audit? The universal first step is gauging where your overall organization is on its AI journey. In organizations with broad restrictions on the use of AI, you’ll have to begin by making your case and getting stakeholders on board. In organizations already investing in AI, the imperative is learning what’s possible and identifying use cases. Either way, before diving in, get clear on your organization’s policies around AI (restrictive or unclear policies are another common hurdle) and play by the rules. You can still achieve a great deal working within the guardrails.
Start by experimenting with AI tools already available to you, learning what generative AI can and can’t do well. For example, starting with non-work topics, try ChatGPT or Gemini for research, brainstorming, or writing, or Leonardo AI or Canva’s Magic Media for creating images. For work-related topics, enterprise AI capabilities (e.g., Microsoft Azure or 365, AWS, Google Cloud, Miro) are a good place to start. Enterprise AI is maturing fast, offering writing assistance, data analysis, task automation, and other capabilities. You can also take AI training courses (e.g., The IIA, ISACA).
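To make that first experiment concrete, here is a minimal sketch of calling a generative AI API from Python. The openai package, model name, and prompt are illustrative assumptions; any comparable enterprise LLM service follows the same request/response pattern.

```python
# Minimal sketch: brainstorming with a generative AI API.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY environment
# variable; the model name below is an illustrative choice, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise brainstorming partner."},
        {"role": "user", "content": "Suggest five themes for a community volunteering event."},
    ],
)
print(response.choices[0].message.content)
```

A few minutes of this kind of hands-on experimentation teaches more about what generative AI does well (and poorly) than hours of reading about it.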
With your leveled-up AI fluency, you’re ready to explore internal audit implementation. Invest time understanding (1) where/how your team members are spending their time and (2) what problems they’re experiencing. The goal is identifying areas where generative AI can help streamline workflows, expedite problem-solving, and unlock bandwidth — all business cases you can use to overcome concerns about costs vs. benefits, another typical challenge. For example, in teams with frequent turnover, onboarding often absorbs inordinate resources. That’s a great use case for investing in AI tools that help new team members leap the learning curve, such as workflows providing access to policies or help writing risk and issue descriptions. Seek out AI implementation leading practices other internal audit functions have used. Wherever you focus, provide clear guidance and guardrails.
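As a toy illustration of that onboarding use case, here is a hypothetical helper that surfaces relevant policy passages by keyword. The policy topics and text are invented; in practice, a lookup like this would sit behind your organization’s approved AI assistant.

```python
# Hypothetical onboarding helper: let a new team member pull relevant policy
# passages by keyword. Topics and policy text are invented for illustration.
POLICIES = {
    "expense reports": "Receipts are required for all expenses over $25...",
    "issue writing": "Issue descriptions must state condition, criteria, cause, and effect...",
    "independence": "Auditors must declare potential conflicts of interest annually...",
}

def find_policy(question: str) -> list[str]:
    """Return policy passages whose topic keywords appear in the question."""
    q = question.lower()
    return [text for topic, text in POLICIES.items()
            if any(word in q for word in topic.split())]

for passage in find_policy("How do I format an issue description?"):
    print(passage)  # -> the "issue writing" guidance
```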
Most AI implementations happen in two phases: efficiencies (e.g., making existing workflows faster) and new capabilities (e.g., powerful data analysis). Which comes first depends on your organization’s needs, goals, and priorities. Both are capacity multipliers enabling internal audit to drive more value from limited resources.

2. Data Privacy and Security Concerns
Internal auditors are right to be concerned about data privacy and security when using AI. Without adequate safeguards, both can be compromised. That’s where understanding AI on a deeper technical level matters. Fortunately, CAEs already learned many of the most important lessons during the transition from on-premises systems to cloud/SaaS. For example, when using AI tools, it’s critical to understand which data is and isn’t secure, encryption protocols, access controls (e.g., multi-factor authentication), and whether data residency and transfer practices align with your organization’s legal and regulatory obligations.
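One concrete safeguard worth understanding: scrubbing obvious identifiers before text ever leaves your environment. The sketch below is deliberately simplistic; the patterns are far from exhaustive, and real controls belong in your organization’s data governance layer.

```python
# Illustrative sketch: redact obvious sensitive tokens before sending text to
# an external AI tool. These patterns are examples only, not a complete control.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # card-like digit runs
]

def scrub(text: str) -> str:
    """Replace obvious sensitive tokens before the text crosses your boundary."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> "Contact [EMAIL], SSN [SSN]."
```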
Start by familiarizing yourself with the data security/privacy considerations of your existing AI tools, understanding that free and enterprise versions may observe different policies. For example, AuditBoard AI siloes all customer data, and any interactions with customer data follow enterprise-grade data governance guidelines. AuditBoard doesn’t use customer data to train its models, and AI workflows are clearly marked to enable human-in-the-loop review and validation.
When adopting new AI tools, use trusted vendors that provide privacy and security guarantees. That AI startup may promise big results, but ask yourself: Would you trust them with your most sensitive data if they weren’t dangling that shiny AI? Ask all the right questions. Who are their vendors? What data do they use, and how? Do they commingle customer data? Standardized security questionnaires and frameworks (e.g., NIST AI Risk Management Framework, NIST Cybersecurity Framework) can help ensure a comprehensive approach.
Lastly, reinforce the necessity of human judgment with clear, thorough guidance regarding human validation — vital for quality assurance and overcoming resistance to AI, another common hurdle. At what junctures do you want or require human judgment or signoff? For example, AuditBoard AI puts practitioners in control at nearly every AI touchpoint, requiring user review/acceptance and providing the ability to edit AI recommendations. But it’s up to you to decide what your team should or must do at each touchpoint.
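To illustrate what such a checkpoint might look like, here is a hypothetical sketch (not any vendor’s actual workflow) in which an AI draft cannot be saved until a named reviewer explicitly accepts, edits, or rejects it.

```python
# Hypothetical human-in-the-loop gate: an AI draft is never saved until a
# named reviewer explicitly accepts, edits, or rejects it.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    ai_generated: bool = True
    approved_by: str | None = None  # set only after human review

def review(draft: Draft, reviewer: str) -> Draft:
    """Force a human decision point before an AI draft enters the workpapers."""
    print(f"AI draft:\n{draft.text}\n")
    decision = input(f"{reviewer}: [a]ccept / [e]dit / [r]eject? ").strip().lower()
    if decision == "e":
        draft.text = input("Revised text: ")
    elif decision == "r":
        raise ValueError("Draft rejected; nothing is saved.")
    draft.approved_by = reviewer
    return draft
```

Where you place these decision points, and whether review is encouraged or mandatory, is exactly the guidance your team needs from you.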
3. Limited Access to Quality Data
Because AI tools are most effective when they can rely upon large pools of high-quality data, many CAEs worry their data isn’t up to par. The good news is that you probably don’t need as much quality data as you think for your AI tools to deliver value.
While the foundation models behind tools like ChatGPT are trained on vast, internet-scale data, data volume matters far less at the application layer. Again, AuditBoard AI is illustrative: While AuditBoard generates and curates large quantities of audit, risk, and compliance data to build its AI application, individual end users can harness the full power of the AI by providing only their own judgment and a few sample “sources of truth” that illustrate the tone, content, and quality of the outputs they seek.
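For the technically curious, the “sources of truth” approach is essentially few-shot prompting: a handful of approved examples steer the tone and structure of the output. The sketch below shows the idea; the examples, model name, and API usage are illustrative assumptions, not AuditBoard’s implementation.

```python
# Illustrative few-shot prompting sketch: a few approved "sources of truth"
# steer the model toward your team's tone and structure. Not a vendor workflow.
from openai import OpenAI

client = OpenAI()

SOURCES_OF_TRUTH = [
    "Risk: Unauthorized changes to vendor master data could enable fraudulent payments.",
    "Risk: Incomplete user-access reviews may allow terminated employees to retain access.",
]

def draft_in_house_style(topic: str) -> str:
    """Ask the model to match the tone and structure of approved examples."""
    examples = "\n".join(SOURCES_OF_TRUTH)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Write one risk statement matching the tone and structure of these examples:\n{examples}"},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    )
    return response.choices[0].message.content

print(draft_in_house_style("backup failures for critical financial systems"))
```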
Internal Audit Can Still Change the Narrative
With nearly 50 years in the profession, I’ve witnessed the pattern firsthand: Internal audit has historically been “late to the party” when technology-driven transformation occurs. With desktops and laptops in the ’80s, the internet in the ’90s, cloud in the early 2000s, and cybersecurity demands in the 2010s, we’ve been apprehensive about the risks and slow to adopt. Internal auditors are the enterprise beacons who warn others of impending risks, and our risk-averse nature often makes us uncomfortable taking actions that create risk. Being an early adopter of any technology carries risk, and our default is to wait for others to clear the trail. We can’t afford to wait with AI.
Our industries will keep changing, but if you don’t move, life passes you by. If your individual value comes primarily from the processes and tools you use, and you don’t act to embrace and capitalize on the changes happening around you, you’ll be left behind. Fortunately, internal audit can still change the narrative by writing a new ending to the old story: responsible, timely AI adoption. All it takes to get started is committing your time.
Richard Chambers, CIA, CRMA, CFE, CGAP, is the CEO of Richard F. Chambers & Associates, a global advisory firm for internal audit professionals, and also serves as Senior Advisor, Risk and Audit at AuditBoard. Previously, he served for over a decade as the president and CEO of The Institute of Internal Auditors (IIA). Connect with Richard on LinkedIn.