June 17, 2025 9 min read

Beyond ChatGPT: How agentic AI is poised to revolutionize internal audit operations

Daniil Karp

An AuditBoard flash poll of 3,087 audit professionals in January 2025 revealed that 1 in 3 auditors named “the inability to leverage AI to drive greater internal audit efficiency and productivity” the most significant strategic risk facing their internal audit function over the next five years.

As internal auditors face growing pressure to use artificial intelligence to create efficiencies in their audit and SOX programs, AI education has become a top priority for internal audit teams. In fact, 32% of auditors in the same poll named “improving and leveraging AI literacy” their highest priority for 2025.

Enter agentic AI, a branch of AI that can autonomously perform tasks and make decisions to achieve specific goals, rather than simply responding to individual prompts. For auditors, AI agents represent a more promising evolution of artificial intelligence because, with proper oversight, these systems can dramatically reduce manual workloads by automating processes throughout the entire audit lifecycle. It follows that a separate AuditBoard flash poll of 2,574 auditors in January 2025 found 64% of audit teams were exploring or considering AI agent adoption in the next 12 months.

In this article, we will examine the potential for agentic AI in transforming internal audit programs, discuss its associated risks, and dive into the importance of establishing proper governance and guardrails around its usage.

Benefits of agentic AI and use cases for internal audit

A January 2025 webinar hosted by AuditBoard and Greenskies Analytics, Up Next for Internal Audit: AI Agents, found that 50% of attendees identified controls testing and fieldwork as the best use of AI agents for internal audit. Some examples of the work multiple AI agents can perform in conjunction to automate a controls test include:

  1. Retrieving data, then scanning and cleaning it for mismatches and duplicates
  2. Reviewing initial findings, identifying anomalies, and assessing the risk level
  3. Compiling a draft audit report based on the test performed
Example AI agent workflow in internal audit
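The three-step workflow above can be sketched in code. The following is a minimal, hypothetical illustration (not any vendor's actual implementation): each "agent" is a plain function handling one stage, and the agents run in sequence, each consuming the previous agent's output. All record fields and risk rules are invented for the example.

```python
def retrieval_agent(records):
    """Step 1: scan and clean raw control evidence, dropping
    duplicates and records with missing results."""
    seen, cleaned = set(), []
    for r in records:
        key = (r["control_id"], r["tester"])
        if key in seen or r["result"] is None:
            continue
        seen.add(key)
        cleaned.append(r)
    return cleaned

def review_agent(records):
    """Step 2: identify anomalies (failed tests) and assign a
    simple illustrative risk level."""
    findings = []
    for r in records:
        if r["result"] == "fail":
            risk = "high" if r["control_id"].startswith("SOX") else "medium"
            findings.append({**r, "risk": risk})
    return findings

def reporting_agent(findings):
    """Step 3: compile a draft report summarizing the findings."""
    lines = [f"Draft report: {len(findings)} exception(s) identified."]
    for f in findings:
        lines.append(f"- {f['control_id']}: {f['result']} (risk: {f['risk']})")
    return "\n".join(lines)

# The agents chain together: retrieve/clean -> review -> report.
raw = [
    {"control_id": "SOX-101", "tester": "A", "result": "fail"},
    {"control_id": "SOX-101", "tester": "A", "result": "fail"},  # duplicate
    {"control_id": "ITGC-7", "tester": "B", "result": "pass"},
    {"control_id": "ITGC-9", "tester": "C", "result": None},     # incomplete
]
report = reporting_agent(review_agent(retrieval_agent(raw)))
print(report)
```

In a real agentic system, each stage would be backed by an LLM or tool-using agent with human review gates rather than hard-coded rules; the sketch only shows the shape of the handoff between stages.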

As the workflow above illustrates, an agentic AI system is powerful because its AI agents can not only execute complex audit tasks but also make contextual decisions and adapt their processes in real time.

Risks and importance of strong governance

While exploring the transformative potential of agentic AI is an exciting goal, audit teams must give equal consideration to the challenges of responsible adoption. As a system designed for autonomous, goal-directed behavior, the agentic AI model introduces new questions about data security, oversight, and output validation.

As such, the following are industry-recognized resources to help design internal governance and due diligence processes around agentic AI usage:

The EU AI Act: The EU AI Act applies to all parties involved in the development, distribution, and usage of AI. Its requirements include a model inventory, risk classification, and risk assessment, and are aimed at protecting user safety and fundamental rights. Penalties for non-compliance range from 1% to 7% of the business’s annual turnover. Because the EU typically leads in setting global data privacy and policy standards (like GDPR), it is reasonable to expect that the EU AI Act may influence U.S. AI regulations in the future.

NIST AI Framework: As directed by the National Artificial Intelligence Initiative Act, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework (AI RMF) in 2023 to guide the responsible development and deployment of AI systems through four functions: Govern, Map, Measure, and Manage. Since then, NIST has released several companion publications that offer further guidance.

ISO/IEC 42001: Published in 2023, ISO 42001 is the world’s first standard on AI management systems that addresses the unique challenges of AI, including ethics, transparency, and continuous learning. Per ISO, the standard “sets out a structured way to manage risks and opportunities associated with AI, balancing innovation with governance.”

Agentic AI vendor selection

As GRC and audit management vendors increasingly embed AI into their platforms, these integrated solutions may offer a more secure approach to agentic AI adoption, benefiting from the rigorous security standards these vendors maintain to stay competitive in the enterprise market.

Moreover, when integrated with an existing audit management solution that houses internal controls and risk data, agentic AI could act as a proactive, intelligent layer that enhances — not replaces — the system’s functionality. An audit management or GRC solution with agentic AI functionality can help audit teams stay ahead of risk, reduce manual work, and focus on higher-value analysis and decision-making.

Selecting an AI or AI-powered vendor that supports your internal audit initiatives while also aligning with your organization’s security policies can be tricky. The following is a checklist of best practices to consider when vetting potential AI vendors:

  • Vendor due diligence on data privacy and security (to evaluate data handling, security, and development practices)
  • Vendor due diligence on solution development (to evaluate how the vendor designs its solutions and its relationship with the provider of the underlying model)
  • Vendor due diligence on data usage & rights (to evaluate impact of inclusion in training and the ownership of inputs and outputs)
  • Solution/model due diligence (to evaluate specifics of the model, risk of hallucination, safeguards designed or required, if outputs are explicitly accepted by the end user)
  • Mitigation proposals that enable safe AI options
  • User training on appropriate AI usage
  • Familiarity with existing AI frameworks and standards such as ISO/IEC 42001 and the NIST AI RMF

Building sustainable AI-powered audit functions

While agentic AI offers significant potential to optimize audit processes and drive operational efficiencies, auditors must carefully balance these benefits against inherent risks. Success requires thoughtful implementation with robust governance frameworks, appropriate oversight mechanisms, and a clear understanding of both the technology's capabilities and limitations. Organizations that take a measured, risk-aware approach to agentic AI adoption will be best positioned to harness its transformative potential while maintaining audit quality and stakeholder trust.

About the author

Daniil Karp is a SaaS business professional with over a decade of experience helping organizations bring revolutionary new practices and technologies into the fields of IT security and compliance, HR/recruiting, and collaborative work management. Prior to joining AuditBoard, Daniil worked in go-to-market roles at companies including Asana and 6sense.

