AI-Powered Audit, Risk, and Compliance: Keys to Implementing the Latest Technology

Anton Dam

March 3, 2025

Generative AI is revolutionizing both society and the workplace. When rolled out correctly, it can streamline, accelerate, and automate core functions like risk detection, audit reporting, and evidence collection, so that professionals in these roles can focus on the higher-level tasks that draw on their industry education and experience. However, these same professionals are also the ones tasked with understanding the technology's risks.

We recently spoke with enterprise audit and security leaders about their best practices for implementing AI and their strategies for addressing security and data privacy risks associated with this technology.

Create A Cross-Functional Committee 

The best place to start your business’s AI journey is with an internal committee that includes governance, risk management, compliance, security, and legal operations. 

The audit team plays a crucial role in this group, evaluating every piece of AI technology with a risk assessment. Once they generate a risk score, the auditors can convene with the rest of the committee to decide whether to greenlight the technology. The group's mission is to educate the rest of the business on AI, including the C-suite, and foster a collaborative environment. Creating a platform for the company to ask questions about AI, such as an email alias that goes to the committee, further promotes this inclusive culture.
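
As a purely illustrative sketch of what such a risk assessment could look like in practice, the snippet below scores a tool across a few hypothetical categories and maps the result to a starting recommendation for the committee. The categories, weights, and thresholds are assumptions, not a prescribed standard.

```python
# Illustrative only: a hypothetical rubric for scoring AI tools before the
# cross-functional committee reviews them. Categories, weights, and
# thresholds are assumptions, not a prescribed standard.
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    name: str
    data_sensitivity: int    # 1 (public data only) .. 5 (regulated/PII)
    vendor_maturity: int     # 1 (established, audited vendor) .. 5 (unknown)
    output_criticality: int  # 1 (drafts reviewed by humans) .. 5 (automated decisions)
    access_scope: int        # 1 (single team) .. 5 (company-wide)

    def risk_score(self) -> float:
        # Weighted average; weights reflect an assumed committee preference
        # for weighting data-privacy risk over rollout scope.
        weights = {
            "data_sensitivity": 0.4,
            "vendor_maturity": 0.2,
            "output_criticality": 0.25,
            "access_scope": 0.15,
        }
        return (
            self.data_sensitivity * weights["data_sensitivity"]
            + self.vendor_maturity * weights["vendor_maturity"]
            + self.output_criticality * weights["output_criticality"]
            + self.access_scope * weights["access_scope"]
        )

def committee_recommendation(assessment: AIToolAssessment) -> str:
    """Translate a score into a starting point for the committee's discussion."""
    score = assessment.risk_score()
    if score < 2.0:
        return "greenlight"
    if score < 3.5:
        return "approve with guardrails"
    return "escalate for full review"

if __name__ == "__main__":
    tool = AIToolAssessment("meeting summarizer", data_sensitivity=3,
                            vendor_maturity=2, output_criticality=2, access_scope=4)
    print(tool.name, round(tool.risk_score(), 2), committee_recommendation(tool))
```

In practice, the committee would tune the categories and cut-offs to its own risk appetite and document the rationale alongside the score.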

Promote A Culture of Transparency 

Lean into what your company is already doing to determine how AI can help the business run better. Getting this right requires transparency across the organization, so you understand how different teams might already be using AI today. The goal is to set parameters around this usage via the committee, then measure usage and set goals against those parameters.

For instance, does your engineering team have the transparency and context to understand what you're trying to achieve as an audit, risk, or compliance professional?

“I think we have the opportunity with AI to be more transparent and tie those things from these end-to-end, massive, entity-level controls, all the way down to the choices that individual operators are making on a daily basis,” said Terry O’Daniel, a CISO and strategic advisor. “So, I think by embracing AI, we have the opportunity to give guidance in the form of test suites, test cases, and desired outcomes.”
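
To make the idea of "guidance in the form of test suites, test cases, and desired outcomes" concrete, here is a minimal, hypothetical sketch: a desired outcome (no obvious PII leaves the workflow) expressed as a pytest-style check. The generate_summary() stand-in and the regex screens are assumptions for illustration, not a real control.

```python
# Hypothetical example of "guidance as a test case": a desired outcome
# (no obvious PII in AI-assisted output) written as a pytest-style check.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
US_SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_pii(text: str) -> bool:
    """Very rough screen for the kinds of identifiers the guideline cares about."""
    return bool(EMAIL.search(text) or US_SSN.search(text))

def generate_summary(prompt: str) -> str:
    # Stand-in for whatever AI-assisted step a team actually owns (assumption).
    return "Summary: 12 vendor accounts reviewed; 2 flagged for follow-up."

def test_ai_summary_has_no_pii():
    summary = generate_summary("Q3 vendor access review notes")
    assert not contains_pii(summary), "AI output contains what looks like PII"
```

The value is less in the specific checks than in handing engineering teams a machine-readable statement of the control objective.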

Test A Lot of Use Cases, and KPI the Ones That Win

Approved use cases for GenAI must tie back to achieving overall business goals and avoiding business risks. But that doesn't mean you should limit yourself. Ask a good cross-section of the organization which tasks GenAI could apply to and why.

“You throw it all down, you brainstorm, you risk-rank, you figure out those things, and then you can figure out where to go from there,” said Melissa Pici, Senior IT Audit Manager at Syniverse. “What are the things we can implement quickly? What are the things that are going to take a little bit more work? And what are the things that are going to need a project?” 

When you tie use cases to your business objectives, things will start running more efficiently—not for the sake of compliance or audit, but because you’re looking at the right goals and KPIs that drive value for the entire company.
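
One lightweight way to operationalize the brainstorm-then-risk-rank approach Pici describes is to capture each idea with a rough value, risk, and effort estimate and sort them into her three buckets. The fields, thresholds, and example use cases below are hypothetical.

```python
# Illustrative only: turning a brainstormed list of GenAI use cases into the
# buckets described above (implement quickly, more work, full project).
# Fields and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_value: int  # 1..5, how directly it supports a business goal/KPI
    risk: int            # 1..5, from the committee's risk ranking
    effort_weeks: int    # rough implementation estimate

def bucket(use_case: UseCase) -> str:
    if use_case.effort_weeks <= 2:
        return "implement quickly"
    if use_case.effort_weeks <= 8:
        return "needs more work"
    return "needs a project"

def prioritize(use_cases: list[UseCase]) -> list[UseCase]:
    # Highest value and lowest risk first.
    return sorted(use_cases, key=lambda u: (-u.business_value, u.risk, u.effort_weeks))

if __name__ == "__main__":
    ideas = [
        UseCase("draft audit report narratives", business_value=4, risk=2, effort_weeks=1),
        UseCase("automated evidence collection", business_value=5, risk=3, effort_weeks=12),
        UseCase("summarize policy updates", business_value=3, risk=1, effort_weeks=2),
    ]
    for u in prioritize(ideas):
        print(f"{u.name}: {bucket(u)}")
```

Keeping the criteria explicit also makes it easy to revisit the ranking as goals and KPIs change.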

Create Guidelines Internally 

There are currently no universal standards for GenAI set forth by industry bodies. So audit, risk, and compliance teams' best bet is to create guidelines for their internal customers once they understand how those customers are using the technology. This could mean requiring the corporate, logged-in version of an AI system or its API, which protects employees and corporate data without creating artificial boundaries that everyone will simply route around.
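
As one hypothetical way to make such guidelines concrete and checkable, the sketch below encodes an allow-list of approved tools and the highest data classification each may receive. The tool names, classifications, and limits are illustrative assumptions, not recommended values.

```python
# Illustrative only: internal AI usage guidelines expressed as a simple policy
# check. Tool names, data classifications, and the allow-list are hypothetical.
APPROVED_TOOLS = {
    # tool -> highest data classification it may receive
    "corporate-chatgpt (logged-in enterprise tenant)": "internal",
    "vendor-api (corporate key, logging enabled)": "confidential",
}

CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

def is_allowed(tool: str, data_classification: str) -> bool:
    """Return True if the guideline permits sending this class of data to the tool."""
    if tool not in APPROVED_TOOLS:
        return False  # unapproved tools get routed to the committee for review
    ceiling = APPROVED_TOOLS[tool]
    return (CLASSIFICATION_ORDER.index(data_classification)
            <= CLASSIFICATION_ORDER.index(ceiling))

if __name__ == "__main__":
    print(is_allowed("corporate-chatgpt (logged-in enterprise tenant)", "confidential"))  # False
    print(is_allowed("vendor-api (corporate key, logging enabled)", "confidential"))      # True
```

Publishing guidance in a form like this lets teams self-serve answers instead of routing every question through the committee's email alias.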

Keep Human Judgment in the Loop

Human insight is still required to analyze the output of GenAI systems. Just as you wouldn't send a company-wide email that ChatGPT helped you write without checking its spelling and grammar, you can't present AI's findings to the C-suite or board of directors as is. Part of the job of audit, risk, and compliance professionals is to help the business decide where to allocate budget and resources. AI can provide lots of data, but data is not intelligence, and intelligence is not insight. So don't lose sight of the need for human intelligence to validate and make sense of AI-generated materials.

Make Sure to Skill Up

GenAI's infusion into many aspects of business means everyone must upgrade their skills. For instance, those outside technology roles can take data science courses to understand how data structures work. Think back to the Industrial Age, when work shifted from people doing jobs by hand to machines doing them, and people learned to operate the machines instead. AI is similar: you learn to work with the machines rather than doing every task yourself. Lifelong learning will become even more critical as we mature into this new AI Age.

Anton Dam

Anton Dam is the VP of Engineering for Data, AI/ML at AuditBoard. In his role, Anton is responsible for the development and deployment of artificial intelligence and machine learning technologies to enhance audit, risk, and compliance workflows. His experience includes developing enterprise AI products at LinkedIn and Workday, as well as at startups such as Restless Bandit and Skupos.
