June 23, 2025 | 8 min read

Demystify AI audits: A practical guide to compliance


Diana Kelley

We often refer to complex machine learning (ML) models as “black boxes.” Not even the data scientists who train these models can explain all of the algorithmic decisions underpinning them. While this lack of visibility is a reality, it doesn’t necessarily prevent you from knowing and auditing the entire ML and AI lifecycle.

InfoSec, compliance, and audit professionals can assess their AI models and the risks they may pose through third-party solutions that bring visibility and auditability to an ML-aware audit program. Moreover, practitioners can often do this using some of the strategies they already employ in their broader audit program.

How AI impacts auditing and compliance

If you're in the compliance and audit field, you're used to scanning for vulnerabilities and tracking SLAs. However, some tools used for AI and ML workflows have not traditionally been part of IT vulnerability management. Combining these two sides is critical.

AI innovation has been driven in part through open source repositories of AI and ML models. It is an accepted security practice that any time an organization downloads a file or application from an external source it should be scanned. The same holds true for ML models. Although models are a new kind of file, they are vulnerable to Trojan horse attacks, with malicious operators or code embedded within them. This malicious code can be executed when you deserialize and open the model in your workflow, which can lead to widespread malicious activity within your organization.
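For instance, many models are still distributed as Python pickle files, which can execute arbitrary code the moment they are deserialized. Below is a minimal sketch of a static pre-load check that uses only the standard library; the module denylist is illustrative, and a purpose-built model scanner would go much further.

```python
# Minimal sketch: statically inspect a pickled model file for suspicious
# imports *before* deserializing it. The denylist below is illustrative only.
import pickletools
import sys

# Hypothetical denylist: modules a benign model file has no reason to import.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "socket", "shutil"}

def scan_pickle(path: str) -> list[str]:
    """Return findings without ever executing the pickle."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name == "GLOBAL" and isinstance(arg, str):
            module = arg.split()[0]
            if module.split(".")[0] in SUSPICIOUS_MODULES:
                findings.append(f"offset {pos}: imports {arg!r}")
        elif opcode.name == "STACK_GLOBAL":
            # Module/name are resolved from the stack; flag for manual review.
            findings.append(f"offset {pos}: dynamic import (STACK_GLOBAL), review manually")
    return findings

if __name__ == "__main__":
    for finding in scan_pickle(sys.argv[1]):
        print(finding)
```

A check like this only catches the obvious cases; treat it as one layer alongside dedicated model-scanning tooling and supply chain controls.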

AI supply chain management, training users on how to use these systems properly, and scanning model files before they’re used are imperative to securing and auditing AI workflows in the future.

Common compliance use cases for AI

Staying updated and proactive about AI and ML risks shouldn’t deter your organization from exploring the everyday use cases for these technologies within compliance. InfoSec, compliance, and audit teams can use both predictive and generative AI to automate and optimize their processes for the good of their entire organization.

Predictive AI and compliance

Predictive AI is traditional machine learning. Certain sectors have used this technology for decades to find patterns, generate classifications, and predict outcomes. For compliance, predictive AI lends itself well to three categories (a brief classification sketch follows the list):

  1. Classifications
    1. Process customer reviews to identify issues with product safety
    2. Detect fraud and malware
  2. Trends
    1. Determine if teams are more compliant after training
    2. Assess which users are most likely to fail logins/use insecure passwords
  3. Modeling
    1. Forecast compliance risks based on past signals and activities
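
As a sketch of the first category, here is a toy text classifier that flags customer reviews that may describe a product safety issue, assuming scikit-learn is available; the reviews and labels are invented placeholders, and a real program would train on a curated, reviewed dataset.

```python
# Minimal sketch of the "classification" use case: flag customer reviews
# that may signal a product safety issue. Training data here is toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The charger overheated and melted the cable",   # safety issue
    "Battery swelled after two weeks of use",         # safety issue
    "Great color, arrived on time",                   # no issue
    "Packaging was nice and setup was easy",          # no issue
]
labels = [1, 1, 0, 0]  # 1 = potential safety issue

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

new_review = ["Device sparked when I plugged it in"]
print(model.predict_proba(new_review))  # probability the review signals a safety issue
```

The same pattern extends to the trend and modeling categories by swapping in time-indexed compliance signals (training completion, login failures, audit findings) as features.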

Generative AI and compliance

Generative AI (GenAI) produces outputs such as text, images, and music. In essence, GenAI enables you to create new content based on the corpus of knowledge the system has been trained on. The three general areas in which compliance teams have used GenAI thus far include (a short summarization sketch follows the list):

  1. Summarization
    1. Populate Standardized Information Gathering (SIG) questionnaires from ingested compliance docs (SOC 2 reports, penetration tests) to summarize vendor reviews
  2. Brainstorming/extra “eyes”
    1. Support threat modeling brainstorming
    2. Generate policy templates
  3. Natural language processing (NLP) agents
    1. Create natural language “policy bots” for company use
    2. Customize interactive security awareness training
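
Here is a minimal sketch of the summarization use case, assuming the OpenAI Python SDK, an API key in the environment, and a hypothetical local file holding the vendor document; the model name and prompt wording are illustrative.

```python
# Minimal sketch: ask an LLM to summarize a vendor's compliance document
# for a SIG-style review. Model name, prompt, and input file are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

soc2_excerpt = open("vendor_soc2_report.txt").read()  # hypothetical input file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model
    messages=[
        {"role": "system",
         "content": "You summarize compliance documents for vendor risk reviews."},
        {"role": "user",
         "content": "Summarize the key controls, exceptions, and scope limitations "
                    "in this SOC 2 report:\n\n" + soc2_excerpt},
    ],
)
print(response.choices[0].message.content)
```

As the questions later in this article highlight, confirm how the provider handles your data before sending vendor documents or other sensitive material to an external service.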

Assessing AI and ML vulnerabilities

These use cases are likely only the tip of the iceberg when it comes to how compliance teams can leverage AI going forward. Organizations must stay current on this technology to explore both how it can be applied and how it can fail. When looking at risks within the system and what to audit and confirm, focus on the two types of AI failures that can occur: intentional and unintentional.

The first type, intentional failure, occurs when an attacker deliberately targets the system. Examples include:

  • Perturbation and universal perturbation attacks
  • Poisoning attacks
  • Reprogramming neural nets
  • 3D adversarial objects
  • Supply chain attacks
  • Model inversion
  • Membership inference and model stealing
  • Backdoors and existing exploits

The second type, unintentional failure, relates to how the system was trained or operates. Examples include:

  • Reward hacking
  • Side effects
  • Distributional shifts
  • Incomplete testing
  • Over/under-fitting
  • Data bias

Look closely at the governance around your company's AI use cases and ensure you've considered both intentional and unintentional risks. In both categories, organizations need people, processes, and technology to combat AI and ML vulnerabilities.

As an InfoSec, compliance, or audit professional, you must:

  • Ensure you’ve trained people
  • Create new policies and examine data management and privacy
  • Integrate your technology for continuous compliance, including new tools, sensors, and awareness controls, so you can continue understanding your security posture

How to start leveraging AI securely

AI is becoming deeply enmeshed in how we do IT work, and that's a good thing. You can make AI part of your organization's overall governance by infusing AI policies and procedures into existing software and asset management procedures. For instance, you can incorporate specific AI/ML security activities into the DevSecOps infinity loop. These new activities will ultimately lead to compliant, risk-managed programs.
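
As a sketch of what such an activity might look like in practice, here is a simple release gate that blocks promotion of a model artifact until it has passed the controls your program already requires; the control names and the gate itself are illustrative, not a prescribed framework.

```python
# Minimal sketch of an AI/ML gate added to an existing release pipeline.
# Control names are illustrative placeholders for your organization's checks.
def ml_release_gate(artifact: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons) for promoting a model artifact."""
    required = {
        "model_file_scanned": "model file not scanned for embedded code",
        "training_data_reviewed": "training data source/bias review missing",
        "privacy_signoff": "privacy review not signed off",
        "rollback_plan": "no rollback plan documented",
    }
    reasons = [msg for key, msg in required.items() if not artifact.get(key)]
    return (len(reasons) == 0, reasons)

approved, reasons = ml_release_gate({
    "model_file_scanned": True,
    "training_data_reviewed": True,
    "privacy_signoff": False,
    "rollback_plan": True,
})
print(approved, reasons)  # False, ['privacy review not signed off']
```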

Because there are many ways to deploy AI, how you deploy it will change your risk model and how you evaluate risk. You'll need to build out checklists and use cases to assess how to use AI securely.

Return on Security has a publicly available shared responsibility model that examines where responsibility for AI-specific considerations (like AI ethics and safety, bias, and data privacy) lies. Whether you use the free version of ChatGPT, the paid version, a private SaaS deployment, or host a model in the cloud yourself, your security and risk model changes based on architectural considerations. Some questions to consider (captured in a simple checklist sketch after the list):

  • Does that architecture impact privacy?
  • Will your data be used to train the model?
  • If there's incorrect output, who will be responsible for it?
  • What happens if an employee downloads an open-source model without scanning it first, and it does something malicious?
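
One way to operationalize these questions is to turn them into a reusable per-deployment checklist. The sketch below uses a small Python data structure; the fields and example entries are illustrative, not a standard framework.

```python
# Minimal sketch of turning the questions above into a reusable checklist.
from dataclasses import dataclass, field

@dataclass
class AIDeploymentReview:
    system_name: str
    deployment_model: str          # e.g., "public SaaS", "private SaaS", "self-hosted"
    data_used_for_training: bool   # will our prompts/data train the provider's model?
    privacy_impact_assessed: bool
    output_accountability: str     # who is responsible for incorrect output?
    model_files_scanned: bool      # are downloaded models scanned before use?
    open_findings: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        issues = []
        if self.data_used_for_training:
            issues.append("Provider may train on our data; confirm opt-out or contract terms")
        if not self.privacy_impact_assessed:
            issues.append("No privacy impact assessment on record")
        if not self.model_files_scanned:
            issues.append("Model files are not scanned before deserialization")
        return issues + self.open_findings

review = AIDeploymentReview(
    system_name="Policy bot pilot",
    deployment_model="public SaaS",
    data_used_for_training=True,
    privacy_impact_assessed=False,
    output_accountability="Unassigned",
    model_files_scanned=True,
)
print(review.gaps())
```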

AI and ML can and should be part of a comprehensive compliance program. As you test and adopt this new technology, do the work upfront to incorporate AI/ML auditing into the rest of your audit realm.

About the author


Diana Kelley is the Chief Information Security Officer (CISO) for Protect AI. She also serves on the boards of WiCyS, The Executive Women’s Forum (EWF), InfoSec World, CyberFuture Foundation, TechTarget Security Editorial, and DevNet AI/ML. Diana was Cybersecurity Field CTO for Microsoft, Global Executive Security Advisor at IBM Security, GM at Symantec, VP at Burton Group (now Gartner), a Manager at KPMG, CTO and co-founder of SecurityCurve, and Chief vCISO at SaltCybersecurity.
