

October 27, 2025 · 8 min read

AI governance and the future of GRC


Daniil Karp

AI is everywhere, and its adoption is outpacing organizations' ability to govern this new technology.

Within three years of OpenAI's public launch of ChatGPT, generative AI has proliferated across commercial and business tools. A new poll from the Associated Press found that 60% of Americans, and 74% of those under 30, use AI to find information, and roughly 4 in 10 say they use AI for work tasks at least sometimes. Whether you think this adoption curve is slow or fast, IT security, IT risk, and audit and assurance professionals are responsible for establishing the proper guardrails to use this innovative technology effectively.

By working together, these teams can determine what proper ownership and governance look like across the company.

A quick AI primer

To understand GenAI, you must first understand how the technology builds on each of the main AI subcategories. Here is a quick definition of key terms:

  • AI refers to the theory and methods used to build machines that can execute tasks previously thought to be possible only for humans.
  • Machine learning trains computers on large amounts of data to make predictions and classifications without explicit programming.
  • Deep learning mimics human intelligence by using artificial neural networks to teach computers to perform complex tasks.
  • GenAI generates new text, audio, images, video, or code based on content it has been pretrained on.
  • Agentic AI is a newer class of AI system that can independently set goals, plan, make decisions, and perform complex tasks with less human prompting and intervention (see the sketch after this list).
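The difference between GenAI and agentic AI is easiest to see in code. The sketch below is purely illustrative: generate() is a mocked stand-in for a real model call, not any vendor's API. A GenAI call turns one prompt into one output, while an agentic loop plans, acts, and checks its own progress toward a goal.

```python
# Illustrative only: generate() is a mocked stand-in for a real model call.
def generate(prompt: str) -> str:
    return f"[model output for: {prompt}]"

# GenAI: one prompt in, one generated artifact out.
print(generate("Summarize our Q3 access-review findings."))

# Agentic AI: the system sets sub-goals, acts, and evaluates its own progress
# with less human prompting between steps.
def agent(goal: str, max_steps: int = 3) -> list:
    history = []
    for _ in range(max_steps):
        plan = generate(f"Goal: {goal}. History so far: {history}. Next action?")
        result = generate(f"Execute: {plan}")  # in practice, a tool or API call
        history.append(result)
        check = generate(f"Has the goal '{goal}' been met? Answer yes or no.")
        if "yes" in check.lower():  # mocked check; real agents parse structured output
            break
    return history

print(agent("Draft an AI acceptable-use policy outline"))
```

The governance implication is the loop itself: an agentic system takes multiple actions between human checkpoints, so oversight has to cover its intermediate steps, not just its final output.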

Recognizing AI in today’s enterprise ecosystem

AI is here to stay—and it's already in your organization in at least three ways:

  1. By design, with official permission to build in-house AI tools or purchase off-the-shelf AI solutions guided by your internal AI, security, operational, and compliance policies.
  2. Through third-party vendors and Nth-party suppliers who use AI in their products or in their own tech stacks.
  3. Via shadow AI, in which employees use public, non-company-sanctioned AI tools to assist with work-related tasks.

Governance and risk professionals are responsible for creating official, efficient channels for technology and use-case review. These channels should empower end users to procure and use AI in ways that adhere to enterprise standards, while also enabling them to expand the scope and capabilities of their departments.
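One practical way to get visibility into shadow AI specifically is to review egress or proxy logs for traffic to public AI services. The sketch below is a minimal, hypothetical example: the domain list is illustrative, and the log format (timestamp, user, domain) is an assumed export format rather than any real product schema.

```python
import csv
from collections import Counter
from io import StringIO

# Illustrative list of public AI endpoints; extend to match your environment.
PUBLIC_AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

# Hypothetical proxy-log export in "timestamp,user,domain" form.
sample_log = StringIO(
    "2025-10-27T09:12:01,alice,chat.openai.com\n"
    "2025-10-27T09:15:42,bob,internal.example.com\n"
    "2025-10-27T10:03:10,alice,claude.ai\n"
)

hits = Counter()
for timestamp, user, domain in csv.reader(sample_log):
    if domain in PUBLIC_AI_DOMAINS:
        hits[(user, domain)] += 1

# The output feeds a review queue, not an automatic block: the goal is to
# route users toward sanctioned tools and official review channels.
for (user, domain), count in hits.items():
    print(f"{user} accessed {domain} {count} time(s) -- flag for AI-use review")
```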

Ownership, accountability, and the role of governance stakeholders

Effective AI governance is a collaborative effort. A management-level committee can bring together everyone involved in developing and executing AI across the organization.

The board sets the tone from the top, emphasizing that AI is a crucial area requiring a strategy. Management's role is to define and execute that plan, monitor related risks, and work closely with designated risk leaders and legal teams to ensure compliance with existing and new AI regulations. The finance team, in coordination with the IT department, defines KPIs that track where AI can make the biggest impact and drive the greatest efficiencies.

From an IT risk perspective, the goal is to operationalize AI safely and securely, with transparency capabilities that enable understanding of the processes and tools in use, as well as checks on both their inputs and outputs. From an audit perspective, this approach enables audit teams to demonstrate how security and comfort are achieved in these new processes, and to identify where human intervention is necessary within AI models.

Ethical AI use and acceptable use guidelines

If you haven’t started working on your AI governance policies, it’s OK—but don’t waste any more time. What all organizations should avoid is creating a single AI policy to govern them all.

For example, you might have a secure development lifecycle or engineering policy that outlines how code is written, which secure-coding frameworks apply, how code is tested, and who must approve it before it is merged into the main codebase. Review every policy you have and integrate AI governance into those policies where you can.
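As one illustration of folding AI governance into an existing engineering policy, a pre-merge check can require contributors to disclose whether AI tooling was used, so reviewers know when to apply extra scrutiny. The "AI-Assisted:" field and the check itself are hypothetical, a sketch of the idea rather than an established standard.

```python
import re
import sys

# Hypothetical policy: every pull request description must state whether
# AI tooling was used, e.g. "AI-Assisted: yes (Copilot, reviewed by author)".
DISCLOSURE = re.compile(r"^AI-Assisted:\s*(yes|no)\b", re.IGNORECASE | re.MULTILINE)

def check_pr_description(description: str) -> bool:
    """Return True if the PR description carries the required AI disclosure."""
    return bool(DISCLOSURE.search(description))

if __name__ == "__main__":
    body = sys.stdin.read()  # e.g. the PR description piped in from CI
    if not check_pr_description(body):
        print("Missing 'AI-Assisted:' disclosure -- see the secure development policy.")
        sys.exit(1)
    print("AI-use disclosure present.")
```

In a CI pipeline, a non-zero exit from a check like this would block the merge until the disclosure is added, turning a policy statement into an enforceable control.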

Building governance on firm foundations

Governance, risk, and compliance teams don’t have to build AI governance from scratch. They can review current policies and frameworks to identify areas of overlap and then repurpose them for AI. Here are a few frameworks to use as a starting point:

  • The NIST AI Risk Management Framework comprises four key components: govern, map, measure, and manage (see the sketch after this list).
  • ISO/IEC 42001 is the world's first standard for AI management systems, guiding the trustworthy, responsible, and ethical adoption of AI.
  • ISACA’s Artificial Intelligence Audit Toolkit is a control library designed to assist auditors in verifying that AI systems meet the highest standards of ethical responsibility.
  • OWASP GenAI Security Project is a global community-driven and expert-led initiative to create freely available open source guidance and resources for understanding and mitigating security and safety concerns for GenAI applications and adoption.
  • The Cloud Security Alliance’s AI Safety Initiative aims to establish trusted best practices, accelerate the responsible adoption of AI, complement AI assurance programs, and address critical ethical issues and their impact on society.
  • MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible, living knowledge base of adversary tactics and techniques against AI-enabled systems, based on real-world attack observations and realistic demonstrations from AI red teams and security groups.
  • The Deloitte AI Governance Roadmap is a recently developed framework that provides an end-to-end view of corporate governance, defining and delineating board and management activities across seven key areas.
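To make a framework like the NIST AI RMF actionable, many teams translate its functions into a simple control register with owners and evidence. The sketch below is a hypothetical starting point; the control names, owners, and evidence entries are placeholders, not an official NIST mapping.

```python
from dataclasses import dataclass

@dataclass
class Control:
    function: str   # NIST AI RMF function: govern, map, measure, or manage
    name: str       # placeholder control names, not an official mapping
    owner: str
    evidence: str

register = [
    Control("govern",  "AI acceptable-use policy approved",    "CISO",         "Policy doc + board minutes"),
    Control("map",     "Inventory of AI systems and vendors",  "IT risk",      "System/vendor register"),
    Control("measure", "Model output quality and bias checks", "Data science", "Evaluation reports"),
    Control("manage",  "Incident response covers AI misuse",   "Security ops", "Updated IR runbook"),
]

for c in register:
    print(f"[{c.function:>7}] {c.name} -- owner: {c.owner}; evidence: {c.evidence}")
```

A register like this gives audit teams a concrete artifact to test against, and it can grow as other frameworks from the list above are layered in.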

Proper governance enables businesses to adopt and use emerging technologies such as AI without taking on excessive risk. Organizations that lean in, start governing, and understand how people use AI today stand to gain the most from it.

About the author


Daniil Karp is a SaaS business professional with over a decade of experience helping organizations bring revolutionary new practices and technologies into the fields of IT security and compliance, HR/recruiting, and collaborative work management. Prior to joining AuditBoard, Daniil worked in go-to-market roles at companies including Asana and 6sense.
