
July 30, 2025 | 9 min read

Benchmarking AI governance: 4 key survey findings

Right now, AI adoption is outpacing risk management and oversight efforts. In a Panterra and AuditBoard survey of over 400 GRC professionals, 82% say they’re using AI across functions, yet only 25% report that they have a fully implemented AI governance program. Part of the gap stems from a patchwork of regulations that has lagged behind technological innovation, but part of it is cultural.

Regardless of the reasons, organizations will need to go beyond policies and compliance checklists to truly embed AI governance into their daily operations.

Take a look at some of the survey findings below, then download your copy of the full report, From blueprint to reality: Execute effective AI governance in a volatile landscape, to get strategies for successful, sustainable governance.

AI risks and regulations

In our survey, 86% of respondents said their organization is aware of AI regulations that are coming or already in force. Many are familiar with major frameworks such as the NIST AI Risk Management Framework, the EU AI Act, and national guidelines like Canada’s Directive on Automated Decision-Making. This awareness suggests that governance is on the radar, but awareness does not equal preparedness.

Concern runs high: over 80% of respondents say their organizations are “very” or “extremely” concerned about AI risks. Yet implementation is still lagging. AI systems are being deployed faster than oversight structures can keep up, leading to ad hoc governance, uneven accountability, and increased exposure to legal, ethical, and operational failures.

This disconnect is emerging as a global challenge, shaped in part by uneven regulatory landscapes. In the European Union, the passage of the AI Act marks a significant shift, introducing binding obligations based on risk tiers and requiring documentation, oversight, and enforcement mechanisms. In contrast, the United States has emphasized voluntary frameworks like NIST’s, with sector-specific oversight evolving at a slower pace. The UK and Canada have taken a principles-based approach, prioritizing transparency and fairness through guidelines rather than laws.

Amid this regulatory patchwork, many organizations are gravitating toward the NIST AI RMF as a de facto standard. Though the framework is voluntary, 49% of surveyed organizations are aligning with it, not because they’re required to, but because they see strategic upside. The NIST framework helps companies prepare for likely regulation, signals responsibility to customers and investors, and provides internal clarity around roles and processes. For many, it functions as both a risk shield and a reputational asset.
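As one small illustration of the kind of role clarity the framework encourages, the sketch below maps the four NIST AI RMF core functions (Govern, Map, Measure, Manage) to accountable owners and flags any function left unassigned. The owner titles and the gap check are illustrative assumptions, not part of the framework itself or the survey.

```python
# Minimal sketch: map the four NIST AI RMF core functions to accountable owners
# and surface any gaps. Owner assignments here are hypothetical examples.
NIST_AI_RMF_FUNCTIONS = ["Govern", "Map", "Measure", "Manage"]

function_owners = {
    "Govern": "Chief Risk Officer",
    "Map": "AI Product Lead",
    "Measure": "Internal Audit",
    # "Manage" is intentionally left unassigned to show how a gap is surfaced.
}

def unassigned_functions(owners: dict) -> list:
    """Return the RMF core functions that have no accountable owner yet."""
    return [fn for fn in NIST_AI_RMF_FUNCTIONS if not owners.get(fn)]

gaps = unassigned_functions(function_owners)
if gaps:
    print(f"No accountable owner assigned for: {', '.join(gaps)}")
```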

Missing building blocks

Despite high awareness and growing concern, most organizations remain early in their AI governance journeys. Their efforts are focused on policy drafting, principles development, and internal messaging around responsible AI use. These steps are important, but insufficient on their own. Without integration into business workflows, technical environments, and operational routines, even the best-written policies will remain theoretical.

The gap becomes more apparent when we look at specific governance components. While organizations are investing in complex efforts like AI usage monitoring (45%), risk assessments (44%), and third-party model evaluations (40%), far fewer have implemented foundational practices. Only 28% have usage logging, 25% maintain model documentation, and just 23% enforce access controls for AI systems. Many are trying to solve the most difficult parts of governance first without a clear foundation to build on.
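To make those foundational practices concrete, here is a minimal sketch of what basic AI usage logging can look like: a decorator that writes an append-only record of who called which model, when, and for what purpose. The model name, function, and log destination are illustrative assumptions, not recommendations from the survey or any particular GRC platform.

```python
import json
import logging
from datetime import datetime, timezone
from functools import wraps

# Write usage records to a simple append-only log file (hypothetical destination).
logging.basicConfig(filename="ai_usage.log", level=logging.INFO, format="%(message)s")

def log_ai_usage(model_name: str, purpose: str):
    """Decorator that records who called which AI system, when, and for what purpose."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, user: str = "unknown", **kwargs):
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "user": user,
                "model": model_name,
                "purpose": purpose,
                "function": func.__name__,
            }
            # One JSON line per call gives auditors a usage trail to review.
            logging.info(json.dumps(record))
            return func(*args, user=user, **kwargs)
        return wrapper
    return decorator

@log_ai_usage(model_name="example-llm", purpose="contract summarization")
def summarize_contract(text: str, user: str = "unknown") -> str:
    # Placeholder for the actual model call.
    return text[:200]

summarize_contract("Sample contract text ...", user="jane.doe@example.com")
```

A log like this is deliberately unsophisticated; the point is that even a lightweight record of usage gives later controls, such as access reviews and model documentation, something concrete to build on.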

Cultural factors pose barriers

If governance is lagging, it’s not because organizations lack awareness or even intent. It’s because they’re confronting barriers that are far more cultural and structural than technical.

When asked to identify their top barriers to implementing AI governance, respondents cited a lack of clear ownership (44%), insufficient internal expertise (39%), and limited resources (34%). Fewer than 15% said the main problem was a lack of tools. These top-ranked barriers outpaced any technical limitation, confirming that the most stubborn governance gaps are cultural, not technical.

This distinction matters. Most organizations are not struggling to find dashboards or compliance software; they’re struggling to determine who’s accountable, how teams should coordinate, and what workflows need to change. The issue is less about capability and more about clarity.

This is why many governance efforts stall even after policies are drafted. Policy tells the organization what should happen. Culture and structure determine whether it happens. And until organizations address the cultural gaps — unclear roles, lack of collaboration, uneven accountability — the policy-practice gap will persist.

Who owns AI governance?

One of the most persistent challenges in AI governance is not whether it’s on the executive radar — it is — but rather how responsibility for it is distributed across the organization. While nearly all organizations in our survey (96%) report some level of board or executive engagement with AI governance, this top-down interest has not translated into clear operational accountability.

That gap creates a fundamental misalignment between where AI is being built and where it should be governed. Technical leaders often focus on innovation, performance, and scalability. Compliance, ethics, and risk mitigation may be part of the conversation, but they’re rarely at the center of governance design or enforcement. And without clear accountability for integrating governance across business lines, policies often remain abstract or siloed.

The result is a governance structure that appears coherent on paper, backed by policies, executive sponsorship, and formal committees, but often lacks the operational clarity to be effective in practice. Oversight becomes fragmented not just in terms of role ownership, but also in how risks are surfaced, prioritized, and addressed across the AI lifecycle.

Without clearly defined roles, formalized handoffs, and coordinated processes between technical and risk functions, organizations are left with what might be described as "distributed responsibility without distributed accountability." And in a field as fast-moving and high-stakes as AI, that’s a serious structural vulnerability.

Working towards success

So far, organizations have largely measured the presence of AI governance in policies such as codes of conduct, guiding principles, or ethical use agreements. These documents are good starting points, but they don’t show whether, or how, those commitments are actually being carried out. Leading organizations are baking measurement and monitoring into the operational ecosystem itself. To benchmark your team’s progress and get steps you can take to bring policy in line with practice, download your copy of From blueprint to reality: Execute effective AI governance in a volatile landscape today.
