Finding the Right Owner for AI Risk

As artificial intelligence (AI) reshapes industries at an unprecedented pace, managing its risks has become a top priority for forward-thinking organizations. While cybersecurity and regulatory compliance have long-standing frameworks, AI presents unique challenges that demand fresh approaches to risk management.

In this blog, I’ll explore the evolving landscape of AI risk, which extends beyond technical concerns to include regulatory, reputational, workforce, and operational impacts. Drawing on recent insights into current industry approaches to AI risk ownership, we’ll discuss how a collaborative governance model, such as an AI Risk Committee, can bring together essential perspectives from security, HR, legal, ethics, and risk management.

The Importance of Risk Ownership

Every successful company understands the importance of clear, accountable risk management. Almost all other risk types have clearly defined owners: cybersecurity risk is overseen by the CISO, regulatory compliance by the General Counsel, and financial risk by the CFO. Yet one emerging risk lacks a clear owner: the risk associated with artificial intelligence (AI).

The challenge with AI is that its use is becoming ubiquitous, impacting multiple dimensions within an organization—from data privacy and intellectual property to ethical concerns and operational reliability. With AI embedded across various functions, the risks become complex and multifaceted, requiring a cohesive strategy and ownership to address them comprehensively.

As with other risks, organizations seeking to harness AI for insights, automation, and innovation while managing its risks effectively need a defined leader and clear accountability, both of which are still absent in many organizations.

The Current State of AI Risk Ownership

A recent KPMG survey reveals a stark divide in how organizations approach AI risk ownership. Equal shares of respondents (24% each) named the CEO and the Chief Information Security Officer (CISO) as the primary owner of AI risk in their organization. This split highlights a fundamental challenge in AI governance: no clear consensus exists on where this crucial responsibility should lie.

The division between CEO and CISO ownership reflects a broader truth: AI risk isn’t just a technical challenge or a strategic one—it’s both and more. While the CISO often emerges as a strong candidate due to their technical expertise, and the CEO’s involvement signals the strategic importance of AI, the reality is that AI risk spans multiple domains, requiring a coordinated approach across various organizational functions.

The KPMG survey also revealed that most leaders expect mandatory AI audits within the next few years as regulatory bodies like the UK’s Department for Science, Innovation and Technology (DSIT) work to establish frameworks to govern AI deployment. However, most organizations lack the internal expertise to conduct these audits, with only 19% saying they have the necessary skills today. This suggests that AI adoption and maturity are outpacing companies’ ability to assess and manage the associated risks effectively.

The Multi-Dimensional Nature of AI Risk

The challenge of managing AI risk extends far beyond technical considerations. It’s a complex web of interconnected challenges that touches every aspect of an organization. Understanding these dimensions is crucial for establishing effective governance and ownership.

The Human Element: HR Risks

AI transformation is already fundamentally reshaping the workplace. HR teams find themselves balancing workforce preparation with employee protection: they must address skills gaps, manage potential displacement, and ensure fair, transparent use of AI in HR processes like hiring and performance assessment. Yet KPMG’s report reveals a troubling disconnect: only 44% of C-suite executives are involved in creating AI-related processes, and just 33% in developing governance to reduce AI risk. This gap between leadership awareness and active engagement poses significant risks to workforce management.

The Technical Front: Cybersecurity Risks

AI systems present unique security challenges that transcend traditional frameworks. Models can be poisoned, extracted, or fooled in subtle yet devastating ways. The AI infrastructure—from training data to data pipelines to deployment platforms—creates new attack surfaces. Third-party dependencies in the AI supply chain further complicate security, requiring organizations to trust not just their own systems but an entire ecosystem of AI components and services.
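
To make the supply-chain point concrete, the sketch below shows one common mitigation: verifying a third-party model artifact against a pinned checksum before loading it. This is a minimal illustration in Python; the file path and digest are hypothetical placeholders, and real deployments would typically rely on signed manifests or an internal model registry rather than a hard-coded map.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digests for third-party model artifacts.
# The value shown is the SHA-256 of an empty file, used as a placeholder;
# in practice these would come from a signed manifest or internal registry.
PINNED_SHA256 = {
    "models/sentiment-v2.onnx":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(path: str) -> bool:
    """Return True only if the artifact exists and its SHA-256 digest
    matches the pinned value for that path."""
    expected = PINNED_SHA256.get(path)
    if expected is None:
        return False
    try:
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    except OSError:  # missing or unreadable file fails closed
        return False
    return digest == expected

if __name__ == "__main__":
    artifact = "models/sentiment-v2.onnx"
    if not verify_artifact(artifact):
        print(f"Refusing to load unverified model artifact: {artifact}")
```

Failing closed on any mismatch or missing file keeps a substituted or tampered model from silently entering the pipeline, which is the essence of the supply-chain concern above.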

The Moral Compass: Ethical Risks

As AI systems increasingly impact human lives, organizations face critical ethical challenges. Algorithmic bias, model transparency, and privacy concerns aren’t just technical issues but moral imperatives. The “black box” nature of many AI systems clashes with our obligation to explain decisions that affect people’s lives. Organizations must balance AI’s powerful capabilities with fundamental human rights to privacy and autonomy. Those rolling out AI capabilities without appropriate consent to use people’s data take on substantial risk while compliance frameworks play catch-up.

The Regulatory Landscape: Legal Risks

The legal and regulatory landscape surrounding AI is increasingly complex and dynamic. AI regulation expanded markedly worldwide during 2024, driven by concerns over data privacy, accountability, and ethical AI practices. For example, in the U.S., the White House issued an Executive Order on AI in October 2023, establishing a federal oversight framework emphasizing transparency, fairness, and accountability. This move, alongside the earlier Blueprint for an AI Bill of Rights, reflects the growing importance of ethical standards and protections for AI users and aims to align AI innovations with societal values.

In Europe, the EU’s AI Act, which entered into force in August 2024, enforces a risk-based approach to AI oversight, requiring high-risk AI applications to meet stringent transparency and accountability standards. This framework could set a precedent for global AI regulation, with other countries likely to follow similar models.

Keeping it All Running: Operational Risks

Operational risks emerge as organizations integrate AI systems into their core processes. Teams must ensure AI system reliability, maintain business continuity, and develop robust quality control measures. This requires careful monitoring of system performance and the development of fallback procedures when AI systems fail or produce unexpected results.
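
One way to operationalize the fallback point is to wrap every model call in a guard that validates the output and degrades gracefully. The Python sketch below is a minimal illustration under assumed conditions: `call_model` is a hypothetical stand-in for a real inference client, and the label set and fallback policy are invented for the example.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_ops")

def call_model(text: str) -> str:
    """Hypothetical stand-in for a real inference call; replace with your client.
    Raises here to demonstrate the fallback path."""
    raise TimeoutError("model endpoint unavailable")

def classify_with_fallback(text: str) -> str:
    """Call the model, validate its output against an allowed label set,
    and fall back to a deterministic default on any failure."""
    allowed = {"approve", "review", "reject"}
    try:
        label = call_model(text)
        if label in allowed:
            return label
        logger.warning("Unexpected label %r; falling back", label)
    except Exception:
        logger.exception("Model call failed; falling back")
    # Deterministic fallback: route anything uncertain to human review.
    return "review"

if __name__ == "__main__":
    print(classify_with_fallback("refund request #1234"))  # prints "review"
```

Logging every fallback also gives operations teams the performance signal this section calls for: a spike in fallbacks is an early warning that an AI system is degrading.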

A Collaborative Approach to AI Risk Management

This multifaceted nature of AI risk explains why finding the right owner is so challenging. No single department or executive can effectively manage all these dimensions alone. Organizations need a coordinated approach that combines expertise from across the business, supported by clear governance structures and strong executive leadership. The solution isn’t about finding a single owner – it’s about creating an integrated framework that addresses each risk dimension while maintaining clear accountability and communication channels.

A recommended approach is a collaborative AI Risk Committee involving leaders from security, legal, HR, ethics, and risk management functions. This committee allows for specialized focus on technical security, regulatory compliance, workforce impacts, and ethical considerations, all coordinated under a unified risk framework.

Within this collaborative framework, roles need clear definitions while maintaining flexibility. The CISO oversees technical security and data protection, legal teams manage regulatory compliance, HR addresses workforce concerns, and ethics officers ensure responsible AI practices. This coordinated approach allows the organization’s risk officer to oversee the broader strategy, ensuring all dimensions are adequately addressed.
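
One way to make these role definitions concrete while keeping them flexible is to encode the ownership map as data rather than burying it in policy documents. The Python sketch below is purely illustrative: the dimension names, role titles, and scopes are assumptions chosen to mirror the committee structure described above, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskOwner:
    role: str
    scope: str

# Hypothetical mapping of AI risk dimensions to accountable roles,
# mirroring the committee structure described above.
AI_RISK_OWNERS = {
    "technical_security":    RiskOwner("CISO", "model and data security, attack surface"),
    "regulatory_compliance": RiskOwner("General Counsel", "AI regulations and audits"),
    "workforce_impact":      RiskOwner("CHRO", "skills gaps, displacement, fair AI use in HR"),
    "ethics":                RiskOwner("Ethics Officer", "bias, transparency, consent"),
    "overall_strategy":      RiskOwner("CRO", "unified risk framework and escalation"),
}

def escalate(dimension: str) -> RiskOwner:
    """Route a flagged AI risk to its accountable owner; anything unmapped
    defaults to the risk officer, preserving overall accountability."""
    return AI_RISK_OWNERS.get(dimension, AI_RISK_OWNERS["overall_strategy"])

if __name__ == "__main__":
    print(escalate("ethics"))         # RiskOwner(role='Ethics Officer', ...)
    print(escalate("new_dimension"))  # defaults to the CRO
```

Keeping the map in one place makes the default-to-the-risk-officer rule explicit, which is exactly the mix of clear accountability and flexibility the committee model aims for.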

Conclusion

The future of AI risk management isn’t about finding a single owner; it’s about creating an integrated approach that reflects AI’s multi-dimensional impact. While the CISO’s role is vital, and the CEO is ultimately accountable for all enterprise risks, managing AI risk requires a cross-functional strategy supported by clear governance structures and effective communication.

Organizations adopting this collaborative model will be better positioned to harness AI’s benefits while minimizing risks. As AI continues to evolve, this comprehensive approach to risk management will be critical to long-term success.

Claude Mandy

Claude Mandy is the Chief Evangelist for Data Security at Symmetry Systems, where he focuses on innovation and industry engagement while leading efforts to evolve how modern data security is viewed and used in the industry. Before joining Symmetry, he spent three years at Gartner as a senior director. He brings firsthand experience in building information security, risk management, and privacy advisory programs with global scope.