
October 1, 2025 • 13 min read
AI risk management: Frameworks, threats, and controls

Celene Ennia
In the next three years, 92% of companies plan to increase their AI investments, according to McKinsey.
It makes sense — companies now have access to natural language processing (NLP) frameworks that help intelligent solutions better understand humans, as well as generative AI (GenAI) toolsets capable of creating net-new content (and getting better as they do it). The result? The sky’s the limit for AI technologies.
As noted by the Harvard Law School Forum on Corporate Governance, 60% of S&P 500 companies believe the use of AI presents material risks. Data from research firm Gartner, meanwhile, predicts that 40% of emerging agentic AI projects will be canceled by 2027, in part due to inadequate risk controls.
This creates an operational paradox for organizations. While AI investment is necessary to keep pace in changing markets, the same investment carries inherent risk. Spend too little, and you may fall behind. Let AI loose, and you may spend more time putting out fires than finding new insights.
The solution? A structured approach to AI risk management processes. In this piece, we’ll define the basics of AI risk management, explore some common types of AI risk, examine current and emerging risk management frameworks, and offer practical advice to implement effective AI practices.
What is AI risk management?
AI risk management is the practice of identifying the potential harms AI can cause, mitigating their impact, and creating strategies to ensure they do not recur.
Managing AI risks requires a combination of technologies, best practices, and operational principles. Risk mitigation approaches are more effective when deployed as clearly articulated frameworks, rather than as ad hoc responses to problems as they emerge.
Consider a data analysis tool that leverages AI to improve the speed and accuracy of outputs. This tool carries the risk of bias: depending on the data used, it may reach conclusions that appear accurate but are missing critical context. Simply flagging a single result as incorrect, however, does not mitigate this risk. Instead, companies need to adopt a framework, such as the NIST AI RMF or ISO 42001, to identify the source of the bias and prevent future problems.
The unique risks of AI
AI poses unique risks because it evolves over time, creating new vulnerabilities in turn.
For example, a first-generation chatbot programmed to answer a set of structured questions can only fail within these limits. If customers ask a question that isn’t in the chatbot’s database, it will simply not answer or will escalate the interaction to a human agent.
GenAI-powered digital agents, however, learn continually. Each interaction provides more data and context, in turn improving AI outputs. The challenge? This learning isn’t linear. Tools may draw the wrong conclusions from data or improperly weight human responses when generating likely outcomes. The result? Identified risks can quickly change or evolve, requiring a systematic approach to find and eliminate them.
Why it matters in 2025
AI spending is on the rise, with investments expected to increase 60% year-over-year in 2025. According to UBS, that growth points to an AI market worth $480 billion in 2026, and it shows no signs of slowing down.
Add in the continued development of established tools such as ChatGPT and newer entrants such as DeepSeek, and it’s clear companies must navigate an AI market that is going nowhere but up, with risk headed in the same direction.
Types of AI risks to monitor
There are several types of AI risks that companies should monitor. They include privacy and data governance, bias and ethics in models, and operational and security risks. Here’s a look at each in more detail:
Privacy and data governance
Data privacy plays a key role in regulatory compliance. If business or customer data is unintentionally or intentionally disclosed, this puts companies at risk of non-compliance with privacy legislation, such as the California Consumer Privacy Act (CCPA), the EU’s General Data Protection Regulation (GDPR), or Brazil’s General Data Protection Law (LGPD). AI can unintentionally expose this data if it is not given clear guidelines about what can be disclosed and what must be kept secure.
Lack of data governance also creates a potential AI risk. Data governance defines when, how, and why data can be used. Healthcare information is a good example. Under rules such as the Health Insurance Portability and Accountability Act (HIPAA), anonymized data can be used to discover and track demographic trends. AI excels at this type of function, but if data governance rules are lacking, tools may use personal data without the express consent of patients.
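To make that governance point concrete, here is a minimal Python sketch of stripping direct identifiers from records before they reach an AI analysis tool. The field names and identifier list are hypothetical, and this is not a substitute for a full HIPAA de-identification process.

```python
# Illustrative only: strip direct identifiers before records reach an AI analysis tool.
# Field names and the identifier list are hypothetical, not a HIPAA Safe Harbor implementation.

DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone", "address", "medical_record_number"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patients = [
    {"name": "J. Doe", "ssn": "000-00-0000", "age": 47, "zip3": "921", "diagnosis": "E11.9"},
]

# Only de-identified fields (age, zip3, diagnosis) are passed downstream for trend analysis.
analysis_ready = [deidentify(p) for p in patients]
print(analysis_ready)
```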
Bias and ethics in models
Bias is also a common concern with AI tools. This bias can be inherent or learned. For example, if teams train AI models on limited data sets that share similar characteristics, outputs are inherently biased.
It’s also possible for AI to learn biased behavior when given access to larger data sets. This isn’t predictable; instead, teams must regularly check AI outputs and compare them against known benchmarks. If bias is discovered, AI models should be retrained.
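As a rough illustration of what a periodic bias check can look like, the sketch below compares favorable-outcome rates across groups using a simple disparity ratio. The data, threshold, and function names are hypothetical; real-world testing would rely on established fairness tooling and statistically meaningful samples.

```python
# Minimal sketch of a periodic bias check: compare favorable-outcome rates across groups.
# The 0.8 threshold echoes the common "four-fifths rule" heuristic; data is hypothetical.
from collections import defaultdict

def positive_rate(decisions):
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparity_ratios(records):
    """records: list of (group_label, model_decision) where decision 1 = favorable."""
    by_group = defaultdict(list)
    for group, decision in records:
        by_group[group].append(decision)
    rates = {g: positive_rate(d) for g, d in by_group.items()}
    baseline = max(rates.values())
    return {g: r / baseline for g, r in rates.items()}, rates

ratios, rates = disparity_ratios([("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)])
for group, ratio in ratios.items():
    flag = "REVIEW / consider retraining" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f}, ratio={ratio:.2f} -> {flag}")
```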
Ethics, meanwhile, often comes down to model transparency. If companies can’t see what’s happening inside AI tools, it becomes difficult to trust outputs, especially if they do not align with other data analysis. Biased or unethical tools can lead to risks such as unfair hiring or termination processes, which may put companies at risk of violating state laws such as those from California, Colorado, Utah, or Texas.
The Colorado Consumer Protections for Artificial Intelligence Act comes into effect on February 1, 2026, and it requires developers of AI tools to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination in high-risk systems.”
Operational and security risks
AI also creates potential operational and security risks.
Consider the integration of AI tools. The broader the integration, the more effective the tool. If, however, companies do not define privacy and access policies, this can lead to operational risk. A well-meaning staff member might use AI to map employee engagement trends. Because the tool has no defined rules around access, the employee may inadvertently gain visibility into personal, financial, or medical records of other staff.
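A minimal sketch of that kind of guardrail appears below: a policy check filters out record categories the requesting role is not entitled to expose to the AI tool. The roles, categories, and function names are assumptions for illustration only.

```python
# Illustrative access-policy check applied before data is handed to an AI tool.
# Role names and record categories are hypothetical.
ROLE_PERMISSIONS = {
    "hr_analyst": {"engagement", "tenure"},
    "payroll":    {"engagement", "tenure", "compensation"},
}

def fetch_for_ai(user_role: str, requested_categories: set) -> set:
    """Return only the record categories the role is permitted to expose to the AI tool."""
    allowed = ROLE_PERMISSIONS.get(user_role, set())
    denied = requested_categories - allowed
    if denied:
        print(f"Blocked categories for role '{user_role}': {sorted(denied)}")
    return requested_categories & allowed

# A well-meaning analyst asks the AI to map engagement trends, but also requests medical data.
print(fetch_for_ai("hr_analyst", {"engagement", "medical", "compensation"}))
```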
Outputs are also a source of risk. This often occurs when tools are not given clear rulesets around security. Users may be able to circumvent AI controls with jailbreak-style prompts, such as asking the tool to pretend or role-play, in turn gaining access to sensitive information or critical corporate data.
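One common mitigation is to screen model outputs against rules before they reach the user. The sketch below is a simplified example using regular expressions; the patterns are assumptions, and production guardrails typically layer policy models, retrieval restrictions, and human review on top of simple pattern checks.

```python
# Simplified output screen: withhold responses that appear to contain sensitive data.
# Patterns are illustrative; real guardrails combine multiple controls, not regex alone.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-like pattern
    re.compile(r"\b\d{13,16}\b"),                     # long digit runs (possible card numbers)
    re.compile(r"(?i)internal use only|confidential"),
]

def screen_output(response: str) -> str:
    if any(p.search(response) for p in SENSITIVE_PATTERNS):
        return "Response withheld: it may contain sensitive or restricted information."
    return response

print(screen_output("Employee 123-45-6789 earns..."))          # withheld
print(screen_output("Q3 engagement rose 4 points year over year."))  # allowed
```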
AI risk management frameworks
Frameworks are the most effective way to manage AI risk. While there is no single universal model for AI risk assessment and management, several frameworks have been published or are still maturing.
NIST AI RMF
The National Institute of Standards and Technology (NIST) AI risk management framework (AI RMF) is designed to help companies and stakeholders understand the risks, impacts, and potential harms of AI. It also provides recommendations for responsible AI uses, management, metrics, risk tolerance, and risk prioritization methodologies.
Use AuditBoard’s NIST AI RMF checklist to help streamline framework adoption.
ISO 42001 and EU AI Act
The ISO 42001 standard defines consistent requirements for establishing, implementing, maintaining, and continually improving an AI management system while minimizing risk.
The EU AI Act, meanwhile, defines requirements for businesses that develop or use AI and operate in the European Union. It takes a risk-based approach to secure, ethical, and interpretable AI, and its obligations are being phased in over several years following the Act’s entry into force in 2024.
How to implement controls for AI risk
Implementing effective controls for AI risk requires a combination of people and proof.
People: Create cross-functional ownership
Many business functions have a single owner. For example, IT security is typically handled by infosec teams, with occasional assistance from development or operations.
This approach won’t work to create trustworthy AI because the scope and scale of AI tools make it impossible for a single team to take on all responsibility. Instead, companies should take a cross-functional approach that includes infosec, legal, engineering, and risk teams.
Proof: Use auditing and evidence collection
Confidence in AI outputs is unwarranted unless companies can show proof. This proof is critical to verify output accuracy and provide evidence of transparency.
Regular auditing and consistent evidence collection provide this proof. Audits demonstrate that tools are working as intended and are providing expected outputs. Evidence collection provides explainability by showing what data was used in decision-making, how AI applied this data, and what conclusions were reached. While it’s possible to handle these tasks manually, businesses are often better served using IT compliance automation solutions to reduce errors and streamline key processes.
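As a rough sketch of what evidence collection can look like in practice, the snippet below appends each AI-assisted decision to a log with its inputs, model version, and outcome. The field names and file-based storage are hypothetical; most teams would use a compliance automation platform rather than hand-rolled logging.

```python
# Illustrative append-only evidence log for AI-assisted decisions.
# Field names and file-based storage are hypothetical, shown only to make "evidence collection" concrete.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(path: str, model_version: str, inputs: dict, output: str, reviewer: str = ""):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    "ai_decision_log.jsonl",
    model_version="credit-scoring-v3.2",
    inputs={"applicant_id": "A-1001", "features_used": ["income", "tenure", "utilization"]},
    output="approved",
    reviewer="analyst_042",
)
```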
How AuditBoard can help manage emerging tech risk
With AuditBoard, businesses are better equipped to manage emerging tech risk and stay ahead of evolving AI governance trends.
Flexible risk register and control mapping
AI risks and security controls aren’t static. Instead, they evolve alongside intelligent solutions. AuditBoard enables teams to tailor risk registers and control mappings for emerging technologies like AI, with configurable workflows that support model governance, bias tracking, and documentation of AI lifecycle risks.
Integrated reporting
Enterprise risk management software plays a key role in AI risk reduction. With AuditBoard, you can extend the impact of ERM software with integrated reporting. Connect your AI risk posture to the enterprise risk dashboard for complete visibility into current operations and potential pitfalls.
Ready to manage AI risk with confidence?
AI adoption and AI risks go hand in hand. The more data companies collect, analyze, and apply using AI, the greater the potential risk.
This is because the AI lifecycle is self-evolving. Without consistent and comprehensive management strategies, companies put themselves at risk of using inaccurate data, applying biased results, or failing to meet regulatory requirements.
The best advice for managing AI risk? Start today. Implement incremental measures to improve protection, then work with trusted partners to build an AI risk reduction strategy that aligns with business goals, meets current regulatory requirements, and sets up your organization for long-term success.
Build confidence in your AI strategy with connected risk management workflows. See how AuditBoard supports AI governance.
About the author

Celene Ennia is a Product Marketing Manager of ITRC Solutions at AuditBoard with a robust background in IT audit and compliance. Previously at A-LIGN, she held a range of IT audit roles and oversaw a team to conduct audits for SOC 2, SOC 1, HIPAA, and other key standards, and now applies her expertise to develop data-driven, customer-focused marketing strategies at AuditBoard.