Safeguard the Future of AI: The Core Functions of the NIST AI RMF

Emily Villanueva

March 7, 2025

TL;DR: The NIST AI Risk Management Framework (NIST AI RMF) is a voluntary guideline designed to help organizations identify, assess, and manage risks associated with artificial intelligence (AI). The framework is built around four core functions, Map, Measure, Manage, and Govern, that help organizations develop trustworthy, transparent, and ethical AI systems, ensuring responsible AI adoption across various industries.

AI development is moving at a rapid pace, as its gains in efficiency and operations can be applied to a wide variety of administrative tasks. This velocity, as described in NIST AI RMF Tackles Unprecedented AI Risk Velocity, increases the probability of risks occurring while also generating new threats that never existed before. The NIST AI RMF serves as a vital tool for organizations aiming to harness AI responsibly, balancing technological advancement with ethical considerations.

Overview of AI and Risk Management

Artificial intelligence (AI) refers to machines or systems that mimic human intelligence to perform tasks and can iteratively improve themselves based on the information they collect. AI systems are increasingly integral in sectors like healthcare, where they’re used for predictive diagnostics, and cybersecurity, where they help in threat detection and prevention. 

Below are examples of the growing importance of AI systems in their respective industries: 

  • Healthcare: AI aids in early disease detection and personalized treatment plans.
  • Cybersecurity: AI enhances threat detection capabilities, protecting sensitive data.
  • Finance: AI algorithms analyze market trends for better investment decisions.

Enter the NIST AI Risk Management Framework (NIST AI RMF), which provides organizations with guidelines to address risks, improve trust, and ensure responsible AI development and deployment. This framework helps identify potential vulnerabilities and implement strategies to mitigate them.

How an organization prioritizes and addresses risks varies by industry and by its goals. An effective Enterprise Risk Management approach prioritizes goals globally, whether or not they involve AI, and requires stakeholders from all teams to confer on the greatest risks. This global view surfaces the downstream impact of an AI system on its stakeholders, along with the other threats those stakeholders may encounter. The NIST AI RMF can then be layered on to align the design of the AI system with the goals of the organization and its stakeholders.

Having an AI Center of Excellence committee can help the organization develop AI principles to ensure responsible AI development is cascaded throughout the organization, just as a charter might do.

Implementing the NIST AI RMF is crucial for organizations to leverage AI technologies while effectively managing associated risks, thereby fostering trust and innovation.

NIST AI RMF: Purpose and Background

The National Institute of Standards and Technology (NIST) is a U.S. federal agency under the Department of Commerce that develops technology, metrics, and standards to drive innovation and economic competitiveness. NIST is renowned for creating frameworks that help industries manage complex technological challenges, including the NIST Cybersecurity Framework (CSF). NIST’s mission aligns with the Department of Commerce’s mission to oversee economic growth, technological innovation, and job creation. NIST’s focus on standardization and technology aligns directly with this mission, particularly in areas like cybersecurity, manufacturing, and emerging technologies.

AI RMF as a Voluntary Guideline to Manage Risks

The AI RMF is a voluntary guideline, developed at the direction of Congress under the National Artificial Intelligence Initiative Act of 2020, aimed at promoting trustworthy AI development practices. The guidelines were created using collaborative input from private and public sector researchers, including industry experts, academics, and recent college graduates. From the public sector, several federal agencies provided input, and organizations such as the Organisation for Economic Co-operation and Development (OECD) contributed to harmonizing the framework with global AI governance standards. From the private sector, leading firms in AI development and deployment, including IBM and Amazon Web Services, offered practical perspectives on implementing AI risk management practices. Further, groups like the Business Software Alliance (BSA) and the U.S. Chamber of Commerce provided feedback to ensure the framework’s applicability across various industries.

Priority research and additional guidance to enhance the AI RMF will be captured in an associated AI Risk Management Framework Roadmap, to which NIST and the broader community can contribute continuously. NIST hosts public workshops to gather input and discuss updates to the AI RMF. These are open to stakeholders from all sectors, and participation allows you to share feedback and insights directly. If you would like to contribute, keep an eye on NIST’s AI RMF webpage for announcements about upcoming workshops.

Emphasis on Safe and Trustworthy AI Systems

Public trust will hinge on justified assurance that government use of AI will respect privacy, civil liberties, and civil rights. The government must earn that trust and ensure that its use of AI tools is effective, legitimate, and lawful. This imperative calls for developing AI tools to enhance oversight and auditing, increasing public transparency about AI use, and building AI systems that advance the goals of privacy preservation and fairness. The AI RMF places a strong emphasis on creating safe and trustworthy AI systems. It advocates for:

  • Transparency: Clear understanding of AI processes.
  • Accountability: Responsibility for AI outcomes.
  • Ethical Considerations: Aligning AI practices with societal values.

The NIST AI RMF guides organizations in developing AI systems that are not only innovative but also safe, ethical, and trustworthy, reinforcing public confidence in AI technologies.

What are the Core Functions of the NIST AI RMF?

The RMF’s core themes of bias, fairness, and transparency run throughout the functions below. Per Samta Kapoor, EY’s Responsible AI and AI Energy Leader, “it’s important to underline why you should be thinking about responsible AI, bias, and fairness from the design stage. Relying on regulatory intervention after the fact isn’t enough. For instance, companies can face severe reputational loss if they don’t have responsible AI principles in place. These principles must be validated by the C-suite, but also by the data scientists who are developing them.” From each function, the organization’s executive board can take away core values or principles to implement in their AI Governance Center of Excellence. Tone from the top is, again, key to establishing a governance function that is respected and followed.

Map Function

The Map Function involves identifying risks across the AI lifecycle. Organizations assess potential risks to stakeholders, including AI actors and end-users. This function helps in understanding:

  • Data Quality Issues: Biases or inaccuracies in training data (a simple check is sketched after this list).
  • Algorithmic Risks: Potential for unintended behavior or outcomes.
  • Operational Risks: Failures in deployment environments.
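
Mapping data-quality risk can begin with simple automated checks before any model is trained. Below is a minimal sketch, assuming a tabular training set represented as a list of records with a hypothetical group field for a demographic attribute; the field names and the minimum-share threshold are illustrative assumptions, not framework requirements.

```python
from collections import Counter

def check_training_data(records, group_key="group", min_share=0.10):
    """Flag missing values and under-represented groups in a training set.

    `records` is a list of dicts; `group_key` and `min_share` are
    illustrative assumptions, not parameters defined by the AI RMF.
    """
    findings = []

    # Missing values erode data quality and can encode silent bias.
    for field in records[0]:
        missing = sum(1 for r in records if r.get(field) is None)
        if missing:
            findings.append(f"{field}: {missing} missing value(s)")

    # Groups far below an expected share may be under-represented.
    counts = Counter(r.get(group_key) for r in records)
    total = sum(counts.values())
    for group, n in counts.items():
        if n / total < min_share:
            findings.append(f"group '{group}' is only {n / total:.1%} of the data")

    return findings

sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "B", "label": None},
]
for finding in check_training_data(sample, min_share=0.30):
    print(finding)
```

Checks like these do not replace a full bias assessment, but they make data-quality findings concrete and repeatable.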

The Map function consists of five categories that help organizations consider impacts on their AI systems:

  • Map 1: Context is established and understood. Organizations should define the AI system’s intended purpose, beneficial uses, applicable laws, user expectations, and potential impacts on individuals and society. This involves collaborating with diverse stakeholders to delineate acceptable deployment boundaries and manage risks.
  • Map 2: Categorization of the AI system is performed. AI systems should be categorized based on their capabilities, intended use, and potential risks. This includes assessing the system’s complexity, autonomy level, and the criticality of decisions it influences, ensuring appropriate risk management strategies are applied.
  • Map 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood. Organizations need to evaluate the AI system’s capabilities, intended usage, goals, and expected benefits and costs against relevant benchmarks. This assessment helps in understanding the system’s performance, limitations, and alignment with organizational objectives.
  • Map 4: Risks and benefits are mapped for all components of the AI system, including third-party software and data. It’s essential to identify and document risks and benefits associated with all components of the AI system, including third-party software and data. This comprehensive mapping ensures that potential vulnerabilities are recognized and managed throughout the AI lifecycle.
  • Map 5: Impacts to individuals, groups, communities, organizations, and society are characterized. Organizations should assess and document the AI system’s potential impacts on various stakeholders, including individuals, groups, communities, organizations, and society at large. This characterization aids in understanding and mitigating negative consequences while enhancing positive outcomes.

Measure Function

The Measure Function focuses on establishing metrics for assessing vulnerabilities and risk tolerance. It provides tools for continuous risk assessment, such as:

  • Performance Metrics: Accuracy, precision, recall.
  • Fairness Indicators: Evaluating biases in AI outputs (see the sketch after this list).
  • Security Assessments: Identifying vulnerabilities to cyber threats.
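
For instance, one common fairness indicator is the demographic parity difference, which compares positive-outcome rates across groups. A minimal sketch, assuming binary predictions and a single demographic attribute:

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups.

    A value near 0 suggests similar treatment across groups; larger
    values flag outputs that warrant a closer look for bias.
    """
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```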

Metrics are established and quantified to set thresholds, and monitoring systems alert when those thresholds are crossed (a minimal example follows the list below). The Measure function’s four categories are:

  • Selecting Measurement Approaches: Organizations should choose methods and metrics to evaluate AI risks identified during the Map function, focusing on the most significant risks. It’s crucial to document any risks or trustworthiness characteristics that are not measured, along with the reasons for their exclusion.
  • Implementing Testing Procedures: Establish procedures to detect, track, and measure known risks, errors, incidents, or negative impacts. This includes defining acceptable performance limits and outlining corrective actions if the system exceeds these limits.
  • Assessing AI Actor Competency: Regularly evaluate and document the competency of AI actors to ensure effective system operation.
  • Monitoring External Inputs: Keep track of external inputs such as training data, models from other contexts, and third-party tools to assess their impact on system performance and reliability.
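
To make the idea of quantified thresholds concrete, here is a minimal sketch in plain Python. The metric names match the performance metrics listed above; the threshold values are illustrative assumptions drawn from a hypothetical risk tolerance, not values prescribed by the framework.

```python
# Illustrative limits; real values come from the organization's
# documented risk tolerance, not from the AI RMF itself.
THRESHOLDS = {"accuracy": 0.90, "precision": 0.85, "recall": 0.85}

def evaluate(y_true, y_pred):
    """Compute basic performance metrics from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        "accuracy": correct / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

def alert_on_breach(metrics, thresholds=THRESHOLDS):
    """Return the metrics that fell below their documented limits."""
    return {name: value for name, value in metrics.items()
            if value < thresholds[name]}

y_true = [1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
print(alert_on_breach(evaluate(y_true, y_pred)))
# {'accuracy': 0.75, 'precision': 0.8, 'recall': 0.8}
```

In production, a check like alert_on_breach would run on a schedule, with breaches routed to the corrective actions defined under the testing procedures above.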

Manage Function

The Manage Function entails developing mitigation strategies to address negative impacts and harmful biases. This includes:

  1. Bias Mitigation: Techniques to reduce or eliminate biases in AI models (a reweighing sketch follows this list).
  2. Ethical Compliance: Ensuring AI practices align with ethical standards.
  3. Incident Response Plans: Preparedness for unintended AI behaviors.
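
One well-studied bias mitigation technique is reweighing (Kamiran and Calders), which assigns each training sample a weight so that group membership and outcome become statistically independent in the weighted data. A minimal sketch, assuming a single demographic attribute and binary labels:

```python
from collections import Counter

def reweigh(groups, labels):
    """Per-sample weights w = P(group) * P(label) / P(group, label).

    Under these weights, group membership and outcome are independent
    in the training data, reducing one common source of learned bias.
    """
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] * label_counts[y]) / (n * joint_counts[(g, y)])
        for g, y in zip(groups, labels)
    ]

groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
print(reweigh(groups, labels))  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

Most training APIs accept per-sample weights, so the output can be passed directly to model fitting.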

The Manage function focuses on prioritizing and responding to AI risks based on assessments from the Map and Measure functions. The four categories of guidance are:

  1. Evaluating System Purpose and Objectives: Determine whether the AI system achieves its intended purpose and objectives, and decide if its development or deployment should proceed.
  2. Balancing Risks and Benefits: Weigh the AI system’s negative risks against its benefits, considering trustworthiness characteristics and potential trade-offs.
  3. Continuous Monitoring: Regularly track and document system performance relative to trustworthiness characteristics, addressing any emerging risks throughout the AI system lifecycle.
  4. Incorporating TEVV Outputs: Utilize outputs from Testing, Evaluation, Verification, and Validation (TEVV) processes when considering risk treatment and management strategies.

Just because something can be done doesn’t mean the ethics and values set by the organization will be followed, especially when so many datasets can influence a model. Don’t presume the desirability of AI when designing for the end user’s needs.

Govern Function

The Govern Function establishes AI governance for sustainable and ethical AI development. It underscores the importance of: 

  1. Data Privacy: Protecting sensitive information. 
  2. Privacy-Enhanced Technologies: Implementing tools like differential privacy, which injects noise into data before feeding it into an AI system, making it difficult to extract the original data from the system (a minimal sketch follows this list).
  3. Real-Time Monitoring: Ongoing oversight of AI systems for compliance and performance, including third parties.
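
To illustrate the noise-injection idea, here is a minimal sketch of a differentially private mean using the Laplace mechanism; the epsilon value and data bounds are illustrative assumptions, and a production system would use a vetted privacy library rather than hand-rolled code.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon=1.0):
    """Differentially private mean via the Laplace mechanism.

    Clamping each value to [lower, upper] bounds the effect of any
    single record on the mean to (upper - lower) / n, the query's
    sensitivity, which sets the noise scale.
    """
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(values)
    # Smaller epsilon means a larger noise scale and stronger privacy.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

ages = [34, 29, 41, 55, 38, 47, 31, 60]
print(dp_mean(ages, lower=18, upper=90, epsilon=0.5))
```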

The six categories for the Govern function include:

  1. Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.
  2. Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.
  3. Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.
  4. Organizational teams are committed to a culture that considers and communicates AI risk.
  5. Processes are in place to ensure that relevant AI actors and other stakeholders are actively engaged in the mapping, measuring, and managing of AI risks.
  6. Policies and procedures are in place to map, measure, and manage risks associated with third-party software and data and other supply chain issues.

The NIST AI RMF’s core functions, Map, Measure, Manage, and Govern, provide a comprehensive framework for organizations to identify and mitigate AI risks effectively, as well as to establish the policies, procedures, and metrics needed to govern them.

What are the Primary Characteristics of Trustworthy AI Systems?

Trustworthy AI systems possess seven key characteristics:

  • Explainable and Interpretable Systems: AI decisions should be transparent, allowing users to understand how outcomes are derived. This ensures clarity and trust in the system’s processes.
  • Accountable and Transparent: Organizations must be open about AI use and take responsibility for the results, ensuring accountability in decision-making and transparency in operations.
  • Fair with Harmful Bias Managed: Building AI ecosystems that consider human, societal, and technical aspects, mitigating harmful biases, and ensuring equitable outcomes.
  • Safe: AI systems must ensure the safety of users and the broader community by avoiding harm, malicious actions, or unintended consequences.
  • Secure and Resilient: AI systems should be robust and resistant to attacks or disruptions, maintaining operational integrity under various conditions.
  • Privacy-Enhanced: AI must prioritize the protection of users’ personal information, ensuring data privacy and compliance with regulations to foster user trust.
  • Valid and Reliable: AI systems should consistently deliver accurate, dependable results and align with their intended purpose, ensuring reliability over time.

Trustworthiness determines the end user’s incentive to keep inputting data and to expect the output’s value to meet or exceed expectations. The system design should consider the range of possible outcomes, set limits on those outcomes, and be transparent about those limits.

What Are Some Use Cases of NIST AI RMF?

The NIST AI RMF is serviceable across all industries and can be applied using an organization’s existing policies and procedures. As risks vary by industry, implementing the framework may look different. Here are examples from industries experiencing major AI disruption:

  • Autonomous Vehicles (AV): Application of a risk management profile for traffic sign recognition in AV systems. 
  • Cybersecurity: Applying the framework to ensure fairness, privacy, and accessibility in AI-driven biometric authentication systems.
  • Healthcare: Using the framework to evaluate the fairness, interpretability, and explainability of AI systems used to support clinicians in treatment planning.
  • Supply Chain Management: Using the framework to determine the metrics necessary to monitor third-party dependencies in the AI supply chain for risks such as compromised software or malicious inputs.
  • Financial Services: Applying the framework to ensure fairness in AI systems that evaluate creditworthiness and to monitor decision explainability in line with regulatory transparency requirements.

As an integrated part of an organization’s Enterprise Risk Management program, the NIST AI RMF assists organizations in deciding how best to manage AI risk in a way that is well-aligned with their goals, considers legal and regulatory requirements and best practices, and reflects risk management priorities.

Using the framework can help address your organization’s risks when evaluating generative AI as an option for all employees, mapping risks like misinformation, bias, ethics and discrimination, loss of intellectual property, privacy violations, security threats, environmental impact, and headcount loss. One example related to intellectual property: generative AI tools sometimes lack clear attribution of outputs when providing code generation and suggestions. On the environmental side, a generative AI search query can use 10 times the energy of a regular search query.

The NIST AI RMF enables organizations in all industries to adopt AI technologies confidently by capturing identified risks and managing them as part of their overall Enterprise Risk Management program.

NIST AI RMF 1.0 and Subcategories

The NIST AI RMF 1.0 breaks down its core functions into specific subcategories that provide detailed guidance for control implementation. This may include policy or procedural documentation to maintain or to examine and assess. Some of the key subcategories within each Function of the NIST AI RMF include:

  • Govern Subcategory: Accountability Structures. Establish clear roles and responsibilities for AI risk management within the organization. This ensures that personnel are accountable for the development, deployment, and monitoring of AI systems, fostering a culture of responsibility and ethical AI practices.
  • Map Subcategory: Context Establishment. Identify and document the intended purpose, scope, and context of the AI system. This involves understanding the operational environment, stakeholders, and potential impacts, which is crucial for assessing and managing risks effectively.
  • Measure Subcategory: Risk Assessment. Develop and implement methods to assess the AI system’s performance, reliability, and potential risks. This includes evaluating metrics related to fairness, transparency, and security to ensure the system operates as intended and mitigates potential harm.
  • Manage Subcategory: Risk Treatment. Implement strategies to address identified risks, such as modifying the AI system, enhancing controls, or developing contingency plans. This proactive approach helps in mitigating adverse outcomes and ensures the AI system aligns with organizational objectives and ethical standards.

Each core function is supported by these subcategories, which align with international standards like those from the International Organization for Standardization (ISO), ensuring global applicability and recognition. The ISO/IEC TR 24368:2022 standard on artificial intelligence states that organizational practices are expected to be carried out in accord with “professional responsibility,” defined by ISO as an approach that “aims to ensure that professionals who design, develop, or deploy AI systems and applications or AI-based products or systems, recognize their unique position to exert influence on people, society, and the future of AI.”

NIST provides crosswalk documents mapping shared foundations between the AI RMF and ISO’s guidance for AI applications. The NIST AI RMF contains four functions, whose categories are similar to control objectives and whose subcategories align with control procedures.

Managing Risks Across the AI Lifecycle

Identifying risks in AI actors, algorithms, and outputs is crucial for maintaining system integrity. This involves:

  • Data Evaluation: Ensuring data quality and representativeness.
  • Algorithm Assessment: Testing for robustness and fairness.
  • Output Monitoring: Continuously checking AI decisions for accuracy and bias (a drift-monitoring sketch follows this list).
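
Output monitoring can start as simply as comparing the distribution of current model scores against a baseline captured at validation time. Below is a minimal sketch using the population stability index (PSI), a common drift measure; the bin count and the 0.2 alert threshold are illustrative conventions, not framework requirements.

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population stability index between two score distributions.

    PSI sums (current% - baseline%) * ln(current% / baseline%) over
    shared bins; values above roughly 0.2 are commonly treated as
    material drift worth investigating.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid dividing by zero in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 5000)  # scores at validation time
current = rng.normal(0.6, 0.1, 5000)   # scores observed in production
if psi(baseline, current) > 0.2:
    print("Drift detected: trigger a review of model outputs")
```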

The impact on decision-making processes is significant, as AI increasingly influences critical business and societal outcomes. Using RMF tools, organizations can:

  • Evaluate Potential Risks: Anticipate and address issues proactively.
  • Implement Safeguards: Protect against unintended consequences.
  • Ensure Compliance: Align with legal and ethical standards.

Effective risk management throughout the AI lifecycle is essential for delivering reliable and ethical AI solutions that positively impact decision-making processes.

What Are Some Real-World Applications of AI RMF?

The following are examples of potential risks to stakeholders, including AI actors and end users:

  • For AI models, some risks or threats through use (providing input and receiving output) include denial-of-service attacks that overload the model with inputs, or odd inputs that trigger hallucinations. For more threats through use, see the OWASP AI Exchange. A simple input guard is sketched after this list.
  • Other threats to map include bias based on source content used in decision-making. An example is an HR platform screening out the best candidates because its algorithm was trained on the wrong type of datasets.
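
Basic input-handling controls can blunt some of these threats through use. The sketch below assumes a hypothetical handle_request entry point in front of a model and shows simple per-client rate limiting and input-size checks; it is an illustration, not a substitute for the controls catalogued in the OWASP AI Exchange.

```python
import time
from collections import defaultdict, deque

MAX_INPUT_CHARS = 4000       # reject oversized inputs
MAX_REQUESTS_PER_MIN = 30    # throttle overload attempts
_request_log = defaultdict(deque)

def call_model(prompt):
    # Stand-in for the real model invocation.
    return f"model output for: {prompt[:40]}"

def handle_request(client_id, prompt):
    """Hypothetical guard in front of a model endpoint."""
    now = time.monotonic()
    window = _request_log[client_id]
    # Drop timestamps older than 60 seconds, then check the rate.
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MIN:
        raise RuntimeError("rate limit exceeded")
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError("input too large")
    window.append(now)
    return call_model(prompt)

print(handle_request("client-1", "What does the NIST AI RMF cover?"))
```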

Examples of real-world applications displaying trustworthiness in their systems are:

  • Medical diagnosis support systems with clear explanations for their reasoning
  • Self-driving cars with robust safety features and transparent decision-making processes
  • Facial recognition technology used for security purposes with high accuracy and bias mitigation
  • Language translation tools that clearly indicate potential ambiguities
  • AI-powered customer service chatbots that provide consistent and accurate information while respecting user privacy

Examples of stakeholders and ecosystem actors adopting the framework include:

  • US Department of State’s “Risk Management Profile for Artificial Intelligence and Human Rights” provides guidance for aligning AI governance with international human rights principles, addressing risks like bias, surveillance, and censorship while promoting rights-respecting practices.
  • Workday leverages the framework to align responsible AI practices across governance, risk evaluation, and third-party assessments, supported by cross-functional collaboration, operational tools, and leadership oversight to ensure trustworthy and ethical AI innovation.

AI RMF use-case profiles are implementations of the AI RMF functions, categories, and subcategories for a specific setting or application based on the requirements, risk tolerance, and resources of the Framework user. AI RMF profiles help organizations tailor their approach to managing AI risks in a way that aligns with their objectives, adheres to legal and regulatory standards, incorporates best practices, and prioritizes effective risk management.
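
To make this concrete, a use-case profile can be represented as a structured selection of the framework’s subcategories annotated with status and evidence. Below is a minimal sketch; the field names and status values are illustrative assumptions, though the subcategory identifiers follow the AI RMF’s numbering.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    identifier: str           # e.g. "MAP 1.1" in AI RMF numbering
    status: str = "planned"   # planned / implemented / not applicable
    evidence: str = ""        # link to policy or procedure documentation

@dataclass
class UseCaseProfile:
    setting: str               # the specific application context
    risk_tolerance: str        # the documented organizational tolerance
    subcategories: list[Subcategory] = field(default_factory=list)

    def open_items(self):
        """Subcategories still awaiting implementation."""
        return [s for s in self.subcategories if s.status == "planned"]

profile = UseCaseProfile(
    setting="AI-driven credit decisioning",
    risk_tolerance="low",
    subcategories=[
        Subcategory("GOVERN 1.1", status="implemented", evidence="policy-042"),
        Subcategory("MAP 1.1"),
    ],
)
print([s.identifier for s in profile.open_items()])  # ['MAP 1.1']
```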

NIST AI RMF Playbook and Future Roadmap

Overview of the NIST AI RMF Playbook

The NIST AI RMF Playbook provides:

  • Templates and Tools for Integration: Practical resources to implement the framework.
  • Support for Managing Risk Tolerance: Guidelines to align AI risk levels with organizational goals.

Roadmap for the Future

The NIST AI RMF Roadmap emphasizes continuous improvement in addressing emerging AI risks by outlining key activities for advancing the AI RMF. It highlights the necessity for ongoing collaboration between NIST and both private and public sector organizations to fill knowledge gaps and enhance practices related to AI risk management. The Roadmap identifies top priorities, such as aligning with international standards, expanding testing and evaluation efforts, developing AI RMF profiles, and providing guidance on balancing trustworthiness characteristics. By focusing on these areas, the Roadmap ensures that the AI RMF evolves in response to new challenges and technological advancements, promoting the trustworthy and responsible development and use of AI systems.

National Institute of Standards and Technology (NIST). AI RMF Knowledge Base – Roadmap. Available at: https://airc.nist.gov/AI_RMF_Knowledge_Base/Roadmap

Conclusion

The NIST AI RMF is instrumental in managing the risks of AI, providing organizations with the tools and guidance needed to develop and deploy AI responsibly.

Key Takeaways on Building Trustworthy, Interpretable, and Safe AI

  • Trustworthy AI: Built on transparency, accountability, and fairness.
  • Interpretable Systems: Essential for user understanding and trust.
  • Safety: Achieved through diligent risk management across the AI lifecycle.

The Need for Evolving Frameworks

As AI technologies advance and their use cases expand, frameworks like the AI RMF must evolve to address new risks and challenges, ensuring that AI continues to benefit society while minimizing potential harm.

Ready to enhance your AI risk and compliance management? Explore CrossComply, the Compliance Management Software by AuditBoard, to streamline your organization’s adherence to frameworks like the NIST AI RMF.

Emily Villanueva

Emily Villanueva, MBA, is a Senior Manager of Product Solutions at AuditBoard. Emily joined AuditBoard from Grant Thornton, where she provided consulting services specializing in SOX compliance, internal audit, and risk management. She also spent 5 years in the insurance industry specializing in SOX/ICFR, internal audits, and operational compliance. Connect with Emily on LinkedIn.
