NIST AI RMF Tackles Unprecedented AI Risk Velocity

As artificial intelligence (AI) continues to reshape the business landscape, organizations are grappling with the velocity of adoption and the emerging business risks it creates. Enter the National Institute of Standards and Technology (NIST) and its AI Risk Management Framework (AI RMF). While this framework aims to bring order to the chaos, it also underscores a fundamental challenge: efforts to regulate and manage AI are constantly racing against its unprecedented risk velocity.

In this blog, I’ll explore how the NIST AI RMF addresses the unprecedented velocity of AI-related risks. We’ll examine the rapid timeline of AI advancements, unpack what risk velocity means in the AI context, and analyze how the framework helps organizations navigate this complex landscape.

The AI Revolution: A Timeline of Breakneck Progress

To understand the challenge facing CISOs, risk managers, and policymakers, you only need to look at the breathtaking pace of AI advancements surrounding the release of the first version of the NIST AI RMF:

  • In June 2020, OpenAI released GPT-3, setting a new benchmark for natural language generation and conversational AI.
  • January 2021 saw the introduction of DALL·E, capable of generating images from text descriptions.
  • By July 2021, Google’s DeepMind had released AlphaFold 2, revolutionizing protein structure prediction and accelerating biomedical research.
  • April 2022 brought DALL·E 2, with even more advanced image generation capabilities.
  • Midjourney launched its beta in July 2022, further pushing the boundaries of AI-generated art.
  • March 2023 saw the release of GPT-4, bringing multi-modal capabilities and more nuanced responses.

This timeline illustrates the rapid advancement of AI technology and the increasing diversity of its business applications, from language processing to visual arts to scientific research.

Understanding Risk Velocity in the AI Era

This rapid pace of development introduces a crucial concept for risk managers: risk velocity, the speed at which a risk can impact an organization. In the context of AI, this velocity is unprecedented. Consider how quickly Samsung felt the impact of employees pasting confidential data into ChatGPT, or the sanctions levied against two New York lawyers who filed hallucinated case law. A new AI model, particularly an AI copilot, can be deployed globally within days or even hours of its development, and its impacts, both positive and negative, can be felt almost immediately. This high-velocity risk environment poses unique challenges:

  1. Traditional risk assessment methods may be too slow to capture emerging AI risks.
  2. The window for implementing mitigating controls is drastically shortened.
  3. The potential for cascading effects is amplified, as one AI system can quickly influence or be integrated into others.
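
To make velocity concrete, here is a minimal sketch of how a risk register might weight it during prioritization. The `AIRisk` fields, the 1-to-5 scales, and the 30-day normalization constant are illustrative assumptions of mine; the NIST AI RMF does not prescribe a scoring formula.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    impact: int               # 1 (negligible) to 5 (severe) -- assumed scale
    likelihood: int           # 1 (rare) to 5 (almost certain) -- assumed scale
    time_to_impact_days: int  # estimated days from emergence to business impact

    @property
    def velocity_weighted_score(self) -> float:
        # Classic impact x likelihood, scaled up as time-to-impact shrinks.
        # The 30-day normalization constant is an arbitrary illustration.
        velocity_factor = max(30 / max(self.time_to_impact_days, 1), 1.0)
        return self.impact * self.likelihood * velocity_factor

risks = [
    AIRisk("Employee pastes confidential code into a public chatbot", 4, 4, 2),
    AIRisk("Hallucinated citations reach a client deliverable", 3, 3, 7),
    AIRisk("Model drift slowly degrades fraud detection", 4, 2, 90),
]

# High-velocity risks float to the top even when raw impact is similar.
for r in sorted(risks, key=lambda r: r.velocity_weighted_score, reverse=True):
    print(f"{r.velocity_weighted_score:6.1f}  {r.name}")
```

Sorting by the velocity-weighted score pushes fast-moving risks, like employees pasting confidential code into a public chatbot, above slower-burning ones, even when their raw impact-times-likelihood scores are comparable.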

The NIST AI RMF: A Framework in Perpetual Motion

The NIST AI RMF, released in its first complete version (AI RMF 1.0) on January 26, 2023, represented a significant step forward in addressing the challenges posed by AI technologies.

However, its very existence underscores the breakneck pace at which AI is evolving and the urgent need for robust risk management strategies. So much so that on October 30, 2023, US President Biden issued an executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” directly referencing the NIST framework and mandating the development of a companion resource to the AI Risk Management Framework for generative AI within 270 days. This tight timeline highlights the government’s recognition of the rapid pace of AI development and the need for equally swift regulatory responses.

Exactly 270 days later, on July 26, 2024, NIST introduced the Generative AI Profile (NIST AI 600-1) in response to the explosive growth of technologies like GPT-4 and DALL·E 2. This rapid follow-up, and the Executive Order’s role in its development, illustrates the constant game of catch-up that organizations and regulatory bodies are playing with AI advancements.

What Is the NIST AI RMF?

The AI Risk Management Framework (AI RMF), developed by the National Institute of Standards and Technology (NIST), is a voluntary and flexible resource to help organizations manage the risks of AI systems. Created under the National Artificial Intelligence Initiative Act of 2020, the framework promotes trustworthy and responsible AI development. NIST, a leading U.S. agency in technology and standards, crafted the AI RMF to be rights-preserving, non-sector-specific, and adaptable, equipping organizations of all sizes and sectors to navigate AI risks effectively, increase AI system trustworthiness, and foster responsible AI practices.

Essentially, it provides a structured but flexible approach to identifying, assessing, and mitigating risks throughout the entire lifecycle of an AI system, improving its trustworthiness against seven essential characteristics:

  1. Valid and Reliable: AI systems should consistently produce accurate and dependable results across various conditions and inputs.
  2. Safe: AI systems must be designed to minimize risks and potential harm to users and society.
  3. Secure and Resilient: AI systems should be protected against unauthorized access and cyberattacks, and able to maintain functionality in adverse conditions.
  4. Accountable and Transparent: AI systems and their developers should be open about capabilities, limitations, and decision-making processes, allowing for clear responsibility attribution.
  5. Explainable and Interpretable: Humans should understand and trace the reasoning behind AI decisions and outputs.
  6. Privacy-Enhanced: AI systems must protect personal data and respect user privacy throughout data collection, processing, and storage.
  7. Fair with Harmful Bias Managed: AI systems should treat all individuals equitably, with active measures to identify and mitigate unfair biases in their inputs, operations, and outputs.

These characteristics provide a solid foundation for risk assessment of AI systems using the three categories of harm laid out in the framework: harm to people, harm to organizations, and harm to ecosystems. However, they also reveal the complexity of the task at hand. As AI capabilities expand, our understanding of these characteristics must evolve. What was considered “explainable” yesterday might be opaque in the face of tomorrow’s complex AI systems.
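
As a rough illustration of how these characteristics and harm categories might be captured in an assessment record, here is a minimal Python sketch. The enum values mirror the framework’s own terms, but the 1-to-5 scoring and the gap threshold are my assumptions, not anything the AI RMF prescribes.

```python
from enum import Enum

class Characteristic(Enum):
    """The seven trustworthiness characteristics named in the NIST AI RMF."""
    VALID_AND_RELIABLE = "Valid and Reliable"
    SAFE = "Safe"
    SECURE_AND_RESILIENT = "Secure and Resilient"
    ACCOUNTABLE_AND_TRANSPARENT = "Accountable and Transparent"
    EXPLAINABLE_AND_INTERPRETABLE = "Explainable and Interpretable"
    PRIVACY_ENHANCED = "Privacy-Enhanced"
    FAIR_WITH_HARMFUL_BIAS_MANAGED = "Fair with Harmful Bias Managed"

class HarmCategory(Enum):
    """The three categories of harm laid out in the framework."""
    PEOPLE = "Harm to People"
    ORGANIZATIONS = "Harm to Organizations"
    ECOSYSTEMS = "Harm to Ecosystems"

def assess(system: str, ratings: dict[Characteristic, int]) -> dict:
    """Flag characteristics rated below 3 on an assumed 1-to-5 scale."""
    gaps = [c.value for c, score in ratings.items() if score < 3]
    return {"system": system, "gaps": gaps}

# Hypothetical ratings for an internal AI copilot deployment.
print(assess("internal-copilot", {
    Characteristic.VALID_AND_RELIABLE: 4,
    Characteristic.EXPLAINABLE_AND_INTERPRETABLE: 2,
    Characteristic.PRIVACY_ENHANCED: 3,
}))
# {'system': 'internal-copilot', 'gaps': ['Explainable and Interpretable']}
```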

The Core of the NIST AI RMF

Similar to the NIST CSF, the NIST AI RMF has four interconnected functions at its core: Govern, Map, Measure, and Manage. These functions form a continuous risk management cycle, emphasizing the need for ongoing vigilance and adaptation. 

AI RMF Core Functions

NIST breaks down AI risk management into four core functions:

  1. Govern: Establish the rules of engagement. Create policies, procedures, and governance structures for AI development and deployment. It’s like being the sheriff in an AI wild west.
  2. Map: Assess your AI system’s purpose, stakeholders, data usage, and potential risks. It’s like creating a battle plan before you charge into the AI fray.
  3. Measure: Regularly test your AI systems to ensure they behave as expected. This is like giving your AI a performance review but with more data and fewer awkward conversations.
  4. Manage: Develop and implement strategies to mitigate risks. It’s like playing whack-a-mole with AI risks but with a strategic plan.
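
These four functions form a cycle, not a checklist. The hypothetical sketch below shows that shape: each pass through Govern, Map, Measure, and Manage feeds the next, mirroring the framework’s emphasis on continuous adaptation. The function bodies are placeholders of my own; the AI RMF describes outcomes, not implementations.

```python
def govern(ctx):
    """Set or refresh policies and accountability for AI use."""
    ctx["policy_version"] = ctx.get("policy_version", 0) + 1
    return ctx

def map_risks(ctx):
    """Catalogue the system's purpose, stakeholders, data, and risks."""
    ctx["risk_register"] = ["data leakage", "hallucinated output"]
    return ctx

def measure(ctx):
    """Test and monitor the system; here, every mapped risk becomes a finding."""
    ctx["findings"] = list(ctx["risk_register"])
    return ctx

def manage(ctx):
    """Treat the measured risks, feeding lessons back into governance."""
    ctx["treated"] = ctx.pop("findings")
    return ctx

# The cycle repeats for the life of the AI system, not just at deployment.
ctx = {}
for cycle in (1, 2):
    for step in (govern, map_risks, measure, manage):
        ctx = step(ctx)
    print(f"Cycle {cycle}: policy v{ctx['policy_version']}, treated {ctx['treated']}")
```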

Embracing the Race

As AI continues to evolve rapidly, the NIST AI RMF provides CISOs and tech risk managers with a flexible framework to navigate the complex landscape of AI risks and keep pace. The key to success lies in viewing the AI RMF not as a one-time implementation but as an ongoing process of adaptation and refinement. As AI capabilities expand and new challenges emerge, our approach to risk management must evolve in tandem.

As we’ve seen from the timeline of AI advancements, businesses are adopting new AI capabilities faster than anyone expected. Our risk management strategies must be just as swift and just as innovative as the technology we’re seeking to govern.

Claude Mandy

Claude Mandy is the Chief Evangelist for Data Security at Symmetry Systems, where he focuses on innovation and industry engagement while leading efforts to evolve how modern data security is viewed and used in the industry. Prior to Symmetry, he spent 3 years at Gartner as a senior director. He brings firsthand experience in building information security, risk management, and privacy advisory programs with global scope.