The NIST Artificial Intelligence Risk Management Framework (AI RMF) is a framework developed by the National Institute of Standards and Technology (NIST) to guide organizations across all sectors in the responsible design, development, and use of artificial intelligence (AI) systems. As AI is implemented in nearly every sector — from healthcare to finance to national defense — it brings new risks and concerns with it.

The rapid adoption of AI technology means that ensuring trust, safety, and ethical use is now a central concern: businesses need to understand the risks that come with AI and how they can implement it safely and securely.

Drawing on leading cybersecurity principles and practices, this blog explores the NIST AI RMF, how it can be instrumental in building trustworthy AI systems, and how businesses can expand their use of AI while prioritizing data and user privacy.

Learn how UpGuard helps businesses manage risks and meet compliance standards >

What is the NIST AI Risk Management Framework (AI RMF)?

The NIST AI Risk Management Framework (AI RMF 1.0) is a comprehensive set of guidelines published by NIST in January 2023 to help organizations assess and mitigate risks associated with AI systems. Around this time, the rise of large language models (LLMs) and generative AI tools like ChatGPT became the catalyst for the push to regulate and manage AI usage.

As AI technology grows more complex and its usage becomes more widespread, the potential for unintended consequences, ranging from flawed decision-making to lapses in transparency and accountability, increases drastically. The AI RMF is not intended as a step-by-step guide to implementing responsible AI usage but as guidance for building effective AI risk management.

The NIST AI RMF provides a structured approach to managing these risks, focusing on building trustworthiness into AI systems throughout their lifecycle, from design to development to deployment. The framework also emphasizes aligning AI governance with broader ethical considerations, promoting an ecosystem where AI technology can be both innovative and safe.

In addition to the AI RMF itself, NIST released companion materials to assist organizations in their AI system development, including the NIST AI RMF Playbook, the AI RMF Explainer Video, the AI RMF Roadmap, and the AI RMF Crosswalk.

NIST AI Risk Management Framework Overview

The NIST AI RMF breaks the use of AI and AI systems down into sections and use cases, focusing primarily on the core functions, the AI lifecycle, the risks of AI, and how to build trustworthiness into AI through a series of steps.

Understanding AI Risks

The NIST AI RMF acknowledges that measuring AI risks can be challenging. However, difficulty in measuring a risk does not mean the system itself is high or low risk — it means appropriate risk metrics must be established to determine who or what is at risk and to what degree.

One of the main challenges of AI is that systems may make decisions based on flawed data or algorithms, leading to harmful bias. Privacy violations are also a risk, as AI technology can scan and process large amounts of personal data, which can infringe on individual rights.

To better analyze AI risks, the NIST AI RMF categorizes them into three main areas of potential impact so organizations can understand which areas they must secure (a brief sketch of how these categories might appear in a risk register follows the list):

  1. Harm to people — Harm to an individual, group, or society that infringes on personal rights, civil liberties, physical or psychological safety, or economic opportunity.
  2. Harm to organizations — Harm to an organization’s business operations, reputation, or finances, including harm from security breaches that impair its ability to operate.
  3. Harm to ecosystems — Harm to the global financial system, supply chains, or other interconnected systems and elements. Also includes harm to the natural environment, the planet, or natural resources.
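
To make these categories concrete, here is a minimal Python sketch of how the impact categories might be recorded in an organization’s risk register. The category names follow the framework; the data structure, field names, and example entry are illustrative assumptions rather than anything the AI RMF prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum


class ImpactCategory(Enum):
    HARM_TO_PEOPLE = "Harm to people"
    HARM_TO_ORGANIZATIONS = "Harm to organizations"
    HARM_TO_ECOSYSTEMS = "Harm to ecosystems"


@dataclass
class AIRiskEntry:
    risk_id: str
    description: str
    categories: list = field(default_factory=list)
    likelihood: str = "unknown"  # e.g., low / medium / high
    severity: str = "unknown"    # e.g., low / medium / high


# Hypothetical example: a resume-screening model that may encode hiring bias.
entry = AIRiskEntry(
    risk_id="AI-RISK-001",
    description="Resume-screening model may disadvantage protected groups",
    categories=[ImpactCategory.HARM_TO_PEOPLE, ImpactCategory.HARM_TO_ORGANIZATIONS],
    likelihood="medium",
    severity="high",
)
print(entry.risk_id, [c.value for c in entry.categories])
```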

Lifecycle of an AI System

AI systems go through several stages in their lifecycle, from design and development to deployment and post-deployment. Each stage presents specific challenges and risks, which risk management programs must continue to assess and monitor throughout the system’s lifecycle.

The AI RMF guides organizations through these stages, recommending best practices such as extensive testing during development, continuous monitoring for compliance, and regular updates to address new security vulnerabilities. According to the framework, AI systems can only be safe and successful with collective responsibility and management through diverse teams at each stage.

The AI RMF breaks the AI system lifecycle down into seven stages (a short evidence checklist sketch follows the list):

  1. Plan and design (Application context): Establish and document the system’s objectives, keeping in mind the legal and regulatory requirements and other ethical considerations.
  2. Collect and process data (Data and input): Gather, validate, and clean data and document characteristics of the data set with respect to key objectives and legal and ethical considerations.
  3. Build and use model (AI model): Create, develop, and select algorithms and train AI models.
  4. Verify and validate (AI model): Verify, validate, and calibrate AI models.
  5. Deploy and use (Task and output): Check compatibility with existing systems and verify regulatory compliance. Manage organizational and operational shifts and evaluate user experience.
  6. Operate and monitor (Application context): Continue operation of the AI system and assess impacts in relation to legal, regulatory, and ethical considerations.
  7. Use or impacted by (People and planet): Seek mitigation of impacts and advocate for rights.
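
As a rough illustration of how the lifecycle stages could drive a risk management program, the sketch below maps each stage to the kind of evidence a team might collect before moving on. The stage names come from the framework; the checklist items and helper function are assumptions for illustration only.

```python
# Stage names follow the AI RMF lifecycle; the evidence items are illustrative assumptions.
LIFECYCLE_CHECKLIST = {
    "Plan and design": ["documented objectives", "legal and regulatory review", "ethical impact notes"],
    "Collect and process data": ["data provenance record", "data quality checks", "consent and licensing review"],
    "Build and use model": ["algorithm selection rationale", "training configuration"],
    "Verify and validate": ["validation results", "calibration report"],
    "Deploy and use": ["compatibility tests", "compliance sign-off", "user experience review"],
    "Operate and monitor": ["monitoring dashboard", "incident log", "periodic impact assessment"],
    "Use or impacted by": ["feedback channels", "impact mitigation actions"],
}


def missing_evidence(collected: dict) -> dict:
    """Return the checklist items not yet collected for each lifecycle stage."""
    return {
        stage: [item for item in items if item not in collected.get(stage, [])]
        for stage, items in LIFECYCLE_CHECKLIST.items()
    }


# Example: only some planning evidence has been gathered so far.
gaps = missing_evidence({"Plan and design": ["documented objectives"]})
print(gaps["Plan and design"])  # ['legal and regulatory review', 'ethical impact notes']
```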

How to Build a Trustworthy AI System

The NIST AI RMF lists characteristics of trustworthy AI systems and how building systems around these characteristics can limit the negative impact of AI risks. The AI RMF offers guidance for a multifaceted approach to system development, where key AI actors across different departments collaborate to ensure the system aligns with broader human values.

This approach builds accountability and transparency into the use of AI systems, such as explainability features that allow users to understand how AI decisions are made. It also involves ensuring fairness by regularly auditing AI systems for biases and taking corrective actions where necessary.

NIST lists the necessary AI trustworthiness characteristics as follows:

  1. Safe
  2. Secure and resilient
  3. Explainable and interpretable
  4. Privacy-enhanced
  5. Fair with harmful bias managed
  6. Accountable and transparent
  7. Valid and reliable

The framework acknowledges that the interpretation of trustworthiness characteristics may differ from developer to developer, so an element of human judgment must be incorporated. Additionally, organizations may find it difficult to balance all of the trustworthiness characteristics and may have to prioritize one over another at times. Ultimately, it is up to the joint collaboration of all key AI actors to weigh the risks, impacts, costs, and benefits of using the system when building toward a trustworthy AI system.
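
One way to make that prioritization discussion concrete is a simple self-assessment that scores each characteristic and flags the weakest ones for attention. The characteristic names come from the framework; the 1–5 scale, threshold, and helper function below are illustrative assumptions, not an official scoring method.

```python
# Characteristic names follow the AI RMF; the scoring approach is an illustrative assumption.
CHARACTERISTICS = [
    "safe",
    "secure and resilient",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair with harmful bias managed",
    "accountable and transparent",
    "valid and reliable",
]


def weakest_characteristics(scores: dict, threshold: int = 3) -> list:
    """Flag characteristics scored below the threshold on a hypothetical 1-5 scale."""
    return [c for c in CHARACTERISTICS if scores.get(c, 0) < threshold]


assessment = {
    "safe": 4,
    "secure and resilient": 3,
    "explainable and interpretable": 2,
    "privacy-enhanced": 3,
    "fair with harmful bias managed": 3,
    "accountable and transparent": 2,
    "valid and reliable": 4,
}
print(weakest_characteristics(assessment))
# ['explainable and interpretable', 'accountable and transparent']
```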

Core functions of AI systems

The core functions are what enable collaboration between parties during AI system risk management. They describe how organizations can build dialogue, understanding, and shared responsibility around managing AI objectives and risks.

The AI RMF defines four main core functions, Govern, Map, Measure, and Manage, each with organized subcategories describing how the function can be carried out to help organizations develop safer and more reliable AI systems.

1. Govern

Govern refers to the overarching management of AI activities within the organization, emphasizing clear governance structures, policies, and practices. Govern sets the tone for the organization, ensuring that internal leaders and key stakeholders have policies that establish the company’s mission, values, culture, procedures, priorities, and risk tolerance.

This function ensures that AI technologies are developed in accordance with ethical standards, legal requirements, and organizational values. Key aspects, with a short illustrative sketch after the list, include:

  • Risk management culture: Implementing a culture of risk management and impact assessment within the organization.
  • Policy development: Creating comprehensive policies that address data privacy, algorithmic transparency, and ethical usage of AI.
  • Oversight and accountability: Setting up oversight bodies or committees that have the authority to oversee AI projects, ensuring that they adhere to established guidelines and are accountable for their outcomes.
  • Stakeholder engagement: Involving all key stakeholders in the decision-making process to ensure that the AI systems are inclusive and consider multiple perspectives.
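
As a hedged example of what Govern outputs might look like in practice, the sketch below records a few governance decisions in a machine-readable form and uses the stated risk tolerance as a simple project gate. The structure, values, and the `approve_project` helper are assumptions, not anything the framework prescribes.

```python
# Illustrative governance record; the fields and values are assumptions.
GOVERNANCE_POLICY = {
    "risk_tolerance": "low",              # appetite for residual AI risk
    "oversight_body": "AI Review Board",  # committee accountable for AI projects
    "required_policies": ["data privacy", "algorithmic transparency", "ethical AI usage"],
    "stakeholder_review_cadence_days": 90,
}


def approve_project(project_risk: str, policy: dict = GOVERNANCE_POLICY) -> bool:
    """Toy gate: projects above the stated risk tolerance are escalated to the oversight body."""
    order = {"low": 0, "medium": 1, "high": 2}
    return order[project_risk] <= order[policy["risk_tolerance"]]


print(approve_project("medium"))  # False -> escalate to the AI Review Board
```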

2. Map

Map focuses on understanding the ecosystem in which AI operates and provides the necessary context for AI risks. Without a proper understanding of the risks in relation to the environment in which the system operates, it is nearly impossible to conduct adequate risk management.

This core function involves identifying the flow of data, the interactions between different components of AI systems, the interdependencies between those components, and how these components connect to the organization’s broader operational processes. Ultimately, key stakeholders use the information gained from this function to make critical decisions, primarily whether or not to proceed with developing the AI system.

The results from Map directly influence the subsequent core functions of Measure and Manage. Key aspects of this function, illustrated in a brief mapping sketch after the list, include:

  • Data governance: Managing data sources, data quality, and data integrity to ensure that the AI system’s outputs are reliable and accurate.
  • System configuration: Documenting the design and configuration of AI systems, including the algorithms that were used and their specific purposes.
  • Risk assessment: Regularly assessing (and anticipating) the risks associated with AI operations, including potential biases, failures, or security vulnerabilities.
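
The sketch below shows one way the Map outputs, such as data sources, system components, and their interdependencies, might be documented so stakeholders can reason about them. The fields and the example system are hypothetical assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemMap:
    name: str
    purpose: str
    data_sources: list = field(default_factory=list)
    components: list = field(default_factory=list)
    dependencies: dict = field(default_factory=dict)  # component -> upstream components it relies on
    known_risks: list = field(default_factory=list)


# Hypothetical example of a mapped system.
system_map = AISystemMap(
    name="loan-approval-assistant",
    purpose="Rank loan applications for human review",
    data_sources=["credit bureau feed", "internal application history"],
    components=["feature pipeline", "scoring model", "review dashboard"],
    dependencies={"scoring model": ["feature pipeline"], "review dashboard": ["scoring model"]},
    known_risks=["proxy variables for protected attributes", "stale credit data"],
)
print(system_map.dependencies["review dashboard"])  # ['scoring model']
```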

3. Measure

The Measure function evaluates the effectiveness of risk management activities using key metrics for tracking trustworthiness, impact, and functionality. This involves setting benchmarks and metrics to assess how well AI systems align with the organization’s objectives and compliance requirements. Components include the following, with one example metric sketched after the list:

  • Performance assessment: Regularly reviewing and assessing the effectiveness of implemented policies and controls to ensure they are functioning as intended.
  • Compliance audits: Conducting regular audits to verify that AI practices comply with regulatory standards and ethical guidelines.
  • Rigorous testing: Rigorously testing and benchmarking software against the risks outlined in Map, and formally reporting test results to relevant stakeholders and decision-makers.
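
As one hedged example of a metric that could feed the Measure function, the sketch below computes the gap in positive-outcome rates between two groups, a simple bias indicator sometimes called the demographic parity difference. The metric choice, tolerance, and toy data are illustrative assumptions, not AI RMF requirements.

```python
def selection_rate(outcomes: list) -> float:
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0


def parity_difference(group_a: list, group_b: list) -> float:
    """Absolute gap in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))


# Toy data: model decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 0, 1, 0, 0, 0]

gap = parity_difference(group_a, group_b)
print(f"Selection-rate gap: {gap:.2f}")  # 0.38
if gap > 0.10:  # illustrative tolerance an organization might set
    print("Gap exceeds tolerance -> report to stakeholders and decision-makers")
```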

4. Manage

Manage takes the results derived from Map and Measure to ensure resources are properly allocated and applied after system deployment. This function is critical for maintaining the integrity, security, and performance of AI systems throughout their lifecycle. Key management practices, one of which is sketched after the list, involve:

  • Risk mitigation: Developing and implementing strategies to reduce identified risks, such as bias mitigation techniques or additional cybersecurity measures.
  • Performance monitoring: Continuously monitoring the AI system’s performance against its intended goals and making adjustments to address any deviations.
  • Incident response: Establishing processes to quickly and effectively respond to security breaches, operational failures, or any other incidents affecting AI systems.
  • Improvement processes: Implementing feedback mechanisms to continuously improve AI governance, management, and operational processes based on performance and compliance assessments.
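
For the monitoring and incident-response practices above, the sketch below shows one simple post-deployment check: comparing the live positive-prediction rate against a baseline and raising an alert when it drifts beyond a set tolerance. The baseline, tolerance, and alert mechanism are assumptions an organization would define for itself.

```python
def positive_rate(predictions: list) -> float:
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions) if predictions else 0.0


def check_drift(baseline_rate: float, live_predictions: list, tolerance: float = 0.05) -> bool:
    """Return True and print an alert if the live rate drifts more than `tolerance` from baseline."""
    live_rate = positive_rate(live_predictions)
    drifted = abs(live_rate - baseline_rate) > tolerance
    if drifted:
        print(f"ALERT: live rate {live_rate:.2f} vs baseline {baseline_rate:.2f} -> trigger incident response")
    return drifted


# Hypothetical example: baseline approval rate of 0.30; this week's decisions run higher.
check_drift(0.30, [1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
```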

How can businesses implement the NIST AI RMF?

Implementing the NIST AI RMF requires a strategic approach tailored to the specific needs and challenges of each organization. It begins with a thorough understanding of the framework and its implications for AI governance and management. The NIST AI RMF is intended to be adapted by organizations of all sizes and can be used as a template for their own AI risk management program.

Businesses can take the following initiatives to help get their AI programs off the ground (a simple progress-tracking sketch follows the list):

  • Understand the AI RMF
  • Conduct a current state assessment
  • Define the AI governance structure
  • Develop an AI risk management strategy
  • Implement control measures
  • Monitor and evaluate AI systems
  • Identify security gaps in AI risk management
  • Conduct new training for AI teams and relevant stakeholders
  • Build a feedback loop for continuous development
  • Document and report on all AI activities
  • Assess legal compliance with relevant AI regulations
  • Assess ethical compliance with data privacy and human rights values and principles
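
A lightweight way to keep these initiatives visible is a simple status report like the sketch below. The step names come from the list above (abbreviated); the statuses and reporting format are illustrative assumptions.

```python
# Step names are taken from the implementation list above (abbreviated); statuses are hypothetical.
STEPS = [
    "Understand the AI RMF",
    "Conduct a current state assessment",
    "Define the AI governance structure",
    "Develop an AI risk management strategy",
    "Implement control measures",
    "Monitor and evaluate AI systems",
]

status = {
    "Understand the AI RMF": "done",
    "Conduct a current state assessment": "done",
    "Define the AI governance structure": "in progress",
}

for step in STEPS:
    print(f"[{status.get(step, 'not started'):<11}] {step}")
```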

What the NIST AI RMF means for AI regulation

The NIST AI RMF serves as a major step forward for AI management and can even act as a potential model for future AI regulations. For businesses, this means that early adoption of the AI RMF not only prepares them for future regulatory compliance but also positions them as leaders in AI best practices.

AI regulation has yet to catch up to current AI technology because of the rapid development and sensitive nature of AI systems. Regulations must therefore be future-proof, so that new developments in AI technology do not outgrow the legislation to the point where loopholes can be found.

The NIST AI RMF aims to address these issues by approaching AI from a risk-based perspective to ensure that future development of AI systems is managed effectively. By using this framework, organizations can begin setting the standard for AI system management, ensuring that systems are developed responsibly, ethically, and safely to protect human rights and privacy.

Future AI regulations should take a similar approach to regulating the use of AI. Essentially, it comes down to how to minimize AI risks throughout the development cycle, how businesses should continually update their AI systems, how to maintain accountability and transparency, and how to ensure that data processed through AI systems is handled in an ethical and safe manner.
