The NIST Artificial Intelligence Risk Management Framework (AI RMF) is voluntary guidance developed by the National Institute of Standards and Technology (NIST) to help organizations across all sectors manage the risks of artificial intelligence (AI) systems. As AI is adopted in nearly every sector, from healthcare to finance to national defense, it also introduces new risks and concerns.

As adoption accelerates, ensuring trust, safety, and ethical use becomes a top priority: businesses need to understand the risks they are taking on and how to implement AI safely and securely.

This blog explores the NIST AI RMF, how it complements leading cybersecurity principles and practices, how it can be instrumental in building trustworthy AI systems, and how businesses can expand their use of AI while prioritizing data and user privacy.

Learn how UpGuard helps businesses manage risks and meet compliance standards >

What is the NIST AI Risk Management Framework (AI RMF)?

The NIST AI Risk Management Framework (AI RMF 1.0) is a comprehensive set of guidelines published in January 2023 by NIST to help organizations assess and mitigate risks associated with AI systems. Its release coincided with the rapid rise of large language models (LLMs) and generative AI tools like ChatGPT, which became a catalyst for efforts to manage and regulate AI usage.

As AI technology grows more complex and its usage becomes more widespread, the potential for unintended consequences, ranging from flawed decision-making to issues of transparency and accountability, drastically increases. The AI RMF is not intended as a step-by-step guide to implementing responsible AI usage but as guidance for building effective AI risk management.

The NIST AI RMF provides a structured approach to managing these risks, focusing on building trustworthiness into AI systems throughout their lifecycle, from design through development and deployment. The framework also emphasizes aligning AI governance with broader ethical considerations, promoting an ecosystem where AI technology can be both innovative and safe.

Alongside the AI RMF, NIST released companion materials to assist organizations in their AI system development, including the NIST AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, and an AI RMF Crosswalk.

Why the NIST AI RMF matters for organizational security

AI's unique risks, from flawed decision-making to gaps in transparency and accountability, grow as the technology becomes more complex and more widespread. The AI RMF provides the structure needed to address these risks, making it instrumental to an organization’s security strategy.

Proactive Risk Mitigation

AI introduces new types of security risks that traditional cybersecurity principles often overlook. For instance, flawed data or algorithms can lead to harmful bias in decision-making. Additionally, AI can scan and process large amounts of personal data, which presents significant risks of privacy violations and infringements on individual rights. The AI RMF helps organizations proactively identify and secure against these specific vulnerabilities before they result in harm to people, the organization, or the broader ecosystem.

Establishing Trustworthiness

The framework lists seven characteristics of a trustworthy AI system, including being safe, secure & resilient, as well as being explainable and fair. By centering development around these characteristics, the AI RMF directly links risk management to building essential trust with customers and stakeholders. This approach builds accountability and transparency into the use of AI systems, for example, through explainability features that allow users to understand how AI decisions are made.

Regulatory Preparedness

While AI regulation is still catching up to the technology’s rapid development, the NIST AI RMF serves as a potential model for future mandates. Early adoption of the AI RMF not only positions a business as a leader in best AI practices but also prepares it for future regulatory compliance. By approaching AI from a risk-based perspective, the framework ensures that all development is managed effectively, helping organizations set the standard for ethical and safe AI system management.

Managing Business Continuity

Unmanaged AI risks can result in substantial harm, including disrupted business operations, security breaches, monetary damages, and reputational loss. By providing guidance on continuous monitoring for compliance and regular updates to address new security vulnerabilities, the AI RMF is a critical tool for maintaining the integrity, security, and performance of AI systems throughout their lifecycle.

NIST AI Risk Management Framework overview

To implement the framework effectively, organizations must first understand the core language and concepts that underpin the AI RMF. It provides a structured approach similar to a Vendor Risk Management Implementation Framework.

  • Trustworthy AI: The ultimate goal of the RMF is to ensure that AI systems consistently align with user needs and societal values. Trustworthiness is built upon seven pillars: being safe; secure and resilient; explainable and interpretable; privacy-enhanced; fair, with harmful bias managed; accountable and transparent; and valid and reliable.
  • AI risk: The framework defines this as the potential for an AI system to cause negative impacts. These risks are categorized into three main areas: harm to people, harm to organizations, and harm to ecosystems.
  • AI lifecycle: The full span of an AI system's existence, from initial concept and design through development, deployment, operation, and ultimate retirement. The RMF breaks this down into seven distinct stages.
  • Key AI actors: All individuals, teams, and organizations involved in the AI lifecycle. The RMF emphasizes that managing AI risk is a collective responsibility involving diverse teams at each stage.

The NIST AI RMF organizes its guidance around the framework's core functions, the AI system lifecycle, the risks of AI, and how to build trustworthiness into AI systems through a series of steps.

Understanding AI risks

The NIST AI RMF acknowledges that measuring AI risks can be challenging. Difficulty measuring a risk does not mean a system is inherently high or low risk; it means appropriate risk metrics must be established to determine who or what is at risk.

One of the main challenges of AI is that systems may make decisions based on flawed data or algorithms, leading to harmful bias. Privacy violations are also a risk, as AI technology can scan and process large amounts of personal data, which can infringe on individual rights.

To better analyze AI risks, the NIST AI RMF categorizes the risks into three main areas to assess the potential impact of AI so organizations can understand which areas they must secure:

  1. Harm to people: Any harm to an individual, group, or society that infringes on personal rights, civil liberties, physical or psychological safety, or economic opportunity.
  2. Harm to organizations: Harm to business operations, such as security breaches, that may impact the organization’s ability to operate, damage its reputation, or cause monetary losses.
  3. Harm to ecosystems: Harm to the global financial system, supply chains, or other interconnected systems and elements. Also includes harm to the natural environment, the planet, or natural resources.

Lifecycle of an AI system

AI systems go through several stages in their lifecycle, from design and development to deployment and post-deployment. Each stage presents specific challenges and risks that risk management programs must continue to assess and monitor throughout the system's lifecycle.

The AI RMF guides organizations through these stages, recommending best practices such as extensive testing during development, continuous monitoring for compliance, and regular updates to address new security vulnerabilities. According to the framework, AI systems can only be safe and successful with collective responsibility and management through diverse teams at each stage.

The AI RMF breaks the AI system lifecycle into seven stages:

  1. Plan and design (Application context): Establish and document the system’s objectives, keeping in mind the legal and regulatory requirements and other ethical considerations.
  2. Collect and process data (Data and input): Gather, validate, and clean data and document characteristics of the data set with respect to key objectives and legal and ethical considerations.
  3. Build and use model (AI model): Create, develop, and select algorithms and train AI models.
  4. Verify and validate (AI model): Verify, validate, and calibrate AI models.
  5. Deploy and use (Task and output): Check compatibility with existing systems and verify regulatory compliance. Manage organizational and operational shifts and evaluate user experience.
  6. Operate and monitor (Application context): Continue operation of the AI system and assess impacts in relation to legal, regulatory, and ethical considerations.
  7. Use or impacted by (People and planet): Seek mitigation of impacts and advocate for rights.

How to build a trustworthy AI system

The NIST AI RMF lists characteristics of trustworthy AI systems and how building systems around these characteristics can limit the negative impact of AI risks. The AI RMF offers guidance for a multifaceted approach to system development, where key AI actors across different departments collaborate to ensure the system aligns with broader human values.

Accountability and transparency are built into the use of AI systems through measures such as explainability features that allow users to understand how AI decisions are made. Fairness is ensured by regularly auditing AI systems for bias and taking corrective action where necessary.

NIST lists the necessary AI trustworthiness characteristics as follows:

  1. Safe
  2. Secure and resilient
  3. Explainable and interpretable
  4. Privacy-enhanced
  5. Fair, with harmful bias managed
  6. Accountable and transparent
  7. Valid and reliable

The framework acknowledges that the interpretation of trustworthiness characteristics may differ from developer to developer, so an element of human judgment must be incorporated. Organizations may also find it difficult to balance all of the trustworthiness characteristics at once and may have to prioritize some over others. Ultimately, building a trustworthy AI system depends on collaboration among all key AI actors, who must weigh the risks, impacts, costs, and benefits of the system's use.

Core functions of the NIST AI RMF

The AI RMF's core functions are what enable collaboration between parties during AI risk management. They provide the basis for building dialogue, understanding, and shared responsibility around AI objectives and risk management.

The AI RMF defines four main core functions, each with organized subcategories, and describes how they can be carried out to help organizations develop safer and more reliable AI systems.

1. Govern

Govern refers to establishing the foundational policies, structures, and practices for responsible AI within an organization. It sets the tone for risk management by ensuring that internal leaders and stakeholders have policies that establish the company’s mission, values, culture, and risk tolerance.

  • Focus: Establishing the foundational structures, policies, and priorities for responsible AI.
  • Key action: Aligning AI strategy with organizational risk tolerance, legal requirements, and ethical values (the "Why" and "Who").
  • Implementation: Implementing a risk management culture, creating comprehensive policies that address data privacy and algorithmic transparency, and setting up oversight bodies or committees.
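
To make the Govern function more concrete, here is a minimal sketch of how an organization might encode its AI policy and risk tolerance in machine-readable form so that downstream teams and tooling can check proposed use cases against it. The class, field names, risk levels, and review rule are hypothetical illustrations, not something the AI RMF prescribes.

```python
# Hypothetical, minimal sketch of a machine-readable AI governance policy.
# Field names, risk levels, and thresholds are illustrative only; the AI RMF
# does not define a specific schema.
from dataclasses import dataclass, field

@dataclass
class AIGovernancePolicy:
    risk_appetite: dict = field(default_factory=lambda: {
        "harm_to_people": "low",          # lowest tolerance
        "harm_to_organization": "medium",
        "harm_to_ecosystem": "medium",
    })
    prohibited_uses: tuple = ("covert surveillance", "social scoring")
    review_threshold: str = "medium"      # review required at or above this risk level
    oversight_body: str = "AI Risk Committee"

    def requires_review(self, use_case: str, assessed_risk: str) -> bool:
        """Return True if a proposed AI use case must go to the oversight body."""
        levels = ["low", "medium", "high"]
        if use_case in self.prohibited_uses:
            raise ValueError(f"'{use_case}' is prohibited by policy")
        return levels.index(assessed_risk) >= levels.index(self.review_threshold)

policy = AIGovernancePolicy()
print(policy.requires_review("resume screening", "high"))  # True -> escalate to the committee
```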

2. Map

Map focuses on understanding the ecosystem in which the AI operates and establishing the context for AI risks. Without properly identifying risks in relation to the environment in which the system operates, adequate risk management is nearly impossible.

  • Focus: Contextualizing and identifying AI risks.
  • Key action: Discovering the full AI system landscape, identifying potential harms, sources of bias, and vulnerabilities across the system lifecycle (the "What" and "Where").
  • Implementation: Managing data sources and quality (data governance), documenting the design and configuration of AI systems, and regularly assessing the risks associated with AI operations.
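
As one way to illustrate the Map function's documentation step, the sketch below captures an AI system's context (purpose, data sources, interdependencies, and potential harms mapped to the framework's three harm categories) in a simple inventory entry. The structure and field names are hypothetical, not a NIST-defined schema.

```python
# Hypothetical inventory entry documenting an AI system's context during Map.
# The keys and example values are illustrative assumptions only.
loan_model_record = {
    "name": "loan-approval-scorer",
    "purpose": "Rank consumer loan applications for manual review",
    "lifecycle_stage": "collect and process data",
    "data_sources": ["credit bureau feed", "internal application history"],
    "interdependencies": ["underwriting workflow", "customer notification service"],
    "potential_harms": {
        "people": ["discriminatory denial of credit"],
        "organization": ["regulatory penalties", "reputational damage"],
        "ecosystem": ["reinforcing systemic bias in lending"],
    },
}
```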

3. Measure

The Measure function aims to evaluate the effectiveness of the risk management functions using key metrics for tracking trustworthiness, impact, and functionality. This ensures that the AI systems align with the organization’s objectives and compliance requirements.

  • Focus: Analyzing, tracking, and evaluating AI risks and trustworthiness.
  • Key action: Developing metrics, conducting rigorous testing and validation (V&V), and using evidence to assess performance against risk targets (the "How Much").
  • Implementation: Regularly assessing the effectiveness of implemented policies and controls, conducting compliance audits, and rigorously testing software against the risks outlined in the Map function. Test results must be formalized and reported to relevant stakeholders.
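
As a small illustration of the audit side of Measure, the sketch below checks documented AI systems against a short list of required controls and produces findings that could be reported to stakeholders. The control names and record fields are illustrative assumptions (they extend the hypothetical inventory entry sketched earlier), not part of the framework.

```python
# Hypothetical audit sketch: check that each documented AI system satisfies a
# few required controls. The control names are illustrative, not NIST-defined.
REQUIRED_FIELDS = ["purpose", "data_sources", "potential_harms", "last_bias_review"]

def audit_system(record: dict) -> dict:
    """Return an audit finding for one AI system inventory record."""
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    return {
        "system": record.get("name", "<unnamed>"),
        "compliant": not missing,
        "missing_controls": missing,
    }

def audit_report(inventory: list) -> list:
    """Audit every documented system and return findings for stakeholder reporting."""
    return [audit_system(r) for r in inventory]

findings = audit_report([
    {"name": "loan-approval-scorer", "purpose": "credit triage",
     "data_sources": ["bureau feed"], "potential_harms": {"people": ["bias"]},
     "last_bias_review": "2025-06-01"},
    {"name": "support-chatbot", "purpose": "customer support"},  # missing controls
])
for finding in findings:
    print(finding)
```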

4. Manage

Manage takes the results derived from Map and Measure to ensure resources are properly allocated and applied after system deployment. This function is critical for maintaining the integrity, security, and performance of AI systems throughout their lifecycle.

  • Focus: Allocating resources and taking action to address or mitigate AI risks.
  • Key action: Prioritizing and implementing controls, responding to incidents, and setting up feedback loops for continuous improvement (the "How to Fix").
  • Implementation: Developing and implementing strategies to reduce identified risks, continuously monitoring the AI system’s performance, establishing processes to quickly respond to security breaches or failures, and using feedback mechanisms to continuously improve governance and operational processes.

How can businesses implement the NIST AI RMF?

Implementing the NIST AI RMF requires a strategic approach tailored to the specific needs and challenges of each organization. The framework is intended to be adapted by organizations of all sizes and can be used as a template for their own AI risk management program. 

Here are the key actionable initiatives businesses can take, broken down into three phases.

1. Define and integrate AI governance

This initial phase focuses on establishing the "Govern" function—the foundational structures and policies necessary to align AI strategy with organizational values and legal requirements.

A. Establish an AI Oversight Body

Create a dedicated cross-functional team (legal, engineering, risk management, and ethics) to oversee all AI initiatives and conduct a current state assessment. This body should be the central authority for setting the organization's risk tolerance and defining accountability for AI outcomes.

B. Formalize AI Policy

Develop and document clear internal policies that define acceptable use, risk tolerance, and compliance mandates. This includes outlining procedures for ethical usage, data privacy, and algorithmic transparency. Assess legal compliance with relevant AI regulations and ethical compliance with data privacy and human rights values and principles.

C. Integrate with Existing GRC

Connect AI RMF processes with existing Governance, Risk, and Compliance (GRC) programs. Instead of creating a siloed AI risk program, leverage existing frameworks (like enterprise risk management, see Implementing an Enterprise Risk Management Framework) to ensure AI is managed consistently across the organization. Conduct new training for AI teams and relevant stakeholders on these integrated policies.

2. Execute continuous risk assessment (Map and Measure)

This phase corresponds to the "Map" and "Measure" functions, focusing on understanding the system context and quantifying the risks before deployment.

A. Contextual Mapping

For every new AI project, conduct thorough mapping to clearly define the intended use case, the operating environment, and the potential for harm to people or the ecosystem. This involves identifying the flow of data, the interdependencies between components, and the connection to the organization’s broader operational processes.

B. Develop Trustworthiness Metrics

Establish measurable Key Performance Indicators (KPIs) for the seven trustworthy AI characteristics (e.g., specific metrics for fairness, accuracy, and explainability) that are relevant to the specific AI system.
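
A minimal sketch of what such KPIs might look like in code is shown below. The metric choices and target values are purely illustrative assumptions; each organization would select metrics appropriate to its own systems and risk tolerance.

```python
# Hypothetical KPI targets for a few of the trustworthiness characteristics.
# Metric names and thresholds are illustrative assumptions, not NIST requirements.
TRUSTWORTHINESS_KPIS = {
    "valid and reliable": {
        "metric": "accuracy", "target": 0.90, "higher_is_better": True},
    "fair with harmful bias managed": {
        "metric": "demographic_parity_difference", "target": 0.05, "higher_is_better": False},
    "explainable and interpretable": {
        "metric": "share_of_decisions_with_explanation", "target": 0.99, "higher_is_better": True},
    "secure and resilient": {
        "metric": "adversarial_robustness_score", "target": 0.80, "higher_is_better": True},
}
```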

C. Implement Verification and Validation (V&V) Testing

Conduct rigorous, independent testing and validation against defined risk metrics before deployment. Test results must be formally reported to relevant stakeholders and decision-makers, who use them to make critical go/no-go decisions, primarily whether to proceed with developing and deploying the AI system.
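
Building on the hypothetical KPI registry sketched above, a pre-deployment validation gate might look like the following sketch, which compares measured values against targets and returns a go/no-go decision plus the failures to report to stakeholders. The measured values and decision rule are illustrative only.

```python
# Hypothetical pre-deployment gate. Assumes the TRUSTWORTHINESS_KPIS registry
# from the previous sketch; the measured values below are illustrative.
def validation_gate(measured: dict, kpis: dict) -> dict:
    """Compare measured metric values to KPI targets and return a go/no-go decision."""
    failures = []
    for characteristic, kpi in kpis.items():
        observed = measured.get(kpi["metric"])
        passed = observed is not None and (
            observed >= kpi["target"] if kpi["higher_is_better"] else observed <= kpi["target"]
        )
        if not passed:
            failures.append({"characteristic": characteristic,
                             "metric": kpi["metric"],
                             "observed": observed,
                             "target": kpi["target"]})
    return {"approved_for_deployment": not failures, "failures": failures}

report = validation_gate(
    {"accuracy": 0.93, "demographic_parity_difference": 0.08,
     "share_of_decisions_with_explanation": 1.0, "adversarial_robustness_score": 0.85},
    TRUSTWORTHINESS_KPIS,
)
print(report)  # fairness metric misses its target -> do not deploy; report to stakeholders
```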

3. Implement and iterate risk controls (Manage)

This final phase aligns with the "Manage" function, ensuring resources are properly allocated and applied, and setting up a system for continuous improvement.

A. Prioritize Mitigation Strategies

Develop and implement strategies to reduce identified risks, such as using bias mitigation techniques or additional cybersecurity measures. Resources must be properly allocated and applied after the system is deployed.
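
As a simple illustration of prioritization, the sketch below scores identified risks by likelihood and impact so that mitigation resources go to the highest-scoring items first. The risks, scores, and the 1-to-5 scoring convention are illustrative assumptions rather than anything the AI RMF mandates.

```python
# Hypothetical risk prioritization: score = likelihood x impact on a 1-5 scale.
# The risks, scores, and mitigations below are illustrative only.
identified_risks = [
    {"risk": "harmful bias in loan decisions", "likelihood": 4, "impact": 5,
     "mitigation": "bias testing and reweighting of training data"},
    {"risk": "training data poisoning", "likelihood": 2, "impact": 4,
     "mitigation": "data provenance checks and access controls"},
    {"risk": "model performance decay", "likelihood": 3, "impact": 3,
     "mitigation": "scheduled retraining and drift monitoring"},
]

for r in identified_risks:
    r["score"] = r["likelihood"] * r["impact"]

# Address the highest-scoring risks first.
for r in sorted(identified_risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['score']:>2}  {r['risk']}  ->  {r['mitigation']}")
```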

B. Establish Monitoring and Incident Response

Deploy continuous monitoring tools to assess impacts and detect system drift, bias, or performance decay post-deployment. Establish processes to quickly and effectively respond to security breaches, operational failures, or any other incidents affecting AI systems.
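
A minimal sketch of post-deployment drift monitoring is shown below: it compares a production sample of a model input (or output) against a baseline using the population stability index, a common drift statistic, and flags an incident when a threshold is exceeded. The 0.2 threshold and the alerting behavior are illustrative assumptions.

```python
# Minimal drift-monitoring sketch using the population stability index (PSI).
# The 0.2 threshold and the alert hook are illustrative assumptions.
import numpy as np

def population_stability_index(baseline, current, bins: int = 10) -> float:
    """PSI between a baseline sample and a current production sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) and division by zero
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

def check_for_drift(baseline, current, threshold: float = 0.2) -> None:
    psi = population_stability_index(baseline, current)
    if psi > threshold:
        # In practice this would open an incident / notify the responsible team.
        print(f"ALERT: input drift detected (PSI={psi:.2f}) - trigger incident response")
    else:
        print(f"OK: PSI={psi:.2f} within tolerance")

rng = np.random.default_rng(0)
check_for_drift(rng.normal(0, 1, 5000), rng.normal(0.5, 1.2, 5000))  # shifted sample -> alert
```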

C. Build a Feedback Loop

Implement feedback mechanisms to continuously improve AI governance, management, and operational processes based on performance and compliance assessments. Document and report on all AI activities to ensure accountability and transparency.

What the NIST AI RMF means for AI regulation

The NIST AI RMF serves as a major step forward for AI management and can even act as a potential model for future AI regulations. For businesses, this means that early adoption of the AI RMF not only prepares them for future regulatory compliance but also positions them as leaders in best AI practices.

AI regulation has yet to catch up with the technology because of the rapid development and sensitive nature of AI systems. Regulations must therefore be future-proof, so that new developments in AI technology don’t outgrow the legislation to the point where loopholes can be found.

The NIST AI RMF aims to address these potential issues by approaching AI from a risk-based perspective to ensure that all future development of AI systems is managed effectively. By using the framework, organizations can begin setting the standard for AI system management, ensuring that systems are developed responsibly, ethically, and safely to protect human rights and privacy.

Future AI regulations will likely take a similar approach. Essentially, it comes down to how to minimize AI risks throughout the development cycle, how businesses should continually update their AI systems, how to maintain accountability and transparency, and how to ensure that data processed by AI systems is handled ethically and safely.

AI RMF vs. cybersecurity frameworks: A strategic distinction

Organizations that already utilize established frameworks, such as the NIST Cybersecurity Framework (CSF), must understand how the AI RMF provides necessary, non-redundant strategic context. These two frameworks are designed to work together to ensure both the security and trustworthiness of an AI system.

                                                                                                                                 
NIST Cybersecurity Framework (CSF)
  • Primary focus: Protecting data, networks, and systems from malicious actors or operational failure.
  • Scope: The IT system and data environment around the model (confidentiality, integrity, and availability, the CIA triad).

NIST AI RMF
  • Primary focus: Addressing the risks inherent to the model and its outcomes, including bias, lack of explainability, and potential societal harm.
  • Scope: The AI model's entire lifecycle and its impact on the human experience.

The Need for Both

A common misconception is that standard cybersecurity measures are enough to manage AI risk. However, the AI RMF addresses risks that the CSF does not:

  • Bias and fairness: The CSF protects the integrity of data, while the AI RMF ensures that the data and the resulting model are fair and equitable in their decisions.
  • Explainability: The CSF ensures system access is secure; the AI RMF ensures the system's decision-making process is interpretable and accountable to users.

In practice, organizations must use the CSF for the security of the AI system (e.g., protecting the model from external data poisoning attacks or unauthorized access) and the AI RMF for the trustworthiness of the AI system (e.g., ensuring the model's outputs are ethical, safe, and aligned with human values).
