The NIST Artificial Intelligence Risk Management Framework (AI RMF) is a recent framework developed by the National Institute of Standards and Technology (NIST) to guide organizations across all sectors in managing the risks of artificial intelligence (AI) systems. As AI is implemented in nearly every sector — from healthcare to finance to national defense — it brings new risks and concerns with it.
The rapid adoption of AI technology means that ensuring trust, safety, and ethical use is now one of the most important considerations for any business: organizations must understand the risks of adopting AI and how to implement it safely and securely.
Along with leading cybersecurity principles and practices, this blog explores the NIST AI RMF, how it can be instrumental in building trustworthy AI systems, and how businesses can expand their use of AI while prioritizing data and user privacy.
Learn how UpGuard helps businesses manage risks and meet compliance standards >
The NIST AI Risk Management Framework (AI RMF 1.0) is a comprehensive set of guidelines published by NIST in January 2023 to help organizations assess and mitigate risks associated with AI systems. Its release coincided with the rise of large language models (LLMs) and generative AI tools like ChatGPT, which became the catalyst for broader efforts to regulate and manage AI usage.
As AI technology grows more complex and its usage becomes more widespread, the potential for unintended consequences, ranging from flawed decision-making to issues of transparency and accountability, drastically increases. The AI RMF is not intended as a step-by-step guide to implementing responsible AI usage but as guidance for building effective AI risk management.
The NIST AI RMF provides a structured approach to managing these risks, focusing on building trustworthiness into AI systems throughout their lifecycles, from design to development to deployment. The framework also emphasizes aligning AI governance with broader ethical considerations, promoting an ecosystem where AI technology can be both innovative and safe.
Alongside the AI RMF, NIST released companion materials to assist organizations in their AI system development, including the NIST AI RMF Playbook, the AI RMF Explainer Video, the AI RMF Roadmap, and the AI RMF Crosswalk.
As AI technology becomes more complex and its usage more widespread, the risks outlined above only grow. The AI RMF provides the necessary structure to address these unique risks, making it instrumental to an organization’s security strategy.
AI introduces new types of security risks that traditional cybersecurity principles often overlook. For instance, flawed data or algorithms can lead to harmful bias in decision-making. Additionally, AI can scan and process large amounts of personal data, which presents significant risks of privacy violations and infringements on individual rights. The AI RMF helps organizations proactively identify and secure against these specific vulnerabilities before they result in harm to people, the organization, or the broader ecosystem.
The framework lists seven characteristics of a trustworthy AI system, including being safe, secure and resilient, explainable, and fair. By centering development around these characteristics, the AI RMF directly links risk management to building essential trust with customers and stakeholders. This approach builds accountability and transparency into the use of AI systems, for example, through explainability features that allow users to understand how AI decisions are made.
While AI regulation is still catching up to the technology’s rapid development, the NIST AI RMF serves as a potential model for future mandates. Early adoption of the AI RMF not only positions a business as a leader in best AI practices but also prepares it for future regulatory compliance. By approaching AI from a risk-based perspective, the framework ensures that all development is managed effectively, helping organizations set the standard for ethical and safe AI system management.
Unmanaged AI risks can result in substantial harm to business operations, security breaches, monetary damages, or reputation loss. By providing guidance on continuous monitoring for compliance and regularly updating AI systems to address new security vulnerabilities, the AI RMF is a critical tool for maintaining the integrity, security, and performance of AI systems throughout their lifecycle.
To implement the framework effectively, organizations must first understand the core language and concepts that underpin the AI RMF. It provides a structured approach similar to a Vendor Risk Management Implementation Framework.
The NIST AI RMF breaks down the use of AI and AI systems into focus areas and use cases, primarily covering the risks of AI, the AI lifecycle, how to build trustworthiness into AI systems, and the framework’s core functions for carrying this out through a series of steps.
The NIST AI RMF acknowledges that there may be challenges in measuring AI risks. However, challenges in measuring AI risk do not mean the system itself is high or low risk — it simply means that risk metrics must be established to appropriately measure who or what is at risk.
One of the main challenges of AI is that systems may make decisions based on flawed data or algorithms, leading to harmful bias. Privacy violations are also a risk, as AI technology can scan and process large amounts of personal data, which can infringe on individual rights.
To better analyze AI risks, the NIST AI RMF categorizes the potential impact of AI into three main areas so organizations can understand which areas they must secure:
- Harm to people, such as impacts on individual rights, safety, or economic opportunity
- Harm to an organization, such as damage to business operations, security breaches, monetary loss, or reputational damage
- Harm to an ecosystem, such as damage to interconnected systems, supply chains, or the broader environment
AI systems go through several stages in their lifecycle, from design and development to deployment and post-deployment. Each stage presents specific challenges and risks, which risk management programs must continually assess and monitor throughout the system’s lifecycle.
The AI RMF guides organizations through these stages, recommending best practices such as extensive testing during development, continuous monitoring for compliance, and regular updates to address new security vulnerabilities. According to the framework, AI systems can only be safe and successful with collective responsibility and management through diverse teams at each stage.
The framework breaks the AI system lifecycle down into seven stages:
- Plan and design
- Collect and process data
- Build and use the model
- Verify and validate
- Deploy and use
- Operate and monitor
- Use or be impacted by (the people and communities who interact with or are affected by the system)
The NIST AI RMF lists the characteristics of trustworthy AI systems and explains how building systems around these characteristics can limit the negative impact of AI risks. The AI RMF offers guidance for a multifaceted approach to system development, where key AI actors across different departments collaborate to ensure the system aligns with broader human values.
This approach builds accountability and transparency into the use of AI systems, such as explainability features that allow users to understand how AI decisions are made. It also involves ensuring fairness by regularly auditing AI systems for biases and taking corrective actions where necessary.
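For teams that want to operationalize such audits, the short sketch below shows one way a periodic bias check might look in practice. It computes a disparate impact ratio across groups in a batch of decisions; the column names, sample data, and the four-fifths threshold are illustrative assumptions, not requirements of the AI RMF.

```python
# A minimal sketch of a periodic bias audit, assuming a binary decision outcome
# that can be grouped by a protected attribute. Column names, sample data, and
# the 0.8 threshold are hypothetical examples.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical batch of recent decisions pulled from an AI-assisted workflow.
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "B", "B", "B", "A"],
    "approved":        [1,   0,   1,   1,   0,   1],
})

ratio = disparate_impact_ratio(decisions, "applicant_group", "approved")
if ratio < 0.8:  # "four-fifths rule" heuristic; tune to your own risk tolerance
    print(f"Potential harmful bias detected (disparate impact ratio = {ratio:.2f}); escalate for review.")
else:
    print(f"No disparity above threshold (ratio = {ratio:.2f}).")
```

A check like this would typically run on a schedule, with results fed back to the governance team so corrective actions can be documented.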
NIST lists the necessary AI trustworthiness characteristics as follows:
- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair, with harmful bias managed
The framework acknowledges that the interpretation of trustworthiness characteristics may differ from developer to developer, so an element of human judgment must be incorporated. Organizations may also find it difficult to balance all trustworthiness characteristics and may have to prioritize one over another at times. Ultimately, building a trustworthy AI system depends on collaboration among all key AI actors, who must weigh the risks, impacts, costs, and benefits of using the system.
The framework’s core functions are what enable collaboration between parties during AI risk management. They shape how organizations build dialogue, understanding, and responsibility around managing AI objectives and risks.
The AI RMF defines four core functions (Govern, Map, Measure, and Manage), each with organized subcategories explaining how they can be carried out to help organizations develop safer and more reliable AI systems.
Govern refers to establishing the foundational policies, structures, and practices for responsible AI within an organization. It sets the tone for risk management by ensuring that internal leaders and stakeholders have policies that establish the company’s mission, values, culture, and risk tolerance.
Map focuses on understanding the ecosystem in which the AI operates and providing the necessary context for AI risks. Without properly identifying the risks in relation to the environment in which they arise, adequate risk management is nearly impossible.
The Measure function aims to evaluate the effectiveness of the risk management functions using key metrics for tracking trustworthiness, impact, and functionality. This ensures that the AI systems align with the organization’s objectives and compliance requirements.
Manage takes the results derived from Map and Measure to ensure resources are properly allocated and applied after system deployment. This function is critical for maintaining the integrity, security, and performance of AI systems throughout their lifecycle.
Implementing the NIST AI RMF requires a strategic approach tailored to the specific needs and challenges of each organization. The framework is intended to be adapted by organizations of all sizes and can be used as a template for their own AI risk management program.
Here are the key actionable initiatives businesses can take, broken down into three phases.
This initial phase focuses on establishing the "Govern" function—the foundational structures and policies necessary to align AI strategy with organizational values and legal requirements.
Create a dedicated cross-functional team (legal, engineering, risk management, and ethics) to oversee all AI initiatives and conduct a current state assessment. This body should be the central authority for setting the organization's risk tolerance and defining accountability for AI outcomes.
Develop and document clear internal policies that define acceptable use, risk tolerance, and compliance mandates. This includes outlining procedures for ethical usage, data privacy, and algorithmic transparency. Assess legal compliance with relevant AI regulations and ethical compliance with data privacy and human rights values and principles.
Connect AI RMF processes with existing Governance, Risk, and Compliance (GRC) programs. Instead of creating a siloed AI risk program, leverage existing frameworks (like enterprise risk management, see Implementing an Enterprise Risk Management Framework) to ensure AI is managed consistently across the organization. Conduct new training for AI teams and relevant stakeholders on these integrated policies.
This phase corresponds to the "Map" and "Measure" functions, focusing on understanding the system context and quantifying the risks before deployment.
For every new AI project, conduct thorough mapping to clearly define the intended use case, the operating environment, and the potential for harm to people or the ecosystem. This involves identifying the flow of data, the interdependencies between components, and the connection to the organization’s broader operational processes.
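To make that mapping concrete, the sketch below shows one hypothetical way to capture system context as a machine-readable record. The field names and example values are illustrative of the kind of information the Map function calls for, not a schema defined by NIST.

```python
# A minimal sketch of a "system context" record produced during the Map phase.
# All names and values are hypothetical examples for illustration.
from dataclasses import dataclass, field

@dataclass
class AISystemContext:
    system_name: str
    intended_use: str
    operating_environment: str
    data_sources: list[str] = field(default_factory=list)        # where data flows from
    downstream_consumers: list[str] = field(default_factory=list)  # who depends on outputs
    potential_harms: dict[str, str] = field(default_factory=dict)  # harm area -> description

loan_model_context = AISystemContext(
    system_name="loan-approval-scoring",
    intended_use="Rank consumer loan applications for manual underwriter review",
    operating_environment="Internal underwriting portal, retail banking",
    data_sources=["core-banking-db.applications", "credit-bureau-feed"],
    downstream_consumers=["underwriting team", "quarterly fair-lending audit"],
    potential_harms={
        "people": "Unjustified denial of credit to protected groups",
        "organization": "Regulatory penalties and reputational damage",
        "ecosystem": "Reinforcing systemic bias in credit markets",
    },
)
print(loan_model_context)
```

Keeping this record versioned alongside the model makes it easier to revisit the context whenever the system or its environment changes.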
Establish measurable Key Performance Indicators (KPIs) for the seven trustworthy AI characteristics (e.g., specific metrics for fairness, accuracy, and explainability) that are relevant to the specific AI system.
Conduct rigorous, independent testing and validation against defined risk metrics before deployment. Results must be formally reported to relevant stakeholders and decision-makers and used to drive critical go/no-go decisions, primarily whether or not to proceed with the AI system.
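As an illustration of how such a go/no-go decision might be supported, the sketch below defines hypothetical KPI thresholds for a few trustworthiness characteristics and checks measured values against them before approving deployment. The metric names and threshold values are assumptions an organization would set for itself, not figures prescribed by the AI RMF.

```python
# A minimal sketch of a pre-deployment gate. Metric names and thresholds below
# are hypothetical examples tied to a few trustworthiness characteristics.
KPI_THRESHOLDS = {
    "accuracy":               0.90,  # validity and reliability
    "disparate_impact_ratio": 0.80,  # fairness / harmful bias managed
    "explanation_coverage":   0.95,  # share of decisions with a usable explanation
}

def deployment_gate(measured: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (approved, failures) by comparing measured KPIs to thresholds."""
    failures = [
        f"{name}: {measured.get(name, 0.0):.2f} < {threshold:.2f}"
        for name, threshold in KPI_THRESHOLDS.items()
        if measured.get(name, 0.0) < threshold
    ]
    return (len(failures) == 0, failures)

# Example measurements from an independent validation run (hypothetical values).
approved, failures = deployment_gate({
    "accuracy": 0.93,
    "disparate_impact_ratio": 0.72,
    "explanation_coverage": 0.97,
})
print("Proceed to deployment" if approved else f"Hold for remediation: {failures}")
```

The value of a gate like this is less in the code than in the documentation trail: each threshold and each decision is recorded and reportable to stakeholders.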
This final phase aligns with the "Manage" function, ensuring resources are properly allocated and applied, and setting up a system for continuous improvement.
Develop and implement strategies to reduce identified risks, such as using bias mitigation techniques or additional cybersecurity measures. Resources must be properly allocated and applied after the system is deployed.
Deploy continuous monitoring tools to assess impacts and detect system drift, bias, or performance decay post-deployment. Establish processes to quickly and effectively respond to security breaches, operational failures, or any other incidents affecting AI systems.
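As one example of what such monitoring could look like, the sketch below computes a population stability index (PSI) to flag drift between the distribution of a model input or score seen at validation time and what is observed in production. The bin count and the 0.2 alert threshold are common heuristics, not values prescribed by the framework, and the data here is synthetic.

```python
# A minimal sketch of post-deployment drift detection using the population
# stability index (PSI) on a single numeric feature or model score.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compare the current distribution against the validation-time baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, flooring at a small value to avoid log(0).
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)  # distribution seen during validation
live_scores = rng.normal(0.4, 1.2, 10_000)      # distribution observed in production

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # common heuristic for "significant shift"; tune to your own risk tolerance
    print(f"Drift alert (PSI = {psi:.2f}); trigger the incident response process.")
else:
    print(f"Distribution stable (PSI = {psi:.2f}).")
```

In practice, a check like this would run on a schedule for each monitored feature and model output, with alerts routed into the same incident response process described above.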
Implement feedback mechanisms to continuously improve AI governance, management, and operational processes based on performance and compliance assessments. Document and report on all AI activities to ensure accountability and transparency.
The NIST AI RMF serves as a major step forward for AI management and can even act as a potential model for future AI regulations. For businesses, this means that early adoption of the AI RMF not only prepares them for future regulatory compliance but also positions them as leaders in best AI practices.
AI regulation has yet to catch up to current AI technology because of the rapid development and sensitive nature of AI systems. Regulations must therefore be future-proof, so that new developments in AI technology do not outgrow the legislation to the point where loopholes can be found.
The NIST AI RMF aims to address these potential issues by approaching AI from a risk-based perspective, ensuring that all future development of AI systems is managed effectively. By adopting the framework, organizations can begin setting the standard for AI system management, ensuring that systems are developed responsibly, ethically, and safely to protect human rights and privacy.
Future AI regulations should take a similar approach to regulating the use of AI. Essentially, it comes down to how to minimize AI risks throughout the development cycle, how businesses should continually update their AI systems, how to maintain accountability and transparency, and how to ensure that data processed through AI systems is handled in an ethical and safe manner.
Organizations that already utilize established frameworks, such as the NIST Cybersecurity Framework (CSF), must understand how the AI RMF provides necessary, non-redundant strategic context. These two frameworks are designed to work together to ensure both the security and trustworthiness of an AI system.
A common misconception is that standard cybersecurity measures are enough to manage AI risk. However, the AI RMF addresses risks that the CSF does not, including:
- Harmful bias in decision-making caused by flawed data or algorithms
- Privacy violations and infringements on individual rights arising from the processing of large amounts of personal data
- A lack of transparency, explainability, and accountability in how AI decisions are made
In practice, organizations should use the CSF for the security of the AI system (e.g., protecting the model from external data poisoning attacks or unauthorized access) and the AI RMF for the trustworthiness of the AI system (e.g., ensuring the model's outputs are ethical, safe, and aligned with human values). Used together, the two frameworks help organizations set the standard for responsible, ethical, and safe AI system management.