Core Functions: Govern, Map, Measure, Manage

The AI RMF is built around four essential functions: Govern, Map, Measure, and Manage. These core functions offer a structured and practical approach to handling AI risks.

Here’s a quick look at what each one does:

Govern

This function ties everything together, offering the structure and support needed to manage AI risks. It helps set up the systems, teams, and processes necessary to build a solid foundation for risk management.

At the core of this function are the following requirements:

• GOVERN 1. Policies, processes, and procedures for mapping, measuring, and managing AI risks are established and transparent, ensuring they’re implemented effectively across the organization.
• GOVERN 2. Accountability is clear, with the right teams and individuals trained and empowered to handle AI risks.
• GOVERN 3. Diversity, equity, inclusion, and accessibility are prioritized throughout the AI lifecycle.
• GOVERN 4. Teams are committed to creating a culture of awareness and open communication about AI risks.
• GOVERN 5. Strong processes are in place to engage with relevant AI stakeholders effectively.
• GOVERN 6. There are robust policies for managing risks related to third-party software, data, and other supply chain concerns.

Key aspects of implementation include:

• Creating a risk management team to manage AI risks throughout the AI lifecycle.
• Building a strong risk management culture within your organization.
• Developing clear policies and procedures to guide risk management efforts.
• Integrating feedback mechanisms to identify and address risks as they arise.
Map

The Map function, the first crucial step in the AI lifecycle, highlights the need to gather input from a wide range of people. This means hearing from internal teams, external collaborators, end users, and anyone else affected by the AI system. The more perspectives you include, the better you can understand the risks involved.

The subcategories of this function include:

• MAP 1. The context of the AI system is fully understood.
• MAP 2. The AI system is categorized properly.
• MAP 3. AI capabilities, goals, and costs are compared to relevant benchmarks.
• MAP 4. Risks and benefits are mapped across all parts of the AI system, including third-party elements.
• MAP 5. The impacts on individuals, communities, and society are clearly identified.

Key aspects of implementation include:

• Define your organization’s AI risk tolerance.
• Assess AI capabilities, usage, and goals for alignment with objectives.
• Map risks across the AI system, including third-party elements (see the sketch after this list).
• Set processes to regularly update risk assessments as the AI evolves.
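To make this concrete, here is a minimal sketch of what a machine-readable risk register for the Map function might look like. The field names, risk categories, and 1–5 scoring scale are illustrative assumptions for this sketch, not something the NIST AI RMF prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative risk register entry; field names and the 1-5 scoring
# scale are assumptions for this sketch, not defined by the AI RMF.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    component: str          # e.g. "training data", "third-party model API"
    third_party: bool       # MAP 4: third-party elements are in scope
    likelihood: int         # 1 (rare) .. 5 (almost certain)
    impact: int             # 1 (negligible) .. 5 (severe)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; organizations may apply
        # their own methodology and risk tolerance thresholds.
        return self.likelihood * self.impact

register = [
    RiskEntry("R-001", "Vendor model may drift after silent updates",
              "third-party model API", third_party=True,
              likelihood=3, impact=4),
    RiskEntry("R-002", "Training data underrepresents key user groups",
              "training data", third_party=False,
              likelihood=4, impact=4),
]

# Surface third-party risks first (MAP 4), highest score on top.
for entry in sorted(register, key=lambda r: (not r.third_party, -r.score)):
    print(f"{entry.risk_id}: score={entry.score} third_party={entry.third_party}")
```

Keeping the register as structured data rather than free-form text makes the last step on the list above easier: re-scoring and re-sorting entries is trivial each time the AI system or its dependencies change.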
Measure

The Measure function focuses on testing AI systems before deployment and throughout their operation. Regular testing helps you maintain an up-to-date understanding of how the system functions and whether it remains trustworthy over time.

The subcategories under Measure are:

• MEASURE 1. Identify and apply the right methods and metrics.
• MEASURE 2. Evaluate AI systems for characteristics that build trust.
• MEASURE 3. Implement systems to track AI risks over time.
• MEASURE 4. Gather and assess feedback on the effectiveness of your measurement processes.

Key aspects of implementation include:

• Defining metrics for how you’ll measure AI risk and the effectiveness of your controls.
• Setting up monitoring systems to track AI performance and identify risks as they arise (see the sketch after this list).
• Developing mechanisms to report on and provide feedback regarding the trustworthiness of your AI systems.
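As a rough illustration of the first two points, the sketch below checks tracked metrics against alert thresholds. The metric names and threshold values are hypothetical; a real deployment would choose metrics and limits that match its own system and risk tolerance.

```python
# Hypothetical monitoring check: compare tracked metrics against
# thresholds chosen by the organization. Metric names and threshold
# values here are illustrative, not prescribed by the AI RMF.
THRESHOLDS = {
    "accuracy": (0.90, "min"),              # alert if accuracy drops below 0.90
    "false_positive_rate": (0.05, "max"),   # alert if FPR rises above 0.05
    "demographic_parity_gap": (0.10, "max"),
}

def check_metrics(observed: dict[str, float]) -> list[str]:
    """Return alerts for any metric outside its acceptable range."""
    alerts = []
    for name, (limit, kind) in THRESHOLDS.items():
        value = observed.get(name)
        if value is None:
            # A missing measurement is itself a finding worth flagging.
            alerts.append(f"{name}: no measurement recorded")
        elif kind == "min" and value < limit:
            alerts.append(f"{name}: {value:.3f} below minimum {limit}")
        elif kind == "max" and value > limit:
            alerts.append(f"{name}: {value:.3f} above maximum {limit}")
    return alerts

# Example run with made-up weekly evaluation results.
print(check_metrics({"accuracy": 0.87, "false_positive_rate": 0.02}))
```

Running a check like this on a schedule, and logging its output, gives you the ongoing record of trustworthiness that the Measure function calls for.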
Manage

The Manage function involves regularly allocating resources to address the risks that have been identified and measured, as defined by the Govern function. This function includes having clear plans in place for how to respond to, recover from, and communicate about any incidents or events that may occur.

The subcategories under Manage are:

• MANAGE 1. AI risks identified in the Map and Measure stages are prioritized and handled appropriately.
• MANAGE 2. Strategies to maximize AI’s benefits and minimize risks are planned, documented, and informed by expert input.
• MANAGE 3. Risks associated with third-party AI tools and resources are actively managed.
• MANAGE 4. Risk response, recovery plans, and communication strategies are regularly updated and monitored.

Key aspects of implementation include:

• Documenting and prioritizing AI risks based on their significance (see the sketch after this list).
• Aligning your risk management strategies with the broader goals of the organization.
• Implementing strong, tech-enabled systems for handling incidents and issues related to AI.
• Setting up clear evaluation and monitoring processes, particularly for third-party AI resources.
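As a rough sketch of MANAGE 1, the snippet below ranks documented risks by significance and maps each to a planned response. The score tiers are illustrative assumptions; the response categories (mitigate, transfer, avoid, accept) follow common risk management practice rather than any wording fixed by the framework.

```python
# Illustrative MANAGE 1 triage: rank documented risks and attach a
# planned response. Score tiers are assumptions for this sketch.
risks = [
    {"id": "R-001", "score": 12, "description": "Vendor model drift"},
    {"id": "R-002", "score": 16, "description": "Underrepresented user groups"},
    {"id": "R-003", "score": 4,  "description": "Minor logging gap"},
]

def planned_response(score: int) -> str:
    # Common response categories: mitigate, transfer, avoid, accept.
    if score >= 15:
        return "avoid or mitigate immediately"
    if score >= 8:
        return "mitigate (scheduled)"
    if score >= 4:
        return "transfer or accept with monitoring"
    return "accept"

# Highest-significance risks get attention first.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["id"]} ({risk["score"]}): {planned_response(risk["score"])}')
```

The point of the tiers is not the exact cut-offs but the discipline: every documented risk gets an explicit, reviewable response decision instead of being handled ad hoc.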

Frequently asked questions

What are the NIST requirements for AI?

The NIST AI RMF outlines voluntary guidance for developing and deploying trustworthy AI systems, focusing on reliability, safety, security, transparency, accountability, and fairness. Organizations should also establish governance frameworks to ensure compliance with ethical and regulatory standards for effective AI risk management.

Which US agency is responsible for the AI risk management framework?

The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, is responsible for the AI Risk Management Framework (AI RMF). NIST develops and promotes measurement standards and technology to enhance innovation and industrial competitiveness. The agency collaborates with various stakeholders to ensure the framework’s relevance and applicability across different sectors.

When did NIST release the AI risk management framework?

NIST released the AI Risk Management Framework (AI RMF) on January 26, 2023.

Does NIST AI RMF have a certification?

Currently, the NIST AI RMF does not offer a formal certification. Instead, it serves as a set of guidelines and best practices that organizations can align their AI risk management efforts with. However, organizations can demonstrate adherence to the framework through self-assessments, third-party audits, and implementation of its recommended practices.

Who can perform NIST AI assessments?

NIST AI assessments can be performed by qualified internal teams, third-party auditors, or consultants with expertise in AI risk management and the NIST AI RMF. IS Partners offers a complete package of services to help organizations implement the AI RMF standards according to their industry requirements.
