NIST AI RMF Principle: Manage

The “Manage” function of the NIST AI RMF highlights the importance of a continuous cycle of risk management. 

This means regularly monitoring, evaluating, and refining your controls to adapt to new challenges and ensure that your AI systems remain secure and effective. 

Monitor and Track AI System Performance Post-Deployment

Continuous monitoring is crucial to identify performance degradation, adversarial attacks, and any unexpected AI system behavior. 

To that end, monitor AI systems for impacts that may compromise their integrity and performance over time. This includes tracking dataset modifications and ensuring transparency in data collection and processing.
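One lightweight way to track dataset modifications is to record a content hash of each dataset file at deployment and compare against it later. Here is a minimal Python sketch; the file names and the JSON manifest format are illustrative assumptions, not part of the NIST AI RMF itself:

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return a SHA-256 hash of a dataset file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large datasets don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_datasets(manifest_file: Path, data_dir: Path) -> list[str]:
    """Compare current dataset hashes against a recorded manifest.

    Returns the names of files whose contents have changed since the
    manifest was written, so unexpected modifications can be flagged.
    """
    manifest = json.loads(manifest_file.read_text())
    return [
        name for name, recorded in manifest.items()
        if fingerprint(data_dir / name) != recorded
    ]
```

Regenerating the manifest only as part of a documented, approved data update keeps any other change visible as a monitoring alert.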

Here is how you can implement this step:

  • Set Clear KPIs. Start by defining the key performance indicators that matter most for your system, such as accuracy, response time, and user satisfaction.
  • Use Real-Time Monitoring Tools. Invest in tools that let you track performance continuously. This way, you can spot any unusual behavior right away.
  • Schedule Regular Check-Ins. Make it a habit to review the system’s performance periodically. This helps you see if it’s hitting those KPIs and where there might be room for improvement.
  • Gather User Feedback. Don’t forget to ask users how they feel about the AI system. Their insights can help you identify any issues and areas that need attention.
  • Look for Patterns in the Data. Take some time to analyze the performance data over time. This can help you spot trends, potential problems, or even opportunities to make things better.
  • Be Ready to Adjust. If you notice anything off, be prepared to tweak algorithms or processes based on what you find and the feedback you receive.
  • Keep Records of Changes. Document any adjustments you make, along with the reasons behind them and how they impact performance.
  • Conduct Post-Deployment Audits. Schedule audits to ensure everything aligns with your standards and best practices. 
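The KPI-tracking and "be ready to adjust" steps above can be sketched as a simple rolling-window monitor. This is a minimal illustration, not a prescribed NIST mechanism; the window size and degradation threshold are assumptions you would tune for your own system:

```python
from collections import deque

class KpiMonitor:
    """Track a rolling window of one KPI (e.g. accuracy) and flag
    degradation against the baseline measured at deployment."""

    def __init__(self, baseline: float, window: int = 100, max_drop: float = 0.05):
        self.baseline = baseline      # KPI value validated at deployment
        self.max_drop = max_drop      # tolerated drop before alerting
        self.values = deque(maxlen=window)

    def record(self, value: float) -> None:
        """Add one post-deployment KPI observation."""
        self.values.append(value)

    @property
    def rolling_mean(self) -> float:
        return sum(self.values) / len(self.values)

    def degraded(self) -> bool:
        """True when the rolling mean has fallen more than max_drop
        below the deployment baseline."""
        if not self.values:
            return False
        return self.baseline - self.rolling_mean > self.max_drop
```

A real deployment would feed this from production telemetry and route a `degraded()` result into the alerting and audit trail described above.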

Implement Controls and Safeguards

Managing risks associated with AI systems is crucial, and it requires a thoughtful approach to putting controls and safeguards in place. The goal here is to boost the reliability and trustworthiness of your AI technologies while keeping vulnerabilities to a minimum. 


Actionable Steps for Implementing AI Controls and Safeguards

Review applicable laws and regulations governing AI technologies in your industry to ensure compliance and address any legal liabilities.

Legal and Regulatory Requirements Complementing the NIST AI RMF

  • General Data Protection Regulation (GDPR): Governs data protection and privacy in the EU. Requires strict guidelines on data processing and individual rights.
  • Health Insurance Portability and Accountability Act (HIPAA): Protects health information in the U.S. Mandates privacy and security standards for AI in healthcare.
  • Federal Trade Commission (FTC) Act: Prohibits unfair or deceptive practices in commerce. Requires transparency and ethical AI use.
  • Equal Credit Opportunity Act (ECOA): Prevents discrimination in credit transactions. Ensures fairness in AI decision-making for lending.
  • Algorithmic Accountability Act: Proposed legislation requiring assessments of automated decision systems. Aims to identify and mitigate biases in AI.
  • California Consumer Privacy Act (CCPA): Enhances privacy rights for California residents. Requires transparency in data collection practices.
  • International Organization for Standardization (ISO) Standards: Offer international guidelines, including for AI and data management. Help demonstrate compliance with best practices.
  • Sector-Specific Regulations: Vary by industry (e.g., GLBA for financial services, FERPA for education).

Establish Incident Response Plans

Incidents can arise in various forms when dealing with AI, whether it’s bias in an algorithm, a system malfunction, or a security breach involving sensitive data. 

Having a structured incident response plan ensures that your organization can act quickly to minimize damage and get the AI system back on track when something goes wrong.

Key steps for establishing an incident response plan for AI systems:

1. Pinpoint Potential AI Incidents

Start by identifying the types of incidents that could impact your AI systems. For example, what happens if the AI model unintentionally introduces bias into its decisions? 

Or, how would you handle a data breach caused by a third-party integration? Knowing the specific risks your AI system faces helps tailor your response plan.
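Writing the incident categories down in code gives detection tooling and the response team a shared vocabulary. The taxonomy below is a hypothetical sketch; your own categories and severity levels should come from the risk identification exercise described above:

```python
from dataclasses import dataclass, field
from enum import Enum

class IncidentType(Enum):
    BIAS = "bias"                  # discriminatory or skewed model outputs
    MALFUNCTION = "malfunction"    # wrong, unstable, or nonsensical predictions
    DATA_BREACH = "data_breach"    # exposure of sensitive data, e.g. via a third party
    ADVERSARIAL = "adversarial"    # deliberate manipulation of model inputs

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIIncident:
    """A structured record of one AI incident, ready for triage and audit."""
    incident_type: IncidentType
    severity: Severity
    description: str
    affected_systems: list[str] = field(default_factory=list)
```

Logging every incident in a structure like this also feeds the "continuously refine your plan" step later, since the records can be reviewed in aggregate.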

2. Assign Clear Roles and Responsibilities

Every response plan needs a well-defined team. Who will lead when an AI-related issue arises? Make sure roles are clear, whether it’s the AI developers who can fix technical problems, the compliance team managing legal aspects, or the communication team who will update stakeholders.

3. Set Up Communication Protocols

It’s important to have clear lines of communication during an AI incident. Who needs to be informed, and when? 

Whether it’s customers, internal teams, or regulatory authorities, make sure the right people are notified promptly. Transparency is key in building trust during a crisis.
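A communication protocol can be made concrete as an escalation table mapping incident type and severity to recipients and a notification deadline. The entries below are illustrative assumptions, not required by the NIST AI RMF; your legal and compliance teams define the real routing and timelines:

```python
# Hypothetical escalation table:
# (incident_type, severity) -> (who to notify, deadline in hours)
ESCALATION = {
    ("data_breach", "high"):   (["security", "legal", "regulators", "customers"], 1),
    ("data_breach", "medium"): (["security", "legal"], 4),
    ("bias", "high"):          (["ml_team", "compliance", "comms"], 4),
    ("malfunction", "low"):    (["ml_team"], 24),
}

def notify_plan(incident_type: str, severity: str) -> tuple[list[str], int]:
    """Return (recipients, deadline_hours) for an incident, falling back
    to the on-call team when no explicit rule matches."""
    return ESCALATION.get((incident_type, severity), (["on_call"], 24))
```

Keeping the table in version control means every change to who gets notified, and when, is itself documented.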

4. Conduct AI-specific Drills and Simulations

Testing your response plan regularly is crucial. Organize mock drills focusing on AI incidents, like bias or model failures, so your team knows how to react in real situations. These exercises keep everyone on their toes and reveal any gaps in the plan.

5. Build Recovery Plans

After the immediate incident is handled, how do you restore trust? If an AI system gives incorrect results, how will you address the aftermath? 

Create a recovery plan that not only gets the system back up and running but also communicates with users about the issue and solutions.

6. Continuously Refine Your Plan

Incident response isn’t a set-it-and-forget-it task. After each incident, review what worked and what didn’t. Were there delays in communicating with stakeholders? Was the issue detected too late? 

Make sure to update the plan based on lessons learned so it evolves alongside your AI system.

Frequently asked questions

What are the NIST requirements for AI?

The NIST AI RMF outlines requirements for developing and deploying trustworthy AI systems, focusing on reliability, safety, security, transparency, accountability, and fairness. Organizations must also establish governance frameworks to ensure compliance with ethical and regulatory standards for effective AI risk management.

Which US agency is responsible for the AI risk management framework?

The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, is responsible for the AI Risk Management Framework (AI RMF). NIST develops and promotes measurement standards and technology to enhance innovation and industrial competitiveness. The agency collaborates with various stakeholders to ensure the framework’s relevance and applicability across different sectors.

When did NIST release the AI risk management framework?

NIST released the AI Risk Management Framework (AI RMF) on January 26, 2023.

Does NIST AI RMF have a certification?

Currently, the NIST AI RMF does not offer a formal certification. Instead, it serves as a guideline and best practices framework for organizations to align their AI risk management practices with. However, organizations can demonstrate compliance and adherence to the framework through self-assessments, third-party audits, and by implementing the recommended practices.

Who can perform NIST AI assessments?

NIST AI assessments can be performed by qualified internal teams, third-party auditors, or consultants with expertise in AI risk management and the NIST AI RMF. I.S. Partners offers a complete package of services to help organizations implement the AI RMF standards according to their industry requirements.
