NIST AI RMF Maintenance, Monitoring & Continuous Improvement

A solid monitoring plan is required to catch emerging risks that can surface after deployment.

This includes setting up feedback mechanisms to gather insights from stakeholders about your AI system’s performance. Additionally, regularly refining your Testing, Evaluation, Verification, and Validation (TEVV) processes ensures you’re always prepared to tackle new risks as they arise.

Conduct Regular Audits and Assessments

Regular audits and assessments are crucial to maintaining compliance with the NIST AI RMF. These processes ensure that AI systems are functioning as intended and that risks are managed continuously. 

Engaging expert auditors to guide an AI compliance assessment is recommended. Below are actionable steps aligned with the key functions of the NIST AI RMF (GOVERN, MAP, MEASURE, MANAGE) to guide you through the audit process:

1. Define Audit Scope (MAP)

When conducting audits under the MAP function of the NIST AI RMF, starting with a well-defined scope is crucial. Here’s what you need to do:

  • Identify Key Risks. Begin by mapping out the risks associated with your AI systems, considering operational, legal, and ethical factors.
  • Inventory AI Systems. Keep an updated record of all AI systems, third-party components, and datasets in use; this will help you determine what needs auditing (a minimal inventory record is sketched after this list).
  • Set Audit Objectives. Align your audit goals with organizational risk tolerance and stakeholder expectations. Decide which AI system aspects, such as fairness, bias, security, or performance, will be the focus.
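
To make the inventory step concrete, here is a minimal sketch in Python of what an inventory record could look like. The AISystemRecord class and its fields are illustrative assumptions, not terminology from the NIST AI RMF; adapt them to your own asset-management tooling.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One entry in an AI system inventory (fields are illustrative assumptions)."""
        name: str                   # e.g., "loan-approval-model"
        owner: str                  # team accountable for the system
        third_party_components: list = field(default_factory=list)  # vendor models, APIs
        datasets: list = field(default_factory=list)                # training/evaluation data
        risk_factors: list = field(default_factory=list)            # operational, legal, ethical
        in_audit_scope: bool = True

    # Example: register a system so auditors can see what needs review.
    inventory = [
        AISystemRecord(
            name="loan-approval-model",
            owner="credit-risk-team",
            third_party_components=["vendor-credit-score-api"],
            datasets=["loan-applications-2024"],
            risk_factors=["fairness", "legal"],
        )
    ]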

2. Establish Metrics for Evaluation (MEASURE)

To effectively measure and evaluate your AI systems, set clear, actionable metrics. Here’s what you need to do:

  • Develop Performance Indicators. Set measurable metrics for evaluating AI system performance. This includes accuracy, fairness, bias detection, and resilience to adversarial attacks.
  • Define Risk Indicators. Set thresholds and risk indicators, such as data quality issues, system drift, or changes in behavior, that could signal the need for further assessment (a drift-check sketch follows this list).
  • Check Data Provenance. Ensure the data used by AI systems is continuously tracked and assessed for quality, relevance, and integrity.
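
As one way to operationalize a drift indicator, the sketch below computes the Population Stability Index (PSI), a common drift statistic, over a model input or score. The 0.2 threshold is a widely used convention, not a NIST requirement, and the sample data is illustrative; NumPy is assumed to be available.

    import numpy as np

    def population_stability_index(baseline, current, bins=10):
        """Population Stability Index: higher values mean stronger drift."""
        edges = np.histogram_bin_edges(baseline, bins=bins)
        b_counts, _ = np.histogram(baseline, bins=edges)
        c_counts, _ = np.histogram(current, bins=edges)
        eps = 1e-6  # avoid division by zero / log(0) in empty bins
        b = b_counts / b_counts.sum() + eps
        c = c_counts / c_counts.sum() + eps
        return float(np.sum((c - b) * np.log(c / b)))

    # Example: compare live scores against the baseline captured at deployment.
    rng = np.random.default_rng(0)
    baseline_scores = rng.normal(0.0, 1.0, 5_000)
    live_scores = rng.normal(0.5, 1.0, 5_000)
    psi = population_stability_index(baseline_scores, live_scores)
    if psi > 0.2:  # conventional "significant drift" threshold
        print(f"PSI={psi:.3f}: drift detected, trigger further assessment")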

3. Implement Continuous Monitoring (MANAGE)

Managing AI risks effectively involves continuous oversight to ensure your systems remain trustworthy and compliant. Monitoring is not a one-time task; it is an ongoing process that keeps your AI systems aligned with your risk management goals. Here are the key steps to implement continuous monitoring:

  • Monitor System Trustworthiness. Regularly assess the system’s reliability, fairness, and compliance with applicable laws. Use feedback mechanisms to catch performance issues early.
  • Validate and Verify Changes. After any updates, test and validate the system for consistency with the established risk management controls; document changes and decommission components if necessary (a minimal update-gating sketch follows this list).
  • Evaluate Third-Party Contributions. Ensure third-party models and data continue to meet organizational standards. Review documentation and any updates from external suppliers to manage third-party risks.
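
To illustrate the validate-and-verify step, here is a minimal sketch that gates a model update: the candidate is blocked if any monitored metric regresses beyond a tolerance relative to the metrics recorded under your established controls. The metric names, values, and tolerance are assumptions for illustration, and higher values are assumed to be better.

    def validate_update(baseline_metrics, candidate_metrics, max_regression=0.01):
        """Return the list of metrics that regressed beyond tolerance.
        An empty list means the update is consistent with established controls."""
        failures = []
        for name, baseline_value in baseline_metrics.items():
            candidate_value = candidate_metrics[name]
            if baseline_value - candidate_value > max_regression:
                failures.append(f"{name}: {baseline_value:.3f} -> {candidate_value:.3f}")
        return failures

    # Example: document the outcome either way, per your audit-trail policy.
    failures = validate_update(
        baseline_metrics={"accuracy": 0.93, "fairness_parity": 0.95},
        candidate_metrics={"accuracy": 0.94, "fairness_parity": 0.90},
    )
    if failures:
        print("Block deployment; regressions found:", failures)
    else:
        print("Update consistent with risk management controls")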

Regularly Update Practices and Policies

Keeping your AI governance policies current is crucial for ensuring accountability and staying ahead of potential risks. As AI systems evolve, governance should, too. Here’s a practical approach to help you manage this:

  • Clarify Accountability. Ensure clear oversight responsibilities for technical, legal, and compliance teams.
  • Revise Regularly. Update governance frameworks to align with new regulations, system changes, or emerging risks.
  • Establish a Review Schedule. Set a regular review cycle, such as quarterly or annually, for policy updates.
  • Gather Feedback from Users. Collect insights from AI users to refine policies.
  • Monitor Regulatory Changes. Stay updated on legislative changes to ensure compliance.
  • Conduct Gap Analyses. Compare current policies with best practices to identify and close gaps.
  • Test Policies in Controlled Environments. Pilot updates in a controlled setting before organization-wide implementation.

Develop Metrics and Reporting Mechanisms

Developing metrics and reporting mechanisms is essential for effective AI governance and risk management. Here are actionable steps to guide this process:

1. List the Key Metrics

Start with the metrics you identified in the previous section. Here are the metrics you need to monitor post-implementation (a minimal computation sketch follows the list):

  • Fairness. Measure disparities in outcomes across different demographic groups.
  • Accuracy. Track the percentage of correct predictions or decisions made by the AI system.
  • Transparency. Monitor the availability and clarity of decision-making explanations.
  • Robustness. Evaluate how well the AI system handles edge cases or disruptions.
  • Bias. Conduct regular bias assessments to detect and mitigate any unintended discrimination.
  • Compliance. Verify adherence to legal and regulatory standards.
  • Security. Track potential vulnerabilities and breaches.
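
As promised above, here is a minimal sketch of how two of these metrics, accuracy and a demographic-parity gap for fairness, could be computed from a batch of predictions. The parity gap is one of several ways to quantify fairness, and the group labels and data below are illustrative.

    import numpy as np

    def accuracy(y_true, y_pred):
        """Share of correct predictions."""
        return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

    def demographic_parity_gap(y_pred, groups):
        """Largest difference in positive-outcome rates across demographic groups."""
        y_pred, groups = np.asarray(y_pred), np.asarray(groups)
        rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
        return float(max(rates) - min(rates))

    # Example batch of decisions (illustrative data)
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 0, 1, 1, 1, 0, 1]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(f"accuracy = {accuracy(y_true, y_pred):.2f}")
    print(f"parity gap = {demographic_parity_gap(y_pred, groups):.2f}")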

2. Integrate Metrics Into Policies

Embed the identified metrics into organizational policies, processes, and procedures to ensure accountability across all AI lifecycle stages. Make these policies accessible and understandable to all relevant stakeholders, including technical teams and management.
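
One lightweight way to make such policies machine-readable and accessible is a simple policy table that pairs each metric with an accountable owner, a threshold, and a review cadence. The structure and values below are illustrative assumptions, not a prescribed NIST format.

    # Illustrative policy table: each metric gets an accountable owner,
    # a threshold drawn from organizational risk tolerance, and a review cadence.
    METRIC_POLICY = {
        "accuracy":   {"owner": "ml-engineering", "threshold": 0.90,  "review": "monthly"},
        "parity_gap": {"owner": "responsible-ai", "threshold": 0.10,  "review": "monthly"},
        "uptime":     {"owner": "platform-team",  "threshold": 0.999, "review": "quarterly"},
    }

    for metric, policy in METRIC_POLICY.items():
        print(f"{metric}: owned by {policy['owner']}, reviewed {policy['review']}")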

3. Develop Reporting Mechanisms

Create clear reporting formats and schedules to communicate metric results to relevant stakeholders. 

Moreover, ensure that the reporting mechanisms are transparent and allow for quick identification of issues or areas needing improvement.

Some examples of reporting mechanisms include:

  • Dashboards. Interactive, real-time displays that provide visual insights into key metrics such as accuracy, fairness, and system performance. 
  • Monthly or Quarterly Reports. Written reports that summarize AI system performance over a set period. 
  • Automated Alerts. Instant notifications via email or internal communication tools (e.g., Slack, Microsoft Teams) when key performance metrics fall below set thresholds or when critical incidents (e.g., security breaches) occur. A minimal alert-and-logging sketch follows this list.
  • Audit Trails/Logs. Detailed logs capturing AI decisions, system updates, and changes in metrics over time. 
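
To show how automated alerts and audit logs can work together, here is a minimal sketch using only the Python standard library: every metric reading is appended to a log file (the audit trail), and a threshold breach posts a message to an incoming-webhook URL of the kind Slack and Microsoft Teams provide. The webhook URL, log file name, and threshold are placeholder assumptions.

    import json
    import logging
    import urllib.request

    # Audit trail: every reading is logged, breach or not.
    logging.basicConfig(filename="ai_metrics_audit.log", level=logging.INFO)

    ALERT_WEBHOOK = "https://hooks.example.com/ai-alerts"  # placeholder URL

    def report_metric(name, value, threshold):
        """Log the reading and send an alert if it falls below the threshold."""
        logging.info("metric=%s value=%.4f threshold=%.4f", name, value, threshold)
        if value < threshold:
            payload = json.dumps(
                {"text": f"ALERT: {name}={value:.4f} fell below {threshold}"}
            ).encode("utf-8")
            request = urllib.request.Request(
                ALERT_WEBHOOK, data=payload,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(request)  # fire the notification

    report_metric("accuracy", 0.87, threshold=0.90)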

4. Monitor and Review Metrics Regularly

Establish a regular review schedule for the metrics and reports to ensure they remain relevant and effective over time. Involve a cross-functional team to assess metric performance and recommend adjustments based on evolving organizational needs and risks.

Frequently Asked Questions

What are the NIST requirements for AI?

The NIST AI RMF outlines requirements for developing and deploying trustworthy AI systems, focusing on reliability, safety, security, transparency, accountability, and fairness. Organizations must also establish governance frameworks to ensure compliance with ethical and regulatory standards for effective AI risk management.

Which US agency is responsible for the AI risk management framework?

The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, is responsible for the AI Risk Management Framework (AI RMF). NIST develops and promotes measurement standards and technology to enhance innovation and industrial competitiveness. The agency collaborates with various stakeholders to ensure the framework’s relevance and applicability across different sectors.

When did NIST release the AI risk management framework?

NIST released the AI Risk Management Framework (AI RMF) on January 26, 2023.

Does NIST AI RMF have a certification?

Currently, the NIST AI RMF does not offer a formal certification. Instead, it serves as a guideline and best-practices framework with which organizations can align their AI risk management practices. However, organizations can demonstrate adherence to the framework through self-assessments, third-party audits, and implementation of the recommended practices.

Who can perform NIST AI assessments?

NIST AI assessments can be performed by qualified internal teams, third-party auditors, or consultants with expertise in AI risk management and the NIST AI RMF. I.S. Partners offers a complete package of services to help organizations implement the AI RMF standards according to their industry requirements.

