Key Takeaways
1. AI Governance Is Evolving: NIST’s 2025 updates expand the framework to address generative AI, supply chain vulnerabilities, and new attack models.
2. Integration Is the Future: The AI RMF now aligns more closely with cybersecurity and privacy frameworks, simplifying cross-framework compliance.
3. Operationalize AI Risk Management: Organizations must move beyond policy to continuous monitoring, measurement, and improvement.
AI systems are spreading rapidly across enterprise, government, and critical infrastructure, making strong governance and risk management essential. The National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (AI RMF) helps organizations address AI-related risks.
Since its initial release in January 2023, the NIST AI Risk Management Framework v1.0 has continually evolved to reflect emerging risks, concerns about trustworthiness, and regulatory expectations. For organizations adopting AI, it’s vital to understand what’s changed, why it matters, and how to align compliance and risk management practices.
Baseline: What NIST AI RMF 1.0 Established
Released in January 2023, NIST AI RMF 1.0 was developed in a transparent, consensus-driven process involving public comments, workshops, and cross-sector input. Key components include:
- Cross-industry Model: A voluntary, risk-based framework designed to apply across all sectors and AI use cases.
- Trustworthiness Attributes: Valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair.
- Lifecycle Approach: Encourages risk assessment and mitigation from design through deployment and decommissioning.
- A Structure Built Around Four Core Functions:
- Govern: Establish governance structures, policies, and roles/responsibilities.
- Map: Identify system context, data, stakeholders, and dependencies.
- Measure: Monitor performance, trustworthiness, risks, and outcomes.
- Manage: Prioritize, mitigate, and continuously monitor AI risks, including third-party risks.
These concepts map to existing control frameworks used in IT audit and compliance, including SOC, PCI DSS, CMMC, and HITRUST.
What’s New in the 2025 Updates to NIST AI RMF?
While NIST hasn’t yet published a formal “AI RMF 2.0,” several important updates and related initiatives shape the 2025 landscape:
a. Expanded Taxonomy of AI Threats and Attacker Models
A March 2025 update introduces broader threat categories, such as poisoning attacks, evasion attacks, data extraction, and model manipulation, to address generative AI and LLM vulnerabilities (NIST AI 100-2e2025: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2025.pdf).
b. Integration with Other NIST Frameworks
NIST is actively aligning its AI RMF with the Cybersecurity Framework (CSF) and Privacy Framework, helping organizations unify governance and risk programs under one umbrella.
c. Introduction of Maturity Model Guidance
NIST is encouraging organizations to measure AI risk maturity and adopt continuous improvement processes. This approach parallels other maturity-based models in cybersecurity and compliance.
d. Focus on Generative AI Use Cases
The updates acknowledge the unique risks associated with generative AI systems, including hallucinations, data leakage, and synthetic content misuse, and provide guidance for managing them.
e. Stronger Supply Chain and Third-Party Risk Management
The March 2025 update emphasizes model provenance, data integrity, and third-party model assessment, recognizing that most organizations rely on external or open-source AI components.
Why These Changes Matter
For organizations operating in regulated sectors or those deploying AI in business-critical processes, these updates mean:
- Greater Regulatory Scrutiny: While the AI RMF is voluntary, federal agencies, regulators, and industry bodies increasingly reference the NIST AI RMF in their compliance and governance standards.
- Broader Audit Readiness: Risk assessments must now cover AI-specific risks and controls, including bias, explainability, and model vulnerabilities.
- Need for Governance Maturity: Operationalizing AI oversight can deliver compliance and competitive advantages, but AI governance must evolve to keep pace with risk.
- Third-Party/Vendor Risk Amplification: Third-party AI components introduce new risks requiring greater due diligence and ongoing monitoring, along with updated model-risk controls.
- Accelerated Risk Cycles: AI threats evolve quickly, demanding faster, more agile governance and response.
Aligning with the Updated Framework
Here’s our streamlined five-step roadmap for aligning with the 2025 updates to the NIST AI Risk Management Framework:
Step 1: Inventory and Categorize AI Systems (Map)
- Build an AI Bill of Materials (AI-BOM) for all models, data, and vendors.
- Identify context, stakeholders, and dependencies for each AI system.
- Assess model risk from pre-trained or open-source models as well as third-party AI usage.
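To make the AI-BOM concrete, it can be kept as a simple structured inventory that records each system's model source, data dependencies, and vendors, so third-party exposure is easy to query. This is a minimal sketch under our own assumptions; the field names and schema are illustrative, not a format prescribed by NIST.

```python
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    """One entry in an AI Bill of Materials (illustrative schema)."""
    system_name: str
    model_source: str            # e.g., "in-house", "open-source", "vendor"
    training_data: list[str]     # datasets the system depends on
    vendors: list[str] = field(default_factory=list)
    stakeholders: list[str] = field(default_factory=list)

def third_party_entries(bom: list[AIBOMEntry]) -> list[AIBOMEntry]:
    """Flag entries that rely on external models or vendors for extra due diligence."""
    return [e for e in bom if e.model_source != "in-house" or e.vendors]

bom = [
    AIBOMEntry("fraud-scoring", "in-house", ["txn-history-2024"]),
    AIBOMEntry("support-chatbot", "vendor", ["kb-articles"], vendors=["LLM Provider X"]),
]
flagged = third_party_entries(bom)  # only the vendor-supplied chatbot is flagged
```

Even a lightweight registry like this gives the Map function a concrete artifact that later steps (governance, monitoring, vendor due diligence) can build on.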
Step 2: Update Governance (Govern)
- Define clear AI risk ownership and roles.
- Update policies to address new threat types and model risks.
- Integrate AI risk into enterprise IT and compliance governance.
- Expand your risk taxonomy and due diligence protocols based on the new guidance.

Step 3: Establish Metrics and Monitoring (Measure)
- Define key metrics using the “Measure” core function of the AI RMF as a guide.
- Implement continuous monitoring and anomaly detection for AI systems.
- Expand controls to cover emerging threats, such as data poisoning and model extraction.
- Ensure audit trail transparency, including model changes, data versioning, and third-party assessments.
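The continuous monitoring bullet above can be sketched as a rolling anomaly check on a model quality metric (for example, daily accuracy). This is an illustrative sketch: the window size and z-score threshold are our own assumptions, not values prescribed by the AI RMF.

```python
from collections import deque
from statistics import mean, stdev

class MetricMonitor:
    """Rolling z-score anomaly check on a model quality metric (illustrative)."""

    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent readings only
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a new reading; return True if it is anomalous vs. the window."""
        anomalous = False
        if len(self.history) >= 5:  # need a few points before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

monitor = MetricMonitor()
readings = [0.91, 0.90, 0.92, 0.91, 0.90, 0.89, 0.62]  # last reading drops sharply
alerts = [monitor.observe(r) for r in readings]        # only the final drop alerts
```

In practice this kind of check would feed an alerting pipeline and be logged alongside model changes and data versions, supporting the audit-trail requirement in the same step.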
Step 4: Manage and Mitigate Risks (Manage)
- Prioritize risks by potential impact and exposure.
- Implement model hardening, human-in-the-loop controls, and incident response plans.
- Align mitigation efforts with SOC, PCI DSS, HITRUST, and CMMC frameworks.
- Document and monitor remediation effectiveness to enable continuous improvement.
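Prioritizing by impact and exposure, as the first bullet above suggests, can be as simple as ranking risks by an impact-times-likelihood score. The 1-to-5 scale and the example risks below are our own assumptions for illustration; the AI RMF does not mandate a specific scoring scheme.

```python
def prioritize(risks: list[dict]) -> list[dict]:
    """Sort risks by descending impact x likelihood score (illustrative)."""
    return sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True)

risks = [
    {"name": "training-data poisoning", "impact": 5, "likelihood": 2},   # score 10
    {"name": "prompt injection in chatbot", "impact": 4, "likelihood": 4},  # score 16
    {"name": "model drift", "impact": 3, "likelihood": 3},               # score 9
]
ranked = prioritize(risks)  # highest-scoring risk first
```

A scored list like this makes it straightforward to document why one mitigation was funded before another, which in turn supports the remediation-effectiveness tracking in the last bullet.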
Step 5: Continuous Improvement and Audit Readiness
- Integrate AI risk controls into existing audit programs.
- Use maturity models to benchmark progress.
- Use the AI RMF’s structure as a way to map to existing control frameworks.
- Continuously refine risk posture as NIST issues new profiles and updates.

The IS Partners Advantage
As organizations adopt more advanced AI systems—from predictive analytics to generative AI—the risk landscape is shifting quickly. The 2025 updates to the NIST AI Risk Management Framework signal that organizations must move from planning to operationalizing AI risk management.
IS Partners’ IT compliance and risk advisory experts help clients align with evolving standards while leveraging proven audit methodologies. Our streamlined audit model simplifies cross-framework alignment—transforming AI compliance into a catalyst for innovation and trust.
What Should You Do Next?
- Map Your AI Footprint: Create an inventory of AI systems and dependencies to understand exposure and governance needs.
- Modernize Governance: Update risk and compliance frameworks to incorporate AI-specific policies, roles, and threat categories.
- Engage Expert Support: Partner with IS Partners to integrate NIST AI RMF practices into existing audit and compliance programs.
