Key Takeaways
1. The NIST AI RMF Is Evolving Toward Operational, Industry-Specific Guidance: While “NIST AI RMF 2.0” hasn’t been formally released, new profiles and implementation resources signal a shift toward more practical, sector-specific AI risk management.
2. Generative AI and Third-Party Risk Are Now Central to AI Governance: The latest updates emphasize managing risks like hallucinations, data leakage, and supply chain dependencies, making third-party oversight and model transparency critical.
3. AI Risk Management Requires Continuous, Maturity-Driven Processes: Organizations must move beyond static policies to adopt ongoing monitoring, measurable controls, and continuous improvement aligned with rapidly evolving AI threats.
AI systems continue to expand across enterprise, government, and critical infrastructure environments—making governance, transparency, and risk management more critical than ever. The NIST AI Risk Management Framework (AI RMF) remains the leading voluntary standard helping organizations manage AI risk in a structured, defensible way.
Since its initial release in January 2023, the framework has steadily evolved. Recent NIST AI Risk Management Framework updates from 2025 to 2026 reflect a shift from foundational guidance to more operational, sector-specific, and implementation-ready resources. While NIST has not formally released a document labeled “NIST AI RMF 2.0,” the latest guidance, profiles, and technical publications collectively represent a significant maturation of the framework.
For organizations deploying AI, understanding the NIST AI RMF latest version—and how to operationalize it—is essential for compliance, audit readiness, and risk resilience.
Baseline: What NIST AI RMF 1.0 Established
Released in January 2023, NIST AI RMF 1.0 introduced a flexible, voluntary framework designed to be applied across industries and AI use cases. It was built through a collaborative, cross-sector process and established several foundational principles:
- A risk-based, use-case-agnostic model applicable to all sectors
- Seven defined trustworthiness characteristics that evaluate whether an AI system is:
- Valid and reliable
- Accountable and transparent
- Safe
- Secure and resilient
- Explainable and interpretable
- Privacy-enhanced
- Fair
- A full AI lifecycle approach, from design through deployment and decommissioning
- A structure organized around four core functions:
- Govern: Establish oversight, accountability, and policies
- Map: Understand context, stakeholders, and system dependencies
- Measure: Evaluate performance, risks, and trustworthiness
- Manage: Prioritize and mitigate risks continuously
These functions align closely with existing compliance frameworks such as SOC 2, ISO 27001, HITRUST, PCI DSS, and CMMC—making the AI RMF a natural extension of existing governance programs.
What’s New in the NIST AI RMF (2025–2026 Updates)
Although there is no formal “NIST AI RMF 2.0,” the latest updates signal a clear evolution toward more prescriptive, use-case-driven guidance.
1. Emergence of AI RMF Profiles (Including Critical Infrastructure)
One of the most important developments is the introduction of AI RMF Profiles, which tailor the framework to specific sectors and risk environments.
NIST’s concept note for a “Trustworthy AI Profile for Critical Infrastructure” highlights:
- Sector-specific risk considerations (energy, healthcare, transportation, etc.)
- Emphasis on safety, resilience, and system reliability
- Integration with national security and operational continuity concerns
This signals a broader move toward contextualized AI risk management, where organizations are expected to adapt the framework to their industry-specific threat landscape.
2. Continued Expansion of AI Threat Taxonomy
Recent updates expand on AI-specific threats, particularly those associated with generative AI and large language models (LLMs), including:
- Data poisoning and adversarial manipulation
- Model extraction and inversion attacks
- Prompt injection and misuse scenarios
- Synthetic content risks and deepfakes
This evolution reflects the growing need to treat AI systems as attack surfaces, not just business tools.
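Treating model inputs as an attack surface can start with something as simple as screening prompts before they reach an LLM. The sketch below is a deliberately minimal illustration, not a vetted detection list: the patterns are hypothetical examples, and real deployments need far more robust, continuously updated defenses, since simple regexes are easy to evade.

```python
import re

# Hypothetical example patterns for illustration only; production systems
# should rely on layered, continuously updated injection defenses.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def flag_prompt_injection(prompt: str) -> bool:
    """Return True if the prompt matches a known-suspicious pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A flagged prompt would typically be blocked or routed for review rather than silently dropped, so that misuse attempts are also logged as risk signals.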
3. Deeper Integration with Cybersecurity and Privacy Frameworks
Into 2026, NIST has continued aligning the AI RMF with:
- The NIST Cybersecurity Framework (CSF 2.0)
- The NIST Privacy Framework
This convergence enables organizations to integrate AI governance into broader enterprise risk management programs, reducing duplication and improving audit efficiency.
4. Operationalization Through Implementation Resources
The NIST AI RMF latest version is increasingly supported by:
- Playbooks and implementation guides
- Use-case-specific profiles
- Measurement and evaluation methodologies
This marks a shift from conceptual guidance to practical execution, helping organizations embed AI risk management into daily operations.
5. Increased Focus on Generative AI Governance
Generative AI is now central to NIST’s guidance, with emphasis on:
- Managing hallucinations and output reliability
- Preventing sensitive data leakage
- Monitoring downstream use and misuse of generated content
- Establishing human oversight and validation mechanisms
Organizations are expected to implement controls specific to generative AI, not just general AI governance policies.
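One concrete generative-AI control is redacting sensitive identifiers before a prompt leaves the organization's boundary. The sketch below shows the idea with two assumed patterns (US SSN and email); a production control would use a vetted DLP tool rather than hand-rolled regexes.

```python
import re

# Hypothetical redaction rules for two common identifier formats.
# Real deployments should use a dedicated DLP/PII-detection library.
REDACTIONS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with [REDACTED:<type>] placeholders."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Pairing redaction with audit logging of what was redacted (and why) supports both the leakage-prevention and human-oversight expectations above.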
6. Strengthened Supply Chain and Third-Party Risk Management
The updated guidance reinforces the importance of:
- Model provenance and traceability
- Third-party model validation and due diligence
- Monitoring open-source and vendor-supplied AI components
As AI ecosystems grow more complex, third-party risk is now a primary concern, not a secondary one.
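A basic provenance control is pinning a cryptographic digest for every vendor-supplied or open-source model artifact and verifying it before use. The helper below is a minimal sketch of that check; the function name and workflow are illustrative, not part of NIST guidance.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a model artifact's SHA-256 digest against a pinned value.

    Returns False on any mismatch, which should block deployment and
    trigger an investigation into the artifact's provenance.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

Recording the pinned digest alongside the source, license, and version of each component also feeds directly into the AI-BOM described later in this article's roadmap.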

Why These NIST AI RMF Updates Matter
For organizations adopting AI—especially in regulated or high-risk environments—the 2025–2026 updates introduce several important implications:
- Greater Regulatory Alignment: Even as a voluntary framework, the AI RMF is increasingly referenced by regulators and policymakers, making it a de facto standard for AI governance.
- Expanded Audit Scope: Audits now extend beyond traditional IT controls to include model behavior and reliability, bias and fairness testing, explainability and transparency controls, and AI-specific security vulnerabilities.
- Shift Toward Maturity-Based Governance: Organizations are expected to move beyond ad hoc controls toward measurable AI risk maturity models.
- Higher Expectations for Third-Party Oversight: Vendors, APIs, and open-source models must be continuously assessed—not just vetted at onboarding.
- Faster Risk Cycles: AI risks evolve rapidly, requiring continuous monitoring and adaptive governance, not periodic reviews.
Aligning with the NIST AI RMF Latest Version: A 2026 Roadmap
To align with the latest NIST AI RMF guidance, organizations should adopt a structured, lifecycle-based approach:
Step 1: Inventory and Categorize AI Systems (Map)
- Develop an AI Bill of Materials (AI-BOM)
- Identify system context, dependencies, and stakeholders
- Classify AI systems by risk level and criticality
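The inventory step above can be sketched as a simple structured record per AI system. The field names below are illustrative assumptions, not a NIST-defined AI-BOM schema; the point is that an inventory should be queryable (for example, filtering to high-risk systems), not a static spreadsheet.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal AI-BOM record; field names are illustrative.
@dataclass
class AISystemRecord:
    name: str
    owner: str
    model_source: str                   # e.g. "vendor", "open-source", "in-house"
    data_sources: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)
    risk_tier: str = "unclassified"     # e.g. "high", "medium", "low"

def high_risk(inventory: list) -> list:
    """Filter the inventory down to systems classified as high risk."""
    return [s for s in inventory if s.risk_tier == "high"]
```

In practice the same records can drive the Measure and Manage steps below, since risk tier and ownership determine monitoring depth and escalation paths.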
Step 2: Strengthen AI Governance (Govern)
- Define ownership for AI risk and compliance
- Update policies to reflect generative AI and emerging threats
- Align AI governance with enterprise risk and compliance frameworks
Step 3: Implement Measurement and Monitoring (Measure)
- Establish AI-specific KPIs and risk indicators
- Monitor for anomalies, drift, and adversarial behavior
- Maintain detailed audit logs for models, data, and outputs
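Drift monitoring, mentioned above, is often implemented with a distribution-comparison statistic. The sketch below computes a Population Stability Index (PSI) over pre-binned feature or score distributions; PSI is a common industry heuristic, not a metric prescribed by NIST, and the usual thresholds (below 0.1 stable, 0.1 to 0.25 moderate shift, above 0.25 significant shift) are conventions to be tuned per system.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two pre-binned distributions (lists of bin proportions).

    Higher values indicate the live ("actual") distribution has drifted
    from the baseline ("expected") one. A small epsilon guards against
    taking log of zero when a bin is empty.
    """
    eps = 1e-6
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )
```

A scheduled job can compute PSI per model input and output, write the result to the audit log, and raise an alert (a candidate AI-specific KPI) when a threshold is crossed.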
Step 4: Mitigate and Manage AI Risk (Manage)
- Prioritize risks based on business impact
- Deploy controls such as:
- Human-in-the-loop validation
- Model hardening and testing
- Incident response playbooks for AI failures
- Map controls to frameworks like SOC 2, ISO 27001, HITRUST, and PCI DSS
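Human-in-the-loop validation, listed among the controls above, often reduces to a routing decision: release model output automatically only when confidence clears a threshold, otherwise queue it for a reviewer. The sketch below assumes a hypothetical confidence score and threshold; both would be calibrated per use case and risk tier.

```python
# Illustrative threshold; in practice this is calibrated per use case
# and typically set more conservatively for high-risk systems.
REVIEW_THRESHOLD = 0.85

def route_output(output: str, confidence: float) -> str:
    """Gate model output: auto-release only above the confidence threshold."""
    if confidence >= REVIEW_THRESHOLD:
        return "auto-release"
    return "human-review"
```

Logging every routing decision, including the confidence value, gives auditors the evidence trail that the control actually operates as designed.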
Step 5: Adopt Continuous Improvement and AI RMF Profiles
- Leverage NIST AI RMF Profiles to tailor controls by industry
- Benchmark against AI maturity models
- Continuously refine controls as new NIST guidance emerges
The IS Partners Advantage
As AI adoption accelerates, organizations must move beyond theory and operationalize AI risk management. The latest NIST AI Risk Management Framework updates (2025–2026) make it clear that governance, monitoring, and accountability must be embedded into everyday business processes.
IS Partners helps organizations align with the NIST AI RMF latest version by integrating AI risk into existing compliance frameworks such as SOC 2, HITRUST, ISO 27001, and PCI DSS. Our audit-driven approach ensures that AI governance is not only compliant but measurable, scalable, and defensible.
By combining deep compliance expertise with practical implementation strategies, IS Partners enables organizations to transform AI risk management into a competitive advantage grounded in trust, transparency, and resilience.
What Should You Do Next?
Conduct an AI System Inventory and Risk Assessment: Identify all AI systems in use—including third-party and generative AI tools—and document their purpose, data sources, dependencies, and risk exposure.
Align AI Governance With NIST AI RMF and Existing Frameworks: Update policies, roles, and controls to reflect the latest NIST AI RMF updates (2025–2026), ensuring alignment with frameworks like SOC 2, ISO 27001, HITRUST, and PCI DSS.
Implement Continuous Monitoring and AI Risk Controls: Establish ongoing monitoring for model performance, security threats, and data integrity, while incorporating human oversight, audit logging, and incident response processes.