Development of Additional Control Requirements To Address Emerging AI Risks
The existing 51 controls within the HITRUST AI RMF already set a solid foundation for organizations managing AI risks. These controls align with ISO/IEC 23894, NIST AI RMF, and other global compliance frameworks, providing a well-rounded approach to AI governance.
So, how do we ensure HITRUST AI RMF keeps pace?
- Adapting to emerging AI security threats. Cybercriminals are finding new ways to exploit AI-driven vulnerabilities. Future controls must focus on adversarial AI risks, model poisoning, and cybersecurity best practices.
- Stronger governance for generative AI. New controls should address accountability, liability, and ethical safeguards for these systems.
- Regulatory compliance in a fragmented AI landscape. Global AI laws are evolving independently, so organizations need a flexible compliance framework that adapts to new regulations as they emerge.
Which additional control requirements can you combine with HITRUST AI RMF? Here are some examples:
- Adversarial Attack Resilience. AI models must undergo penetration testing for adversarial attacks, including evasion, model poisoning, and inference attacks.
- Model Hardening Controls. Implement defenses such as differential privacy, robust training techniques, and adversarial filtering to prevent unauthorized model manipulation.
- Secure AI Deployment Requirements. AI applications must adhere to role-based access control (RBAC) and zero-trust architecture (ZTA) for deployment environments.
- Model Versioning and Change Control. Every AI model update must be logged, reviewed, and approved before deployment to ensure compliance and prevent unintended changes.
- Automated Drift Detection and Alerting. AI models must be monitored for data and concept drift, with alerts triggered when performance degrades beyond acceptable thresholds.
- AI Retirement and Decommissioning Policies. Organizations must define when AI models should be decommissioned and how to handle legacy AI systems securely.
- Explainability & Decision Traceability. AI models making critical decisions (e.g., in healthcare, finance, and hiring) must provide human-readable explanations for their outputs.
- AI Decision Logging and Auditing. Maintain detailed logs of AI decisions, training data sources, and justifications to allow auditability and regulatory compliance.
- User-Requested Explanation Rights. Implement functionality allowing end users to request explanations of how AI-driven decisions affect them.
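To make the deployment control above concrete, here is a minimal role-based access control (RBAC) sketch for AI deployment actions. The role names, actions, and permission map are illustrative assumptions, not a HITRUST-defined scheme:

```python
# Hypothetical RBAC check for AI deployment environments.
# Roles, actions, and the permission map below are illustrative assumptions.
ROLE_PERMISSIONS = {
    "ml_engineer": {"train", "evaluate"},
    "release_manager": {"approve", "deploy"},
    "auditor": {"view_logs"},
}

def is_authorized(role: str, action: str) -> bool:
    """Allow an action only if the role's permission set explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

is_authorized("release_manager", "deploy")  # granted
is_authorized("ml_engineer", "deploy")      # denied: deploy is a separate duty
```

Keeping deployment rights separate from training rights enforces separation of duties, which pairs naturally with a zero-trust posture where no role is trusted by default.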
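The versioning and change-control requirement can be sketched as a simple deployment gate: an update ships only after each required step is recorded. The record fields and step names here are assumptions for illustration:

```python
# Minimal sketch of a model change-control gate: a model update may deploy
# only after it has been logged, reviewed, and approved. Field names are
# illustrative, not a mandated schema.
REQUIRED_STEPS = ("logged", "reviewed", "approved")

def can_deploy(change_record: dict) -> bool:
    """Deploy only when every required change-control step is marked complete."""
    return all(change_record.get(step) for step in REQUIRED_STEPS)

update = {"model": "triage-v3", "logged": True, "reviewed": True, "approved": False}
can_deploy(update)  # blocked until an approver signs off
update["approved"] = True
can_deploy(update)  # now eligible for deployment
```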
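Automated drift detection and alerting can be as simple as comparing a recent window of model performance against a baseline and alerting when the decline exceeds a threshold. The function name and the 5-point drop threshold below are assumptions; real deployments would also track input-distribution drift, not just accuracy:

```python
# Illustrative sketch of performance-drift alerting: flag the model when
# mean accuracy over a recent window drops more than `max_drop` below the
# baseline. Threshold and window are assumed values.
from statistics import mean

def detect_performance_drift(baseline_scores, recent_scores, max_drop=0.05):
    """Return (drifted, drop), where drop is the decline in mean score."""
    drop = mean(baseline_scores) - mean(recent_scores)
    return drop > max_drop, drop

baseline = [0.91, 0.92, 0.90, 0.93]   # accuracy during validation
recent = [0.84, 0.83, 0.85, 0.82]     # accuracy over the last week
drifted, drop = detect_performance_drift(baseline, recent)
# drifted is True here: the ~0.08 decline exceeds the 0.05 threshold
```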
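Finally, the decision-logging requirement can be sketched as an append-only audit record per decision. The field names are assumptions rather than a HITRUST-mandated schema; note that inputs are hashed rather than stored raw, to limit exposure of sensitive data in the log itself:

```python
# Hypothetical append-only AI decision log for auditability. Field names
# are illustrative. Inputs are hashed (not stored verbatim) so the log
# supports traceability without duplicating sensitive data.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log, model_version, features, output, rationale):
    """Append one auditable record of an AI-driven decision and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Canonical JSON -> SHA-256, so identical inputs hash identically.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "rationale": rationale,
    }
    log.append(record)
    return record

audit_log = []
log_decision(audit_log, "risk-model-2.1", {"age": 54, "bmi": 31.2},
             "high_risk", "BMI and age exceeded screening thresholds")
```

A log shaped like this also backs the user-requested explanation right: the stored rationale and model version can be surfaced to the affected individual on demand.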