Overview of NIST AI RMF Webinar
The AI RMF Webinar, conducted on June 10, 2024, and led by Ian Terry, Director for Cybersecurity Services, and Jena Andrews, Senior Consultant at IS Partners, delivered a comprehensive overview of artificial intelligence and the NIST AI Risk Management Framework (RMF).
The webinar underscored the importance of safely and securely integrating AI into business operations. It opened with a deep dive into AI, its diverse applications, and its inherent challenges and risks, such as malfunction, bias, and overdependence.
A thorough overview of the NIST AI RMF was provided, explaining its voluntary nature and potential regulatory implications. The characteristics of trustworthy AI systems were highlighted, focusing on safety, security, and transparency. The core functions of the AI RMF were detailed through a practical example involving a hypothetical company’s adoption of the framework.
The webinar offered actionable steps and best practices for businesses, including conducting an initial AI risk assessment and developing policies for the fair use of AI systems. These steps empower organizations to navigate the complexities of AI integration with confidence and accountability.
Below, we highlight the critical ideas discussed in the webinar.
AI at Its Core
The speakers of the AI RMF Webinar provided a comprehensive exploration of artificial intelligence, focusing on its integration into businesses and the associated challenges. Jena Andrews began by describing AI through the NIST AI Risk Management Framework, which defines AI as “an engineered or machine-based system capable of generating outputs such as predictions, recommendations, or decisions that influence real or virtual environments.”
Andrews then offered a layman's definition generated by ChatGPT: AI is a branch of computer science that aims to create machines capable of performing tasks that typically require human intelligence.
The integration of AI into today's business world was a key focus, with the speakers illustrating how AI is used to identify patterns, derive relationships, and recognize differences in data more effectively than humans can.
The Emergence of New Risks
The adoption of AI is not without its challenges. There are apprehensions and ethical concerns about integrating AI into the workplace, primarily because of the vast amounts of data these solutions consume. Questions were raised about whether AI systems are properly configured, managed, and monitored to withstand cybersecurity attacks. The importance of implementing and incorporating AI safely and securely was emphasized.
The speakers also highlighted some of the key risks associated with the rise of AI, focusing on the root causes of businesses' reservations about adopting it.
The most notable risks and potential impacts listed in the discussion are as follows.
- Malfunction risks and inability to address problems
- Biases in decision-making introduced by AI algorithms
- Over-reliance on AI use
- Cybersecurity concerns and appropriate monitoring
- General security of sensitive information
- Ethical challenges and security issues
- Impact on civil liberties
- Supply chain risks due to vulnerabilities
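To make these categories actionable, an organization might record them in a simple risk register and score each entry. The sketch below is a minimal illustration in Python; the risk names, the 1-5 scoring scale, and the scores themselves are assumptions for illustration, not values defined by the NIST AI RMF.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    category: str
    likelihood: int  # illustrative 1-5 scale (an assumption, not defined by the RMF)
    impact: int      # illustrative 1-5 scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Categories mirror the webinar's list; entries and scores are placeholders.
register = [
    AIRisk("Model produces wrong output unnoticed", "Malfunction", 2, 5),
    AIRisk("Biased hiring recommendations", "Bias", 3, 4),
    AIRisk("Staff over-reliance on AI answers", "Over-reliance", 4, 3),
    AIRisk("Unmonitored model endpoint attacked", "Cybersecurity", 3, 4),
    AIRisk("Sensitive data sent to external model", "Data security", 3, 5),
    AIRisk("Vulnerable third-party model component", "Supply chain", 2, 4),
]

# Review the register from highest to lowest score.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name} ({risk.category})")
```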
To address these risks and potential negative impacts, NIST developed a new framework that sets a standard for responsible AI use.
What Is the NIST AI RMF?
Terry and Andrews collectively described the NIST AI RMF (Artificial Intelligence Risk Management Framework) as a voluntary framework developed by the National Institute of Standards and Technology (NIST) to guide organizations in managing risks associated with the use of AI technologies.
The framework, referred to in the webinar as the NIST AI RMF 1.0, is a recently released publication designed to address the risks and challenges associated with AI technologies.
The NIST AI standards take a structured risk management approach and incorporate AI-specific elements into it, considering various aspects of AI, including ethical concerns, cybersecurity, and the need for trustworthy AI systems.
As one of the speakers put it: “This RMF, as it exists today, is kind of like the standard of best practice that I would anticipate in the future is going to become the actual mandatory standard, certainly for the public sector and likely to expand into the private sector as well… There’s certainly going to be an extension into that as it relates to AI; and I would say AI RMF, if that isn’t the standard itself, is going to be a critical part of it.”
The primary purpose of the framework is to help organizations identify, assess, and manage risks associated with AI systems. It aims to ensure that AI is implemented and incorporated safely and securely, addressing ethical concerns and cybersecurity risks.
What Are the Core Components of the NIST AI RMF?
The framework provides guidelines for better understanding and managing AI risks, ensuring that AI systems are trustworthy, secure, and resilient.
Ian Terry explained that the NIST AI RMF is divided into four core components or functions: Govern, Map, Measure, and Manage.
These functions help organizations implement the framework in a structured manner. They address different aspects of AI risk management, from governance and accountability to mapping AI use within the organization, measuring AI risks, and managing those risks effectively.
How extensively the framework is applied depends on the company’s structure, risk tolerance, requirements, and resources.
Govern
The “Govern” function defines roles and responsibilities for managing AI risks. It establishes accountability and provides clear guidelines on which key AI actors own different aspects of AI risk management.
Ian Terry posed the questions, “Who’s responsible for this part of the AI?” and “Who do we turn to to make sure that we’re complying with our contractual, or in the future, regulatory obligations associated with AI governance?”, underscoring the need for clarity in accountability and governance.
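As a rough illustration of the accountability mapping the “Govern” function calls for, the sketch below assigns hypothetical owners to AI governance duties. Both the duties and the roles are assumptions for illustration; the RMF does not prescribe a specific org chart.

```python
# Hypothetical mapping of AI governance duties to accountable owners.
# Duties and roles are illustrative assumptions, not RMF requirements.
governance_roles = {
    "AI acceptable-use policy": "Chief Information Security Officer",
    "Model and vendor procurement review": "Procurement / Legal",
    "Bias and fairness review": "AI Ethics Committee",
    "Regulatory and contractual compliance": "Compliance Officer",
    "Escalation of AI incidents": "Security Operations",
}

def owner_for(duty: str) -> str:
    """Answer the Govern question: who is responsible for this part of the AI?"""
    return governance_roles.get(duty, "UNASSIGNED - governance gap to close")

print(owner_for("Bias and fairness review"))    # AI Ethics Committee
print(owner_for("Model retraining approvals"))  # surfaces an unassigned duty
```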
Map
The “Map” function is about understanding where AI is used within the organization. This includes identifying business processes, systems, and supply chains that rely on AI.
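A practical starting point for mapping is a simple inventory that ties each AI-dependent system to its business process and suppliers. The sketch below shows one hypothetical way to structure such an inventory; the record fields and example systems are assumptions, not part of the framework.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One hypothetical entry in an AI usage inventory."""
    system: str
    business_process: str
    suppliers: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("Resume screening model", "Hiring", ["VendorX"]),
    AISystemRecord("Support chatbot", "Customer service", ["External LLM API"]),
    AISystemRecord("Fraud scoring model", "Payments"),
]

# Surface supply-chain exposure: which processes depend on external AI suppliers?
for record in inventory:
    if record.suppliers:
        print(f"{record.business_process}: {record.system} "
              f"depends on {', '.join(record.suppliers)}")
```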
Measure
The “Measure” function involves identifying how AI risks are being tracked and monitored. This core function includes ensuring that trustworthy AI characteristics are being implemented and assessing the efficiency of these measures. The speakers explained,
"It’s really getting down to the intricacies of how AI risks are being identified, how these system solutions are being monitored, tracking these metrics, and making sure that those trustworthy AI characteristics… making sure that these things again are being understood and implemented, and then, of course, assessing the efficiency of how these things are being measured."
Manage
The “Manage” function ensures that resources are allocated appropriately and that AI systems are consistently monitored post-implementation. It involves ongoing oversight to ensure that AI systems continue to function as intended and that any issues are promptly addressed. Andrews noted,
"It also touches on making sure resources are being allocated appropriately, and making sure any sort of post-implementation, post-deployment of AI systems are consistently being monitored."
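One lightweight way to operationalize that post-deployment monitoring is a recurring cycle that re-runs the measurements and escalates anything out of bounds. The sketch below assumes a hypothetical check_metrics() helper like the one outlined under “Measure”; the escalation path is illustrative.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-rmf-manage")

def check_metrics() -> list[str]:
    """Hypothetical stand-in for the Measure-step checks; returns breached metrics."""
    return ["hallucination_rate"]  # placeholder result for illustration

def manage_cycle() -> None:
    """One post-deployment monitoring cycle: re-measure, then escalate any breaches."""
    breaches = check_metrics()
    if breaches:
        log.warning("Escalating to risk owner: %s", ", ".join(breaches))
    else:
        log.info("All AI risk metrics within bounds.")

manage_cycle()
```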
How Does NIST AI RMF Align and Contrast with Other NIST Guidelines?
The NIST AI Risk Management Framework (RMF) aligns with other NIST guidelines but also introduces unique considerations for AI systems. From the high-level discussion, the following alignments and contrasts were derived.
Alignment with Other NIST Guidelines
- Risk Management Focus. The AI RMF, like NIST SP 800-53 and SP 800-171, emphasizes a structured approach to risk management. It integrates AI-specific elements into a standard risk management framework.
- Governance and Compliance. The AI RMF, similar to the NIST Cybersecurity Framework, includes governance to ensure policies, procedures, and accountability mechanisms are in place for compliance with regulatory and contractual obligations.
- Security and Data Privacy. The AI RMF aligns with NIST’s mission to improve systems’ cybersecurity posture by incorporating security and privacy considerations specific to AI.
- Lifecycle Approach. The AI RMF, like other NIST guidelines, adopts a lifecycle approach to managing risks, from design and development to deployment and monitoring.
Differences Unique to NIST AI RMF
- Focus on AI-Specific Risks. The AI RMF addresses risks unique to AI systems, such as bias, fairness, explainability, and ethical concerns, in more depth than other NIST guidelines.
- Trustworthiness of AI Systems. The AI RMF introduces the concept of a “trustworthy AI system,” focusing on attributes like security, resilience, and explainability to ensure AI systems can be trusted by users and stakeholders.
- Ethical Considerations. Ethical concerns, including the societal impacts of AI systems, are a significant aspect of the AI RMF and receive less emphasis in other NIST frameworks.
- Detailed Functions for AI Risk Management. The AI RMF outlines specific functions (Govern, Map, Measure, and Manage) tailored to AI systems, providing detailed guidance on implementing each one effectively.
Managing AI-Related Risks with a Structured Framework
The NIST AI RMF provides a structured approach to managing AI-related risks by introducing a comprehensive set of guidelines and practices. Here are the key ways in which the NIST AI RMF helps in managing these risks, as discussed in the webinar:
Framework Structure
The NIST AI RMF breaks down the process of managing AI risks into four core functions: Govern, Map, Measure, and Manage. These functions provide a high-level approach to addressing AI risks and ensuring that AI systems are implemented and managed responsibly.
Ethical and Security Considerations
The framework emphasizes the importance of addressing ethical concerns and ensuring the security and resilience of AI systems. This includes securing AI systems against cybersecurity attacks and ensuring their integrity and availability.
Trustworthy AI
The NIST AI RMF focuses on developing and utilizing AI systems that are considered trustworthy, addressing concerns related to ethics, security, and transparency. Organizations are encouraged to develop AI systems with these attributes to ensure they are reliable and align with societal values.
Incident Response Planning
Incident response planning prepares organizations to respond to AI-related incidents and mitigate potential risks. This includes planning for AI dependencies in incident response exercises and ensuring readiness for any AI-related situations or incidents.
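A tabletop exercise that accounts for AI dependencies might start from a short runbook like the hypothetical sketch below; the scenarios and response steps are illustrative assumptions rather than RMF requirements.

```python
# Hypothetical AI incident runbook for tabletop exercises.
# Scenarios and response steps are illustrative, not prescribed by the RMF.
runbook = {
    "AI model outage": [
        "Fail over to the documented manual process",
        "Notify owners of dependent business processes",
    ],
    "sensitive data exposed via AI output": [
        "Disable the affected integration",
        "Engage privacy and legal teams",
        "Assess customer notification obligations",
    ],
}

def respond(scenario: str) -> None:
    """Print the response steps for a scenario, or escalate if none exist."""
    for step in runbook.get(scenario, ["No playbook - escalate to incident commander"]):
        print(f"- {step}")

respond("sensitive data exposed via AI output")
```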
Acknowledgments
Special thanks to Jena Andrews and Ian Terry for providing a comprehensive yet clear discussion of the apprehensions surrounding artificial intelligence and how the NIST AI RMF addresses them.