Overview of the AI Management Webinar

The AI Without Guardrails: Why Ignoring Compliance Could Sink Your Business webinar, held on October 30, 2024, was presented by Ian Terry, Principal Cybersecurity Specialist at I.S. Partners, alongside guest speaker Cole Medin, a generative AI consultant. Together, they provided an in-depth discussion on the role of AI in business and the critical importance of maintaining compliance to prevent security vulnerabilities and business risks.

This session began by exploring the potential security risks associated with AI in the workplace, including data vulnerabilities and exposure to unauthorized access. Terry and Medin also highlighted the growing prevalence of compliance frameworks that help mitigate these risks. They emphasized the role of standards like the NIST AI RMF and ISO 42001 in guiding companies toward secure AI integration, particularly when handling sensitive information.

Medin offered expert insights into real-world AI applications, from knowledge management systems to large-scale AI deployments. He discussed the risks and rewards of AI integration, noting the importance of secure infrastructure for companies handling proprietary data. Ian Terry provided guidance on balancing security measures with the benefits of AI, underscoring the need for companies to evaluate their unique risk profiles.

The session concluded with practical steps for businesses to safeguard AI deployments, including conducting initial risk assessments, implementing acceptable use policies, and determining when a local deployment of AI infrastructure is preferable. These measures equip organizations to leverage AI effectively while protecting sensitive data and adhering to regulatory expectations.

AI Security Risks and AI as a Threat Vector for Cybersecurity

The webinar highlighted critical security risks associated with adopting AI, with both Ian Terry and Cole Medin emphasizing how AI can introduce vulnerabilities within a company’s infrastructure and even serve as a tool for threat actors. 

As Terry put it, the world of AI can feel like “a wild west,” where security controls lag behind the technology’s rapid development, and conventional cybersecurity principles like multi-factor authentication (MFA) and phishing awareness are not always adapted to AI’s unique landscape.

The risks discussed fall into two main areas: data privacy and code execution vulnerabilities

Each poses distinct challenges, especially when combined with AI’s expansive data-processing abilities. Moreover, threat actors have begun leveraging AI to enhance their attack strategies, a phenomenon Terry noted as reminiscent of early hacking days but amplified through AI’s power.

The discussion highlighted the following risks brought about by the emergence of AI in the industry.

Data Privacy and Transmission Risks

Many companies use AI for knowledge management, creating databases from internal documents, revenue reports, and sensitive information. As Medin noted, however, “anything that goes over the Internet can be intercepted.” 

This introduces considerable risk, as large datasets are often sent across networks, exposing them to potential breaches. Medin explained that regulatory requirements prevent many businesses from sending sensitive data through popular large language models like GPT or Claude due to concerns about data leakage.

Code Execution Vulnerabilities

Modern AI systems can execute code, making them vulnerable to remote code execution (RCE) attacks. Medin warned that threat actors could “submit malicious code that, when it runs on your software, would pull your database credentials and send them back to them.” 

This means AI systems must implement strict controls on the code they accept to avoid unintentionally running dangerous commands.

Jailbreaking Models

Terry and Medin discussed the phenomenon of “jailbreaking” AI models, wherein attackers manipulate AI to bypass security protocols. The discussion surmised that large language models can be tricked into engaging in actions they would otherwise be programmed to avoid by “reframing the context” of the query, leading AI into providing harmful information. 

Medin illustrated this with an example where, by simulating a chemistry class, users could coax AI into generating dangerous instructions.

Compliance Frameworks for AI

Adopting the right compliance frameworks is crucial for businesses seeking to leverage AI securely and responsibly. Frameworks such as the NIST AI RMF and ISO 42001 provide essential guidelines that help organizations assess and manage risks associated with AI, striking a balance between innovation and security. 

As Ian Terry pointed out, these frameworks serve as “a guiding light for risk-based AI adoption,” equipping companies with adaptable tools to mitigate AI-related vulnerabilities while maintaining operational flexibility.

NIST AI RMF (AI Risk Management Framework)

The NIST AI RMF, developed by the National Institute of Standards and Technology, encourages organizations to identify, categorize, and address AI-related risks. Although it doesn’t mandate specific controls, the framework’s flexibility allows companies to tailor their approach according to their unique operational needs. 

Terry noted that the framework builds on NIST’s established legacy, explaining how it “helps organizations mitigate AI-related risks without locking them into rigid controls.”

Background

Dive Deeper!

Learn more about the NIST AI RMF from I.S. Partners’ previous webinar about the framework!

Watch Webinar Here

ISO 42001

For companies with international reach, ISO 42001 offers prescriptive guidelines that facilitate safe and compliant AI integration on a global scale. This standard is particularly valuable for organizations that need consistent security protocols across borders. 

Adopting ISO 42001 can also enhance a company’s credibility, demonstrating to partners and clients a commitment to secure, standardized AI practices—a move he described as “a competitive advantage” in today’s AI-driven business industry.

While these frameworks are currently voluntary, they may soon serve as the foundation for future AI regulations, particularly in industries handling sensitive public data. Terry speculated that compliance mandates similar to the Federal Information Security Management Act (FISMA) could emerge within the next few years, making early adoption a wise strategy for companies looking to stay ahead of regulatory shifts.

Compliance questions? Get answers!

Book a free 30-minute consultation with a specialist to find your path to compliance. Secure your spot today.

SPEAK TO AN EXPERT

Navigating Compliance Challenges

While AI compliance frameworks like NIST AI RMF and SOC 2 Type II offer essential guidelines, implementing these standards often presents significant challenges. Ian Terry and Cole Medin highlighted several common obstacles that organizations face when striving to align their AI systems with these frameworks.

Below, we collected these challenges to help you identify them clearly.

Lack of Awareness and Understanding 

Many companies acknowledge the importance of AI security but lack a deep understanding of compliance frameworks. Medin noted that most companies “know they have to be secure, but they don’t know what goes into making AI compliant.” 

This knowledge gap makes it challenging for organizations to design and implement effective AI security policies.

Balancing Security with Accessibility

AI’s power comes from accessing large datasets, but integrating security controls without compromising functionality is difficult. “Companies want the full features of AI but face risks when sensitive data goes into external models,” Terry explained. 

This tension complicates compliance, particularly for organizations handling proprietary or sensitive data.

Data Privacy Management in AI Systems 

For companies managing private data, the challenge is even greater. Medin shared that, increasingly, clients are considering running AI systems locally to avoid sending sensitive information over the internet. This approach can reduce compliance risk but requires significant infrastructure and resources.

High Cost of Compliance Implementation

Many businesses, particularly smaller ones, struggle with the financial and operational resources required to implement and monitor AI compliance effectively. Terry emphasized that frameworks like SOC 2 Type II offer a “competitive advantage” but acknowledged that the cost of certification can be prohibitive for less mature firms.

Evolving Regulatory Landscape 

With regulatory standards still in flux, it’s challenging for businesses to stay compliant while keeping up with rapidly advancing AI capabilities. Terry suggested that it’s “likely we’ll see something comparable to FISMA for AI,” meaning that companies may face even stricter compliance requirements in the near future.

These challenges highlight the importance of independent auditing in AI compliance. As Terry noted, assessments such as SOC 2 Type II provide an added “degree of assurance” that companies are safeguarding sensitive data and adhering to compliance standards. 

This external validation helps organizations navigate the complexities of AI compliance with confidence and accountability. Learn more about these assessments from our experts.

The Future of AI Security from the Experts

In the final part of the webinar, Ian Terry and Cole Medin explored what the future holds for AI security, noting several emerging trends and anticipated regulatory shifts that are likely to shape the industry.

Peek into the insights of these experts to envision the future of AI—a landscape where explainable models, enhanced regulatory standards, and robust monitoring will empower organizations to harness AI safely and transparently.

Growing Importance of Compliance and Potential New Regulations

As AI becomes more integrated into sensitive areas like government contracting and public data, regulatory mandates similar to FISMA may emerge. Terry pointed out that frameworks like the NIST AI RMF could become foundational for these future laws, indicating that “early adopters of these frameworks will be better prepared” when stricter regulations arrive.

Explainable AI 

Medin discussed the concept of explainable AI as a major development in enhancing AI security and accountability. Currently, AI systems, especially large language models, are known for their “black-box” nature, making it difficult to trace how an output was derived. 

Explainable AI would allow companies to understand and even audit AI decision-making, reducing the risk of unexpected or harmful outputs.

AI Monitoring and Oversight 

Emerging tools like Langsmith and Langfuse enable organizations to monitor interactions between multiple AI agents or “agents” within a system. 

This approach is especially valuable for companies with complex AI infrastructures, allowing them to observe and document each agent’s decisions and actions. 

Monitoring tools add an extra layer of security by helping organizations stay informed on how AI systems function over time.

Increasing Adoption of Hybrid AI Models 

As businesses weigh security concerns, Medin noted a trend toward hybrid AI deployments, where sensitive data is handled locally rather than in third-party cloud environments. Companies might use secure, locally-hosted AI models for private data processing while still leveraging cloud-based AI for public-facing or less sensitive applications.

Acknowledgments

Special thanks to Ian Terry and Cole Medin for sharing their expertise on AI security challenges and the essential role of compliance frameworks.

Ian Terry
Ian Terry
Principal Cybersecurity Specialist
With nearly a decade of experience, Ian specializes in cybersecurity and compliance at I.S. Partners, guiding organizations in secure AI integration through frameworks like NIST and SOC 2.
View More
Cole Medin
Cole Medin
Generative AI Consultant
A seasoned AI consultant and former cloud engineer, Cole helps organizations harness the potential of generative AI while prioritizing security and compliance.
View More

Get started

Get a quote today!

Fill out the form to schedule a free, 30-minute consultation with a senior-level compliance expert today!

Analysis of your compliance needs
Timeline, cost, and pricing breakdown
A strategy to keep pace with evolving regulations

Great companies think alike.

Join hundreds of other companies that trust I.S. Partners for their compliance, attestation and security needs.

paymedia-logo-1richmond-day-logopresort logodentaquest-4AGM logovrs-veraclaim-logo
Scroll to Top