Key Takeaways
1. California’s SB 1047, a bill under review, targets high-cost, high-power AI models with stringent safeguards.
2. The EU AI Act (the EU Artificial Intelligence Act) uses a risk-based framework, applying stricter rules to high-risk AI and lighter rules to low-risk AI.
3. IS Partners offers support across AI management frameworks, such as the NIST AI RMF.
California’s SB 1047 vs The EU AI Act: A Detailed Summary
California’s SB 1047, an unpassed bill, aims to bring responsible development and deployment of AI to industries statewide. It mandates AI transparency requirements, accountability, and ethical standards, and addresses AI’s integration into sensitive areas such as healthcare, employment, and criminal justice.
This bill reflects a growing emphasis on protecting the public from unintended consequences, setting a foundation to ensure AI technologies are used in ways that respect human rights and public trust.
Some of the key highlights of SB 1047 you need to be aware of are:
- Preventing Critical Harm. Developers must implement measures to prevent models from causing harm.
- Kill Switch Requirement. Developers need to establish “shutdown capabilities” to halt operations of covered models when necessary.
- Cybersecurity Measures. Strong protections are required to guard against unauthorized access or unsafe modifications.
- Safety Protocols. A detailed Safety and Security Protocol (SSP) must be prepared, reviewed annually, and shared with the Attorney General when requested.
- Rigorous Oversight. Developers are subject to annual independent audits, mandatory compliance reporting, and incident reporting within 72 hours of safety breaches.
In contrast, the EU AI Act sets regulations for AI across the European Union using a risk-based framework. This approach assigns varying regulatory obligations based on the level of risk an AI system poses to society.
Higher-risk applications, such as those involving biometric personal data or law enforcement, face stricter requirements, while lower-risk applications are subject to lighter standards.
(The EU AI Act entered into force on August 1, 2024, whereas California’s SB 1047 passed the state legislature on August 28, 2024.)
The proposed bill SB 1047 and the EU AI Act aim to set industry benchmarks for ethical AI, but with different levels of strictness and focus areas that cater to their respective jurisdictions.
Update: On September 29, 2024, Governor Newsom vetoed SB 1047, returning it to the legislature without signing. In his veto statement, he criticized the bill as “a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.” He reiterated earlier concerns, noting that SB 1047 focuses on regulating models based on their cost and size rather than their actual function in the real world.
Overview of Differences Between EU AI Act vs California’s SB 1047
| Parameter | EU AI Act | SB 1047 |
|---|---|---|
| 1. Scope | Applies to AI providers and deployers based in the EU, or whose systems impact the EU market, regardless of location. | Targets AI models with over $100 million in development costs and over 10^26 FLOPs of training compute; developers remain responsible for how their models are used and modified. |
| 2. Objectives | EU AI Act prioritizes safety, and fundamental rights, reducing burdens for SMEs, fostering investment, protecting democracy, and positioning Europe as a global AI leader. | SB 1047 focuses on ethical AI development, fostering innovation, workforce training, addressing bias, improving state operations, and ensuring accountability. |
| 3. Application of Standard | Applies to all AI providers and deployers within the EU, or whose systems impact the EU market. | Focuses on large-scale models with high potential for harm, targeting models that exceed set cost and compute thresholds. |
| 4. Process of Compliance | Key steps: identify AI systems, classify risk levels, assemble governance team, and technical documentation of high-risk AI. | Key steps: risk assessment, safety protocols, transparency obligations, incident reporting, and third-party audits. |
| 5. Risk and Safety Approach | Categorizes AI systems into unacceptable (prohibited), high-risk (regulated), limited risk (transparency required), and minimal risk (codes of practices encouraged). | Targets large models with critical harm potential, like AI used in weapons or cyberattacks. |
| 6. Repercussions For Breaches | Fines up to €35 million or 7% of global turnover for breaches, depending on severity. | Enforced by the Attorney General, penalties of up to 10% of computing costs for violations. |
The EU AI Act vs SB 1047: Key Differences And Contrasts
California’s SB 1047 (a proposed bill that was ultimately vetoed) and the EU AI Act serve as significant regulatory frameworks, each addressing distinct focus areas.
Understanding their nuances can help you determine which standards align with your AI strategy and operational needs.
Below, we further dissect the difference between the two AI programs based on different parameters.
- Scope
- Application of Standard
- Objectives
- Process of Compliance
- Risk and Safety Approach
- Penalties for Violations and Breaches
Scope
EU AI Act
The EU AI Act has an extraterritorial scope. It applies to any provider bringing an AI system or general-purpose AI model (GPAI system) to the EU market, whether or not the provider is based in the EU.
This wide-reaching application ensures that any AI used within the EU, regardless of origin, adheres to the EU’s stringent regulatory standards.
SB 1047
SB 1047 is a state-specific bill that targets AI models exceeding specific computational power and development cost thresholds. Under this bill, developers operating in California are held legally accountable for how their models are used or modified later.
Before beginning model training, developers must certify that their models won’t enable or support “hazardous capabilities.” They’re also required to implement a comprehensive set of safeguards to prevent harmful applications of their AI systems.
Objectives
EU AI Act
The EU AI Act’s objectives are to:
- Ensure safety and fundamental rights. Guarantee that AI systems respect safety, ethical principles, and fundamental rights.
- Reduce administrative and financial burdens. Lessen the administrative and financial challenges for businesses, particularly small and medium-sized enterprises (SMEs).
- Encourage AI investment. Promote AI investment, enhance governance, and create a unified EU market for AI.
- Protect democracy. Safeguard democratic processes, public services, the rule of law, and environmental well-being.
- Position Europe as a leader in AI. Establish Europe as a global standard-setter in AI development and regulation.
SB 1047
California SB 1047 focuses on promoting ethical and innovative AI while ensuring public trust. Its goals include:
- Establish ethical guidelines for responsible AI development.
- Encourage innovation through partnerships and research.
- Prepare the workforce for AI careers with training programs.
- Address bias in AI systems to ensure fairness.
- Use AI in state operations to improve efficiency and services.
- Implement accountability measures for oversight and trust.
Application of Standard
EU AI Act
The EU AI Act applies to all providers and deployers of AI systems within the EU, and even to those outside the EU if their AI systems impact the EU market. It covers a wide range of AI applications across risk levels, from low-risk to high-risk systems.
- AI Providers. Any organization that supplies AI systems, no matter where they’re based.
- Deployers within the EU. Companies or individuals deploying AI systems inside the EU.
- Non-EU Providers and Deployers. Those outside the EU who provide or deploy AI systems intended for use within the EU.
- Importers, Distributors, and Manufacturers. Anyone bringing AI systems into the EU market, including importers, distributors, and manufacturers.
Essentially, if an AI system is used, sold, or even designed to operate in the EU, it falls under the Act’s regulatory reach.
SB 1047
SB 1047 primarily targets AI systems developed or deployed in California, with a focus on models that pose a high potential for “critical harm” to users and society. It applies mostly to large-scale AI developers or those with significant computational resources.
Specifically, these are models that:
- Exceed $100 million in development costs and are trained with computing power over 10^26 FLOPs (floating-point operations), or
- Are derivatives of such models, costing over $10 million to fine-tune and requiring computing power greater than 3 x 10^25 FLOPs.
Although current AI models are just under these thresholds, new models will likely qualify. The bill also covers “derivatives,” or copies, of these large models, even if slightly modified.
SB 1047 mainly impacts companies that develop these powerful AI models or provide the computing power to train them. Companies that simply use these models aren’t subject to the same regulations.
Also, companies that run large data centers (computing clusters) must follow certain new rules if their clients use enough resources to train a covered model.
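As a rough sketch, the coverage thresholds described above can be expressed as a simple check. Note that the `Model` class and `is_covered` function below are illustrative only and do not appear in the bill text; the dollar and FLOP figures come from the thresholds summarized in this section.

```python
# Hypothetical sketch of SB 1047's "covered model" thresholds as summarized
# above. Names are illustrative, not taken from the bill.
from dataclasses import dataclass

COVERED_COST_USD = 100_000_000    # >$100M development cost
COVERED_FLOPS = 1e26              # >10^26 FLOPs of training compute
DERIVATIVE_COST_USD = 10_000_000  # >$10M fine-tuning cost (derivatives)
DERIVATIVE_FLOPS = 3e25           # >3 x 10^25 FLOPs of fine-tuning compute

@dataclass
class Model:
    dev_cost_usd: float
    training_flops: float
    is_derivative: bool = False

def is_covered(model: Model) -> bool:
    """Return True if the model would fall within SB 1047's scope."""
    if model.is_derivative:
        return (model.dev_cost_usd > DERIVATIVE_COST_USD
                and model.training_flops > DERIVATIVE_FLOPS)
    return (model.dev_cost_usd > COVERED_COST_USD
            and model.training_flops > COVERED_FLOPS)

# A frontier-scale model exceeding both thresholds would be covered:
print(is_covered(Model(dev_cost_usd=1.5e8, training_flops=2e26)))  # True
# A smaller model below the compute threshold would not:
print(is_covered(Model(dev_cost_usd=2e7, training_flops=1e24)))    # False
```

This mirrors the bill's two-pronged structure: base models and their fine-tuned derivatives have separate, lower thresholds, so a relatively cheap fine-tune of a covered model can itself be covered.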
Process of Compliance
EU AI Act
The EU AI Act imposes strict compliance processes and requirements. This system makes the Act a notable framework for businesses looking into developing AI.
The risk classification outlined in the Act can help enterprises think about the AI products they use and understand their associated risks. Here are the main steps to get compliant:
- Identify AI Systems. List all AI technologies used in your business, noting high-impact areas (e.g., customer data, hiring).
- Understand Risk Classifications. Classify each AI system’s risk level based on EU AI Act guidelines, and prioritize high-risk systems for compliance checks.
- Assess Compliance Needs. Determine your role (provider, deployer, etc.) and review the Act’s requirements for each category, especially for high-impact AI capabilities.
- Form an AI Governance Team. Assemble a cross-departmental team to manage AI compliance and other specific obligations.
- Document Thoroughly. Maintain detailed technical records for all high-risk AI systems.
- Get Expert Help. Consider an AI data governance platform and expert support to streamline compliance efforts.
SB 1047
Below are the possible steps developers would need to take if the bill were to pass and take effect.
- Conduct Risk Assessment. Evaluate each AI model for potential critical harm before release. Document the assessment process and reasons for risk determinations.
- Implement Safety Protocols. Add safeguards such as kill switches to manage risks, and set up clear procedures for handling serious incidents.
- Ensure Transparency. Keep detailed records on AI development, testing, and deployment.
- Report Incidents. Report safety incidents to the California Attorney General within 72 hours. Define what qualifies as a “safety incident” based on SB 1047’s criteria.
- Schedule Third-Party Audits. Undergo regular independent audits to verify SB 1047 compliance and publish redacted audit reports for public accountability.
- Address Ethical Concerns. Integrate ethical principles into AI development to reduce bias and promote fairness.
Both SB 1047 and the EU AI Act emphasize the critical need for ethical and responsible AI development, aligning closely with the principles outlined in the NIST AI Risk Management Framework (NIST AI RMF). This framework serves as a leading standard in the U.S. for managing AI risks, focusing on trustworthiness, transparency, and security throughout the AI lifecycle.
IS Partners is uniquely positioned to help organizations implement the NIST AI RMF effectively. Our expertise in risk assessments, regulatory compliance, and governance ensures a seamless integration of the framework into your processes.
Contact us today to ensure your AI initiatives meet the highest standards of responsibility and compliance.
Risk and Safety Approach
EU AI Act
The EU AI Act categorizes AI systems based on their risk level, with four main classifications. High-risk systems are subject to stricter regulations, especially in healthcare, transportation, and recruitment. The categories are:
- Unacceptable Risk. This highest level covers AI applications that threaten EU values and fundamental rights. Such systems will be prohibited in the EU.
- High-Risk AI Systems. These are tightly regulated and include safety components in regulated products (like medical devices) and stand-alone systems in sectors like healthcare and law enforcement.
- Limited Risk. AI systems that may manipulate or deceive users. These require transparency, and users must be informed when interacting with them.
- Minimal Risk. These systems have no significant impact and do not require mandatory regulations, but companies are encouraged to follow principles like human oversight and non-discrimination.
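The four tiers above can be summarized as a simple lookup from risk level to headline obligation. This is a paraphrase of the article’s summary for illustration, not the Act’s legal text, and the function name is hypothetical.

```python
# Illustrative mapping of the EU AI Act's four risk tiers to their headline
# obligations, paraphrasing the summary above (not the Act's legal text).
RISK_TIERS = {
    "unacceptable": "prohibited in the EU",
    "high": "tightly regulated: documentation, oversight, compliance checks",
    "limited": "transparency required: users must know they face an AI system",
    "minimal": "no mandatory rules; voluntary codes of practice encouraged",
}

def obligations_for(tier: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]

print(obligations_for("unacceptable"))  # prohibited in the EU
```

The point of the tiered structure is that obligations scale with risk: a minimal-risk chatbot and a high-risk medical device classifier face very different compliance burdens under the same law.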
SB 1047
SB 1047 focuses on regulating large-scale AI models, specifically those that cost over $100 million to develop and require immense computational power (over 10^26 FLOPs).
The bill is designed to prevent “critical harms,” so by definition it targets high-risk systems, including the potential use of AI to create weapons or launch large-scale cyberattacks. The goal is clear: ensure that powerful AI systems are developed with proper safety measures in place to avoid catastrophic consequences.
Penalties for Violations and Breaches
EU AI Act
The EU AI Act is much stricter than California’s AI bill, with clearly defined repercussions for non-compliance.
If a company breaches the Act, fines can reach up to €35 million, or 7% of their global annual turnover, whichever is higher.
SB 1047
If a violation of SB 1047 occurs, enforcement falls solely to the California Attorney General, as the bill provides no private right of action. The Attorney General can take legal action where AI violations result in serious consequences, such as death, bodily harm, theft, property damage, or threats to public safety.
The Attorney General can pursue civil penalties and monetary damages (including punitive ones) and even seek injunctive or declaratory relief.
The fines for violations are calculated based on the cost of computing power used to train the AI model, with specific thresholds in place. Fines can reach up to 10% of the compute resource cost, with penalties determined by the harm caused by the AI model’s deployment.
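As a back-of-the-envelope illustration of the penalty ceiling described above, a fine capped at 10% of training compute cost works out as follows. The function name and the example figures are hypothetical; the 10% rate is the cap cited in this article.

```python
# Rough sketch of SB 1047's penalty ceiling as described above: fines
# capped at a percentage of the compute cost used to train the model.
# Function name and example figures are illustrative.
def max_penalty(compute_cost_usd: float, rate: float = 0.10) -> float:
    """Ceiling on fines: rate (default 10%) times training compute cost."""
    return compute_cost_usd * rate

# For a model whose training compute cost $100M, the ceiling is $10M:
print(max_penalty(100_000_000))  # 10000000.0
```

Because the cap scales with compute spend, the largest fines would fall on exactly the frontier-scale models the bill targets, rather than being a flat amount applied to all developers.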
Key Government Concerns About California’s SB 1047 vs. the EU AI Act
On September 29, 2024, Governor Newsom vetoed SB 1047, returning it to the legislature without his signature. His primary concerns included:
- Lack of Empirical Analysis – The bill was not backed by sufficient data on AI systems and their real-world capabilities.
- Regulation Based on Cost & Size – Instead of focusing on actual risks, the bill targeted AI models based on development costs and computational power.
- Impact on AI Innovation – Critics argued that restrictive regulations could stifle innovation and slow AI advancements.
- Threat to Open-Source Development – The bill’s shutdown requirements raised concerns that it could discourage open-source AI models, making developers hesitant to build on them.
The debate over SB 1047 reflects a deeper divide in AI regulation strategies. While some fear it overreaches and stifles innovation, others, including AI company Anthropic, argue it is a necessary safeguard against AI misuse, particularly in biodefense and cybersecurity.
In contrast, the EU AI Act takes a broader, risk-based approach, applying to all AI providers and deployers affecting the EU market. Instead of blanket shutdown requirements, it categorizes AI risk levels, allowing for stricter oversight where needed while still fostering innovation, data privacy, and accountability.
How Can IS Partners Help Future-proof Your Business with AI Compliance?
As AI regulations evolve, businesses must navigate increasingly complex compliance requirements. The EU AI Act enforces a risk-based classification, demanding transparency and accountability at every level. Meanwhile, California’s SB 1047, despite being vetoed, introduced aggressive safeguards for high-powered AI models, reflecting growing concerns over AI safety and accountability. These trends signal a future where AI governance will only tighten, making proactive compliance essential.
IS Partners specializes in AI risk management and regulatory compliance, ensuring your AI systems align with emerging laws and industry standards. Whether you need to comply with NIST AI RMF, HITRUST AI RMF, or the EU AI Act, our expertise enables you to:
- Conduct AI risk assessments to identify potential compliance gaps.
- Develop governance frameworks to align with evolving regulations.
- Automate compliance documentation to streamline audits and regulatory reviews.
What Should You Do Next?
To stay compliant, you should follow these steps:
Assess Your AI Risk Exposure. Identify if your AI models fall under high-risk categories in the EU AI Act or upcoming U.S. regulations.
Strengthen Compliance Measures. Implement governance frameworks that integrate best practices from NIST AI RMF and other global standards.
Collaborate with Trusted Auditors. Stay ahead of new frameworks and regulations by utilizing IS Partners’ expertise in AI and cybersecurity compliance.
AI regulations are tightening fast—businesses that act now will avoid costly fines, reputational damage, and operational setbacks. Partner with IS Partners today to build a resilient, compliant, and future-proof AI strategy. Contact us now to get started!