Longterm Wiki

California AI bill becomes a lightning rod—for safety advocates and developers alike | Center for Security and Emerging Technology

web

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: CSET Georgetown

Relevant to wiki users tracking AI governance developments; SB 1047 was a high-profile 2024 California legislative effort that became a flashpoint in debates over how to regulate frontier AI models, ultimately vetoed by Governor Newsom in September 2024.

Metadata

Importance: 52/100 | news article | news

Summary

This CSET article examines California's SB 1047, a landmark AI safety bill that sparked intense debate between AI safety advocates who supported its liability and safety requirements and tech industry developers who opposed it as overly burdensome. The piece analyzes the competing arguments and political dynamics that made the bill highly controversial before Governor Newsom ultimately vetoed it.

Key Points

  • SB 1047 would have imposed safety requirements and liability on developers of large AI models trained above certain compute thresholds in California.
  • Safety advocates supported the bill as a necessary step toward accountability for frontier AI risks, while developers argued it would stifle innovation and drive companies out of California.
  • The bill created unusual political coalitions, dividing even the AI safety community over whether it was the right regulatory approach.
  • CSET provides neutral analysis of the stakeholder arguments, framing it as a test case for how jurisdictions might regulate frontier AI development.
  • The controversy highlighted fundamental tensions between precautionary AI governance and maintaining a permissive environment for AI development.

Cited by 1 page

Cached Content Preview

HTTP 200 | Fetched Apr 9, 2026 | 5 KB
 In The News 

 California AI bill becomes a lightning rod—for safety advocates and developers alike

 
 
 Bulletin of the Atomic Scientists

 
 June 17, 2024 
 
 In his op-ed featured in the Bulletin of the Atomic Scientists, Owen J. Daniels provides his expert analysis of California’s latest AI bill, SB 1047.

 Read the Op-Ed

 CSET Marshall Fellow Owen J. Daniels shared his expert analysis in an op-ed published by the Bulletin of the Atomic Scientists. In his piece, he discusses California’s latest AI bill, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (also known as SB 1047), which aims to regulate the development and deployment of advanced AI models to prevent misuse and ensure safety.

 
  • The bill targets large-scale AI models trained with significant computing power (>10^26 FLOPS) and high costs (>$100M).
  • It requires developers to implement safety measures and report incidents to prevent critical harms.
  • Supporters see it as a necessary step towards AI safety, while critics worry about stifling innovation.
  • The bill highlights the challenges of transitioning from voluntary to mandatory AI regulation.
  • It raises questions about balancing innovation with responsible AI development.
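The thresholds in the first bullet can be sketched as a simple predicate. This is an illustrative assumption, not language from the bill: the function name and the conjunctive reading of the two thresholds (both compute and cost must be exceeded) follow the summary above rather than the statutory text.

```python
# Hypothetical sketch (names and structure are illustrative, not from the
# bill text): checking whether a training run would exceed SB 1047's
# "covered model" thresholds as summarized above.
FLOP_THRESHOLD = 1e26         # >10^26 floating-point operations of training compute
COST_THRESHOLD = 100_000_000  # >$100M in training cost

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a training run exceeds both the compute and cost thresholds."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD

# A frontier-scale run well above both thresholds would be covered.
print(is_covered_model(3e26, 150_000_000))  # True
# A smaller run falls outside the bill's scope on both counts.
print(is_covered_model(5e25, 20_000_000))   # False
```

The point of the conjunctive check is that a cheap run at high compute, or an expensive run at low compute, would not qualify under this reading.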

 

 As AI continues to advance rapidly, how do we strike the right balance between fostering innovation and ensuring public safety?

 To read the full piece, visit the Bulletin of the Atomic Scientists.

 

 
 
 
 
 
 Author

 Owen Daniels

 Original Publisher

 Bulletin of the Atomic Scientists

 Originally Published

 June 17, 2024

 Topics

 Assessment

 Related Content

 
 
 
 
 
 
 
 
 Blog 
 Open Foundation Models: Implications of Contemporary Artificial Intelligence

 
 March 2024 
 
 This blog post assesses how different priorities can change the risk-benefit calculus of open foundation models, and provides divergent answers to the question of “given current AI capabilities, what might happen if the U.S. government… Read More 

 
 
 
 
 
 Reports 
 Skating to Where the Puck Is Going

 
 October 2023 
 
 AI capabilities are evolving quickly and pose novel—and likely significant—risks. In these rapidly changing conditions, how can policymakers effectively anticipate and manage risks from the most advanced and capable AI systems at the frontier of… Read More 

 
 
 
 
 
 Blog 
 Securing AI Makes for Safer AI

 
 July 2023 
 
 Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to…

... (truncated, 5 KB total)
Resource ID: a730f4a09ae55698 | Stable ID: sid_J0l0UwyaGt