Longterm Wiki


web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Center for AI Safety

Published by the Center for AI Safety (CAIS), this newsletter issue covers the high-profile veto of California SB 1047 in late 2024, a pivotal moment in US AI governance debates relevant to anyone tracking AI safety policy developments.

Metadata

Importance: 62/100 · blog post · news

Summary

This edition of the CAIS AI Safety Newsletter covers California Governor Gavin Newsom's veto of SB 1047, a landmark AI safety bill that would have imposed safety requirements on large AI models. The newsletter likely analyzes the implications of the veto for AI governance and the broader AI safety policy landscape.

Key Points

  • Governor Newsom vetoed California's SB 1047, a significant AI safety bill targeting frontier AI model developers
  • The veto represents a major setback for state-level AI safety regulation in the US
  • SB 1047 would have required safety testing and kill-switch capabilities for large AI models above a compute threshold
  • The decision has broader implications for the trajectory of AI governance at state and federal levels
  • The newsletter contextualizes the veto within ongoing debates between AI safety advocates and industry opponents

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 14 KB
AI Safety Newsletter #42: Newsom Vetoes SB 1047 

AI Safety Newsletter #42: Newsom Vetoes SB 1047

 Plus, OpenAI’s o1, and AI Governance Summary

By Corin Katzke, Julius Simonelli, Alexa Pan, and 2 others · Oct 01, 2024

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

 Newsom Vetoes SB 1047 

On Sunday, Governor Newsom vetoed California’s Senate Bill 1047 (SB 1047), the most ambitious legislation to date aimed at regulating frontier AI models. The bill, introduced by Senator Scott Wiener and covered in a previous newsletter, would have required AI developers to test frontier models for hazardous capabilities and take steps to mitigate catastrophic risks. (CAIS Action Fund was a co-sponsor of SB 1047.)

Newsom states that SB 1047 is not comprehensive enough. In his letter to the California Senate, the governor argued that “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.” The bill requires testing for models that use large amounts of computing power, but he says “by focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security.”

Sponsors and opponents react to the veto. Senator Wiener released a statement calling the veto a “missed opportunity” for California to lead in tech regulation and stated that “we are all less safe as a result.” Statements from other sponsors of the bill can be found here. Meanwhile, opponents of the bill, such as venture capitalist Marc Andreessen, celebrated the veto. Major Newsom donors such as Reid Hoffman and Ron Conway, who have financial interests in AI companies, also celebrated it.

 OpenAI’s o1 

OpenAI recently launched o1, a series of AI models with advanced reasoning capabilities. In this story, we explore o1’s capabilities and their implications for scaling and safety. We also cover funding and governance updates at OpenAI.

o1 models are trained with reinforcement learning to perform complex reasoning. The models are trained to produce long, hidden chains of thought before responding to the user. This allows them to break down hard problems into simpler steps, notice and correct their own mistakes, and test different pr

... (truncated, 14 KB total)
Resource ID: 7f7da43577c5844e | Stable ID: sid_Sfm39PNslB