Longterm Wiki

Meta, OpenAI, and House Speaker Nancy Pelosi opposed the bill


Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: CSET Georgetown

Covers a pivotal 2024 US AI policy moment: the defeat of California's most ambitious AI safety legislation, relevant to understanding lobbying dynamics and the political landscape around AI governance.

Metadata

Importance: 62/100 · news article · news

Summary

In September 2024, California Governor Gavin Newsom vetoed SB 1047, a landmark AI safety bill that would have imposed extensive safety protocols on large AI systems. Newsom's stated concern was the bill's narrow focus on large models while ignoring risks from smaller systems, though lobbying by major tech firms and opposition from congressional leaders also played a role. Newsom did sign more targeted AI bills covering training data disclosure and AI-generated content watermarking.

Key Points

  • SB 1047 would have created some of the most extensive AI safety protocols in the US, but was vetoed by Governor Newsom in September 2024.
  • Newsom's stated rationale was that the bill focused too narrowly on large models and ignored risks from smaller or context-specific AI deployments.
  • Heavy lobbying by tech and VC firms, plus opposition from OpenAI, Meta, and House Speaker Pelosi, were cited as key factors influencing the veto.
  • The bill had evolved significantly post-introduction, gaining support from Anthropic and from Elon Musk (CEO of Tesla, SpaceX, and xAI), but remained divisive across the AI industry.
  • Newsom signed narrower AI bills requiring training data disclosure (AB-2013) and AI-generated content watermarking (SB-942) as alternatives.

Cited by 1 page

Page                             Type      Quality
Failed and Stalled AI Proposals  Analysis  63.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 4 KB
Governor Newsom Vetoes Sweeping AI Regulation, SB 1047 | Center for Security and Emerging Technology 
 Newsletter Bytes 

 Governor Newsom Vetoes Sweeping AI Regulation, SB 1047

 
 
 Alex Friedland

 
 October 17, 2024 
California Governor Gavin Newsom vetoed a closely watched AI regulation, SB 1047, that would have implemented some of the country's most extensive safety protocols for powerful AI systems. As we covered last month, California's status as home to many of the world's top AI developers meant the bill's progress was closely watched and hotly contested.

In a statement, Newsom wrote that while the bill was “well-intentioned,” it was too focused on the largest models and ignored the risks posed by smaller models or systems deployed in particularly risky environments. But observers also pointed to robust lobbying efforts by tech and venture capital firms, as well as opposition from prominent members of California’s congressional delegation, as key factors in Newsom’s decision.

The bill had undergone significant changes since its introduction in response to industry feedback, earning it the support of some major AI developers like Anthropic, as well as Tesla, SpaceX, and xAI CEO Elon Musk. But others, like San Francisco-based OpenAI, raised concerns about the bill’s impact on innovation and argued that AI regulation was best left to the federal government.

While Newsom vetoed SB 1047, he did sign a number of more targeted AI bills, including AB-2013, which will require generative AI companies to disclose information about their training data, and SB-942, a law that will require watermarking for AI-generated content.

More: Senator Wiener Responds to Governor Newsom Vetoing Landmark AI Bill | Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians

 

This newsletter excerpt is from the October 17, 2024, edition of policy.ai, CSET’s newsletter on artificial intelligence, emerging technology, and security policy, written by Alex Friedland. Other stories from this edition include:

 
  • DOD Announces Replicator 2 — Counter-Drone Defenses the Focus
  • OpenAI Raises $6.6 Billion — But Departures Point to Difficult Transition
  • Commerce Considering Country-Specific Chip Export Caps
  • FTC Cracks Down on AI Over-Promising
  • OMB Issues Guidance on Responsible AI Acquisition

 

Read the full newsletter and subscribe to receive every edition of policy.ai.

... (truncated, 4 KB total)
Resource ID: 2408076a24b70f71 | Stable ID: sid_AXvtIECFyn