Longterm Wiki

AI Safety Newsletter #40: California AI Legislation

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Center for AI Safety

Published by the Center for AI Safety (CAIS), this newsletter issue focuses on California's AI legislative efforts, relevant for tracking US state-level AI policy developments and their potential influence on broader AI governance.

Metadata

Importance: 52/100 · newsletter · news

Summary

This edition of the Center for AI Safety's newsletter covers California's AI legislation landscape, analyzing key bills and their implications for AI safety governance. It examines proposed regulations aimed at managing risks from advanced AI systems at the state level.

Key Points

  • Covers California legislative efforts to regulate AI safety, including analysis of specific bills and their provisions
  • Discusses the debate between AI safety advocates and industry stakeholders over the appropriate scope of AI regulation
  • Examines how state-level AI legislation could set precedents for broader national and international governance frameworks
  • Highlights the tension between enabling AI innovation and implementing safeguards against potential harms from advanced AI systems

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 14 KB
AI Safety Newsletter #40: California AI Legislation 

 Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety?

Corin Katzke, Julius Simonelli, Alexa Pan, and Dan Hendrycks · Aug 21, 2024

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

 SB 1047, the Most-Discussed California AI Legislation 

California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has garnered attention due to California's unique position in the tech landscape. If passed, SB 1047 would apply to all companies doing business in the state, potentially setting a precedent for AI governance more broadly.

This newsletter examines the current state of the bill, which has been amended several times in response to stakeholder feedback. We'll cover recent debates surrounding the bill, support from AI experts, opposition from the tech industry, and public opinion based on polling.

Introduced by State Senator Scott Wiener and cosponsored by the CAIS Action Fund, the bill aims to establish safety guardrails for the most powerful AI models. It mandates safety protocols, testing procedures, and reporting requirements for covered models. Specifically, it would require companies developing AI systems that cost over $100 million to develop and are trained on massive amounts of compute to implement comprehensive safety measures, conduct rigorous testing, and mitigate potential severe risks. The bill also includes new whistleblower protections.

A group of renowned AI experts has thrown their weight behind the bill. Earlier this month, Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell penned a letter expressing their strong support for SB 1047. They argue that the next generation of AI systems pose "severe risks" if "developed without sufficient care and oversight." Bengio told TIME, "I worry that technology companies will not solve these significant risks on their own while locked in their race for market share and profit maximization."

 However, SB 1047 faces opposition from some industry voices. Perhaps the most prominent critic of the bill has been venture capital firm Andreessen Horowitz (a16z). They argue th

... (truncated, 14 KB total)
Resource ID: b6ff47916871e464 | Stable ID: sid_gDEI2n6Qy5