Back
A Heated California Debate Offers Lessons for AI Safety Governance
Web Credibility Rating
4/5
High (4). High quality: established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Carnegie Endowment
Relevant to understanding how AI safety regulation is contested at the state level in the US; the SB 1047 veto is a key case study in the political economy of AI governance circa 2024.
Metadata
Importance: 52/100 · opinion piece · commentary
Summary
This Carnegie Endowment commentary analyzes California's SB 1047, a bipartisan AI safety bill that passed the legislature but was vetoed by Governor Newsom in September 2024. It examines the divisions the bill exposed within the AI community and extracts lessons for future AI safety governance efforts at the subnational and national level.
Key Points
- California's SB 1047 aimed to mandate safety testing for frontier AI models before release, addressing risks such as weaponization for bioweapons or attacks on critical infrastructure.
- Governor Newsom vetoed the bill on September 29, 2024, citing the need for a different approach while affirming its safety objectives and promising new AI guardrail initiatives.
- The debate exposed significant rifts among AI researchers, tech companies, and policymakers over the appropriate scope of government regulation.
- The bill's veto offers lessons for proponents of AI safety regulation on how to tailor future legislative efforts more effectively.
- The episode highlights the challenges of subnational AI governance and how state-level debates are watched closely by global policymakers.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| US State AI Legislation Landscape | Analysis | 70.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 13 KB
A Heated California Debate Offers Lessons for AI Safety Governance | Carnegie Endowment for International Peace
{
"authors": [
"Scott Kohler",
"Ian Klaus"
],
"type": "commentary",
"centerAffiliationAll": "dc",
"centers": [
"Carnegie Endowment for International Peace"
],
"collections": [
"Artificial Intelligence",
"Emerging AI Policy",
"Violence and Conflict",
"Tech in Context"
],
"englishNewsletterAll": "ctw",
"nonEnglishNewsletterAll": "",
"primaryCenter": "Carnegie Endowment for International Peace",
"programAffiliation": "CC",
"programs": [
"Carnegie California"
],
"projects": [],
"regions": [
"United States"
],
"topics": [
"AI",
"Technology",
"Subnational Affairs"
]
}
Photo by trekandshoot/iStock
Commentary A Heated California Debate Offers Lessons for AI Safety Governance
The bill exposed divisions within the AI community, but proponents of safety regulation can heed the lessons of SB 1047 and tailor their future efforts accordingly.
By Scott Kohler and Ian Klaus. Published on Oct 8, 2024. Program:
Carnegie California
Carnegie California links developments in California and the West Coast with national and global conversations around technology, democracy, and trans-Pacific relationships. At a distance from national capitals, and located in one of the world’s great experiments in pluralist democracy, Carnegie California engages a wide array of stakeholders as partners in its research and policy engagement.
In late August, the California legislature managed a feat that has eluded the U.S. Congress: passing a bipartisan bill designed to ensure the safe development of advanced artificial intelligence (AI) models. That legislation, Senate Bill (SB) 1047, aimed to regulate frontier technologies emerging from an industry closely tied to California that is now raising hundreds of billions of dollars in investment and promising to reshape work, health care, national security, and even routine tasks of daily life.
On September 29, Governor Gavin Newsom vetoed the bill. His decision, which followed a pitched debate exposing rifts among AI researchers, technology companies, and policymakers, was tracked by leaders around the world. In his veto message, Newsom affirmed his support for the bill's safety objectives, announced a new effort to craft guardrails for AI deployment, and committed to continue working with the legislature, but he ultimately concluded that a different approach was needed.
The problem the bill sought to address, at least in principle, is straightforward: the upcoming generation of frontier models could benefit millions of people. However, they could also risk serious harm to California’s 40 million residents and people around the world. For example, there are worries they could be weaponized to attack critical infrastructure or create biological or cyber weapons. Many companies have voluntarily agreed to test their models before release to reduce these ris
... (truncated, 13 KB total)
Resource ID:
61d484269e6dbd8c | Stable ID: sid_C3yNXTpzWd