Longterm Wiki

A New York Legislator Wants to Pick Up the Pieces of the Dead California AI Bill

web

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: MIT Technology Review

Relevant to tracking US state-level AI governance efforts; follows the high-profile defeat of California SB 1047 and signals continued legislative interest in mandatory AI safety requirements.

Metadata

Importance: 45/100 · news article · news

Summary

Following the veto of California's SB 1047 AI safety bill, a New York state legislator is introducing similar legislation that would impose safety requirements on developers of large AI models. The article covers the political and regulatory landscape around state-level AI governance efforts in the United States.

Key Points

  • California's SB 1047, which would have required safety testing and oversight for large AI models, was vetoed by Governor Newsom in 2024.
  • A New York legislator is drafting analogous legislation to apply similar AI safety mandates at the state level.
  • The effort reflects ongoing tension between AI industry opposition and advocates pushing for mandatory safety standards.
  • State-level AI regulation is gaining momentum as federal action on AI governance remains uncertain.
  • The bill would likely target frontier AI developers by imposing requirements tied to model compute thresholds.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 12 KB
A New York legislator wants to pick up the pieces of the dead California AI bill | MIT Technology Review 
The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. It’s called the RAISE Act, an acronym for “Responsible AI Safety and Education.”

Assemblymember Alex Bores hopes his bill, currently an unpublished draft that MIT Technology Review has seen and that remains subject to change, will address many of the concerns that blocked SB 1047 from passing into law.

SB 1047 was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant public support.

However, before it even landed on Governor Gavin Newsom’s desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like Nancy Pelosi and Zoe Lofgren. Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing support for the bill.

 
Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the national level, anywhere in the US, where the most powerful systems are developed.

Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models.

 
 The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause “critical harm”; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people. 

Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages in an act with limited human oversight that, if committed by a human, would constitute a crime requiring intent, recklessness, or gross negligence.

 The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to a model. The plan would also require testing of models to assess risks before and after training, as well as detailed descriptions of procedures to assess the risks associated with post-training modifications. For example, some current AI systems have safeguards that ca

... (truncated, 12 KB total)
Resource ID: 179b5aa56b02e44e | Stable ID: sid_PeHYAuo9kY