TechCrunch: California's legislature just passed AI bill SB 1047
Web Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: TechCrunch
SB 1047 was a major 2024 state-level AI safety bill in California; understanding the debate around it is important context for AI governance discussions, even though Governor Newsom ultimately vetoed the bill in September 2024.
Metadata
Importance: 58/100 · news article · news
Summary
TechCrunch covers the California legislature's passage of SB 1047, a landmark AI safety bill targeting large frontier models. The bill imposes safety obligations on developers of powerful AI systems, while major tech companies and industry groups argue it will stifle innovation and push AI development out of California.
Key Points
- SB 1047 passed the California legislature and requires developers of large AI models to implement safety measures and conduct risk assessments before deployment.
- The bill targets models trained above a compute threshold, aiming to prevent catastrophic or "critical harms" from frontier AI systems.
- Silicon Valley critics, including major tech companies and some AI researchers, warn the bill is technically flawed and could harm California's AI ecosystem.
- Proponents argue the bill is a necessary precautionary measure given the potential risks of increasingly capable AI systems.
- The bill ultimately faced Governor Newsom's veto decision, making this a key moment in U.S. state-level AI governance debates.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | Policy | 66.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 17 KB
California’s legislature just passed AI bill SB 1047; here’s why some hope the governor won’t sign it | TechCrunch
Image Credits: Bryce Durbin
Government & Policy
Maxwell Zeff
1:26 PM PDT · August 30, 2024
Update: California’s Appropriations Committee passed SB 1047 with significant amendments on Thursday, August 15. You can read about them here.
Outside of sci-fi films, there’s no precedent for AI systems killing people or being used in massive cyberattacks. However, some lawmakers want to implement safeguards before bad actors make that dystopian future a reality. A California bill, known as SB 1047, tries to stop real-world disasters caused by AI systems before they happen. It passed the state’s senate in August, and now awaits approval or a veto from California Governor Gavin Newsom.
While this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players large and small, including venture capitalists, big tech trade groups, researchers and startup founders. A lot of AI bills are flying around the country right now, but California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most controversial. Here’s why.
What would SB 1047 do?
SB 1047 tries to prevent large AI models from being used to cause “critical harms” against humanity.
The bill gives examples of “critical harms” as a bad actor using an AI model to create a weapon that results in mass casualties, or instructing one to orchestrate a cyberattack causing more than $500 million in damages (for comparison, the CrowdStrike outage is estimated to have caused upwards of $5 billion). The bill makes developers — that is, the companies that develop the models — liable for implementing sufficient safety protocols to prevent outcomes like these.
What models and companies are subject to these rules?
SB 1047’s rules would only apply to the world’s largest AI models: ones that cost at least $100 million and use 10^26 FLOPS (floating point operations, a way of measuring computation) during training. That’s a huge amount of compute, though OpenAI CEO Sam Al
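The coverage test described above can be sketched as a simple check. The two threshold values ($100 million training cost and 10^26 FLOPS, applied jointly) come from the article; the function name and the example figures are illustrative assumptions, not taken from the bill text:

```python
# Illustrative sketch of SB 1047's coverage thresholds as the article
# describes them: a model falls under the rules only if its training run
# used at least 1e26 FLOPS AND cost at least $100 million.
# Function name and example figures are hypothetical.

SB1047_FLOP_THRESHOLD = 1e26          # floating point operations during training
SB1047_COST_THRESHOLD = 100_000_000   # training cost in USD

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model would meet SB 1047's size thresholds."""
    return (training_flops >= SB1047_FLOP_THRESHOLD
            and training_cost_usd >= SB1047_COST_THRESHOLD)

# Hypothetical figures: a frontier-scale run vs. a much smaller one.
print(is_covered_model(2e26, 300_000_000))  # True
print(is_covered_model(5e24, 20_000_000))   # False
```

Because both conditions must hold, a model that crosses the compute threshold on cheap hardware, or an expensive but smaller training run, would not be covered under this reading.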
... (truncated, 17 KB total)
Resource ID: 989ab2864e1f5ddb | Stable ID: sid_cgLoeyq2dm