Blog · Credibility Rating: 2/5
Mixed (2): Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.
Rating inherited from publication venue: Substack
SB 1047 was a California state bill (2024) that sparked significant debate in the AI community; this guide by Zvi Mowshowitz (a prominent AI safety blogger) offers detailed analysis of its provisions and implications for AI governance.
Metadata
Importance: 65/100 · blog post · analysis
Summary
Zvi Mowshowitz provides a comprehensive guide and analysis of California's SB 1047, a landmark AI safety bill that would impose safety requirements on large AI model developers. The post examines the bill's provisions, likely impacts, and the debate surrounding it in the AI safety and tech communities.
Key Points
- SB 1047 proposed mandatory safety evaluations and documentation requirements for AI models above certain compute thresholds in California.
- The bill targeted frontier AI developers, requiring them to implement safety protocols and demonstrate compliance before deployment.
- Zvi analyzes both the potential benefits of the legislation for AI safety and the concerns raised by critics in the tech industry.
- The guide covers key provisions including hazardous capability evaluations, incident reporting, and developer liability frameworks.
- SB 1047 became a major flashpoint in debates about government regulation of frontier AI development.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | Policy | 66.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 94 KB
Guide to SB 1047 - by Zvi Mowshowitz
Don't Worry About the Vase
Zvi Mowshowitz · Aug 20, 2024
We now likely know the final form of California’s SB 1047.
There have been many changes to the bill as it worked its way to this point.
Many changes, including some that were just announced, I see as strict improvements.
Anthropic was behind many of the last set of amendments at the Appropriations Committee. In keeping with their "Support if Amended" letter, there are a few big compromises that weaken the upside protections of the bill somewhat in order to address objections and potential downsides.
The primary goal of this post is to answer the question: What would SB 1047 do?
I offer two versions: Short and long.
The short version summarizes what the bill does, at the cost of being a bit lossy.
The long version is based on a full RTFB: I am reading the entire bill, once again.
In between those two I will summarize the recent changes to the bill, and provide some practical ways to understand what the bill does.
After, I will address various arguments and objections, reasonable and otherwise.
My conclusion: This is by far the best light-touch bill we are ever going to get.
Short Version (tl;dr): What Does SB 1047 Do in Practical Terms?
This section is intentionally simplified, but in practical terms I believe this covers the parts that matter. For full details see later sections.
First, I will echo the One Thing To Know.
If you do not train a model that requires $100 million or more in compute, do not fine-tune such an expensive model using $10 million or more of your own additional compute, and do not operate and rent out a very large compute cluster?
Then this law does not apply to you, at all.
This cannot later be changed without passing another law.
(There is a tiny exception: Some whistleblower protections still apply. That’s it.)
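The applicability rule above is a simple decision procedure, which can be sketched as pseudocode. This is an illustrative reading of the summary, not the bill's statutory text; the function and parameter names are hypothetical, and the dollar figures stand in for the bill's actual compute-cost definitions.

```python
# Illustrative sketch of the SB 1047 applicability test as summarized above.
# Names and the large-cluster clause are simplifications, not statutory language.

TRAIN_COST_THRESHOLD = 100_000_000  # $100M or more in training compute
FINE_TUNE_THRESHOLD = 10_000_000    # $10M or more of your own fine-tuning compute


def bill_applies(train_cost: float = 0.0,
                 base_model_cost: float = 0.0,
                 fine_tune_cost: float = 0.0,
                 operates_large_cluster: bool = False) -> bool:
    """Return True if the bill's main obligations would apply, per the summary."""
    trained_covered_model = train_cost >= TRAIN_COST_THRESHOLD
    # Fine-tuning is only covered when the base model itself was a $100M+ model
    # AND the fine-tune uses $10M+ of additional compute.
    fine_tuned_covered_model = (base_model_cost >= TRAIN_COST_THRESHOLD
                                and fine_tune_cost >= FINE_TUNE_THRESHOLD)
    return trained_covered_model or fine_tuned_covered_model or operates_large_cluster
```

On this reading, a $5 million training run is entirely out of scope, while a $150 million run, or a $20 million fine-tune of a $100 million base model, is in scope.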
Also, the required standard is now reasonable care, the default standard in common law. No one ever has to ‘prove’ anything, nor need they fully prevent all harms.
With that out of the way, here is what the bill does in practical terms.
IF AND ONLY IF you wish to train a model using $100 million or more in compute (including your fine-tuning costs):
You must create a reasonable safety and security plan (SSP) such that your model does not pose an unreasonable risk of causing or materially enabling critical harm: mass casualties or incidents causing $500 million or more in damages.
That SSP must explain what you will do, how you will do it, and why. It must have objective evaluation criteria for determining compliance. It must include cybersecurity protocols to prevent the
... (truncated, 94 KB total)
Resource ID: 90358335122f2a05 | Stable ID: sid_Dc4ykJ8XJd