blog
Credibility Rating: 2/5 (Mixed)
Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.
Rating inherited from publication venue: Substack
Written by Zvi Mowshowitz in September 2024, this post provides analysis of a significant U.S. state-level AI governance event, relevant to understanding the political landscape of AI safety regulation.
Metadata
Importance: 58/100 · opinion piece · commentary
Summary
Zvi Mowshowitz analyzes California Governor Gavin Newsom's veto of SB 1047, a landmark AI safety bill that would have imposed safety requirements on large AI models. The post examines the reasoning behind the veto, the political dynamics involved, and what the outcome means for AI governance efforts more broadly.
Key Points
- Governor Newsom vetoed SB 1047, citing concerns it would stifle AI innovation and harm California's tech industry competitiveness.
- The bill would have required safety evaluations and incident reporting for large frontier AI models trained above a compute threshold.
- Zvi evaluates the arguments for and against the veto, assessing whether the bill's safety provisions were worth the potential downsides.
- The veto reflects ongoing tension between AI safety advocates pushing for regulatory oversight and industry interests opposing mandatory compliance burdens.
- The outcome has implications for future state and federal AI policy efforts, signaling political resistance to compute-threshold-based regulation.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | Policy | 66.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 10, 2026 · 59 KB
Newsom Vetoes SB 1047 - by Zvi Mowshowitz
Don't Worry About the Vase
Zvi Mowshowitz · Oct 01, 2024
It’s over, until such a future time as either we are so back, or it is over for humanity.
Gavin Newsom has vetoed SB 1047.
Newsom’s Message In Full
Quoted text is him, comments are mine.
To the Members of the California State Senate: I am returning Senate Bill 1047 without my signature.
This bill would require developers of large artificial intelligence (AI) models, and those providing the computing power to train such models, to put certain safeguards and policies in place to prevent catastrophic harm. The bill would also establish the Board of Frontier Models - a state entity - to oversee the development of these models.
It is worth pointing out here that mostly the ‘certain safeguards and policies’ amounted to ‘have a policy at all, tell us what it is and then follow it.’ But there were some specific things that were required, so Newsom is indeed technically correct here.
California is home to 32 of the world's 50 leading AI companies, pioneers in one of the most significant technological advances in modern history. We lead in this space because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom. As stewards and innovators of the future, I take seriously the responsibility to regulate this industry.
Cue the laugh track. No, that’s not why California leads, but sure, whatever.
This year, the Legislature sent me several thoughtful proposals to regulate AI companies in response to current, rapidly evolving risks - including threats to our democratic process, the spread of misinformation and deepfakes, risks to online privacy, threats to critical infrastructure, and disruptions in the workforce. These bills, and actions by my Administration, are guided by principles of accountability, fairness, and transparency of AI systems and deployment of AI technology in California.
He signed a bunch of other AI bills. It is quite the rhetorical move to characterize those bills as ‘thoughtful’ in the context of SB 1047, which (like or hate its consequences) was by far the most thoughtful bill, was centrally a transparency bill, and was clearly an accountability bill. What you call ‘fair’ is up to you I guess.
SB 1047 magnified the conversation about threats that could emerge from the deployment of AI. Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system's actual risks regardless of these factors. This global discuss
... (truncated, 59 KB total)
Resource ID: f869b223038f1cba | Stable ID: sid_Q55D7Zoko1