Anthropic Is Endorsing SB 53
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Anthropic
Anthropic's public endorsement of California SB 53 is relevant to AI governance discussions as a case study in how leading frontier labs are engaging with state-level safety regulation and formalizing voluntary safety commitments into legal obligations.
Metadata
Summary
Anthropic announces its endorsement of California's SB 53, a bill requiring frontier AI developers to publish safety frameworks, release transparency reports, report critical incidents, and provide whistleblower protections. The bill formalizes practices Anthropic and other major labs already follow, focusing on catastrophic risk disclosure rather than prescriptive technical mandates that characterized the vetoed SB 1047.
Key Points
- SB 53 requires frontier AI companies to develop and publish safety frameworks describing how they manage catastrophic risks, including mass casualty or substantial monetary damage scenarios.
- Companies must release public transparency reports and report critical safety incidents to the state within 15 days of occurrence.
- The bill includes clear whistleblower protections for violations and for substantial dangers to public health/safety from catastrophic risk.
- Unlike SB 1047, SB 53 uses disclosure requirements rather than prescriptive technical mandates, reflecting a 'trust but verify' approach endorsed by California's Joint Policy Working Group.
- Anthropic argues federal-level AI safety regulation is preferable but supports SB 53 given that the pace of AI advancement is outstripping federal consensus.
Cached Content Preview
Sep 8, 2025 — Anthropic is endorsing SB 53, the California bill that governs powerful AI systems built by frontier AI developers like Anthropic. We've long advocated for thoughtful AI regulation, and our support for this bill comes after careful consideration of the lessons learned from California's previous attempt at AI regulation (SB 1047). While we believe that frontier AI safety is best addressed at the federal level instead of through a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington.
Governor Newsom assembled the Joint California Policy Working Group, a group of academics and industry experts, to provide recommendations on AI governance. The working group endorsed an approach of 'trust but verify', and Senator Scott Wiener's SB 53 implements this principle through disclosure requirements rather than the prescriptive technical mandates that plagued last year's efforts.
What SB 53 achieves
SB 53 would require large companies developing the most powerful AI systems to:
- Develop and publish safety frameworks, which describe how they manage, assess, and mitigate catastrophic risks — risks that could foreseeably and materially contribute to a mass casualty incident or substantial monetary damages.
- Release public transparency reports summarizing their catastrophic risk assessments and the steps taken to fulfill their respective frameworks before deploying powerful new models.
- Report critical safety incidents to the state within 15 days, and confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.
- Provide clear whistleblower protections that cover violations of these requirements as well as specific and substantial dangers to public health/safety from catastrophic risk.
- Be publicly accountable for the commitments made in their frameworks or face monetary penalties.
These requirements would formalize practices that Anthropic and many other frontier AI companies already follow. At Anthropic, we publish our Responsible Scaling Policy, detailing how we evaluate and mitigate risks as our models become more capable. We release comprehensive system cards that document model capabilities and limitations. Other frontier labs (Google DeepMind, OpenAI, Microsoft) have adopted similar approaches while vigorously competing at the frontier. Now all covered models will be legally held to this standard. The bill also appropriately focuses on large companies developing the most powerful AI systems, while providing exemptions for startups and smaller companies that are less likely to develop powerful models and should not bear unnecessary regulatory burdens.
SB 53's transparency requirements will have an important impact on frontier AI safety. Without them, labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure
... (truncated, 5 KB total)