California SB 53
Quick Assessment
| Aspect | Details |
|---|---|
| Status | Signed into law September 29, 2025; effective January 1, 2026 [1] |
| Scope | Frontier AI models trained with >10²⁶ FLOPs by developers with >$500M annual revenue [2] |
| Key Requirements | Public safety frameworks, incident reporting (15 days standard / 24 hours imminent), whistleblower protections [3] |
| Enforcement | California Attorney General; civil penalties up to $1M per violation (up to $10M for repeated knowing violations) [4] |
| Innovation Support | CalCompute consortium for a public AI computing cluster [5] |
| Significance | First U.S. state law specifically targeting frontier AI safety and catastrophic risks [6] |
Overview
California Senate Bill 53 (SB 53), formally known as the Transparency in Frontier Artificial Intelligence Act (TFAIA), is the first U.S. state legislation to establish a comprehensive regulatory framework for frontier AI models. Signed by Governor Gavin Newsom on September 29, 2025, and taking effect January 1, 2026, the law requires large AI developers to publicly disclose their approaches to managing catastrophic risks—defined as foreseeable material risks involving 50 or more deaths, $1 billion or more in damages, expert-level assistance in creating chemical, biological, radiological, or nuclear weapons, autonomous criminal conduct or cyberattacks, or evasion of developer and user control. [7]
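To make these statutory thresholds easier to scan, the following minimal Python sketch paraphrases the quantitative floors and qualitative categories in the catastrophic-risk definition as summarized above. The names, the data structure, and the simple any-of test are illustrative assumptions, not the statutory text, which contains additional qualifiers (for example, foreseeability and materiality).

```python
from dataclasses import dataclass, field

# Quantitative floors from SB 53's catastrophic-risk definition (paraphrased, illustrative only).
DEATH_THRESHOLD = 50                   # 50 or more deaths
DAMAGE_THRESHOLD_USD = 1_000_000_000   # $1 billion or more in damages

# Qualitative categories named in the definition (paraphrased).
QUALITATIVE_CATEGORIES = {
    "cbrn_uplift",       # expert-level assistance in creating CBRN weapons
    "autonomous_crime",  # criminal conduct or cyberattacks without meaningful human review
    "loss_of_control",   # evasion of developer and user control
}

@dataclass
class ForeseeableRisk:
    """Hypothetical container for a risk scenario attributed to a frontier model."""
    expected_deaths: int = 0
    expected_damages_usd: float = 0.0
    categories: set = field(default_factory=set)

def is_catastrophic(risk: ForeseeableRisk) -> bool:
    # In this sketch, any one quantitative floor or qualitative category suffices.
    return (
        risk.expected_deaths >= DEATH_THRESHOLD
        or risk.expected_damages_usd >= DAMAGE_THRESHOLD_USD
        or bool(set(risk.categories) & QUALITATIVE_CATEGORIES)
    )

# Example: a scenario involving expert-level bioweapon uplift qualifies even with no modeled casualties.
assert is_catastrophic(ForeseeableRisk(categories={"cbrn_uplift"}))
```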
The legislation emerged from the failure of SB 1047, a more ambitious AI safety bill that Newsom vetoed in 2024. In response to that veto, Newsom convened the Joint California Policy Working Group on AI Frontier Models, whose June 2025 report directly informed SB 53’s structure. The final law represents what supporters describe as a pragmatic, evidence-based approach that balances innovation with safety oversight, while critics view it as making considerable concessions to industry concerns in order to secure passage. [8]
SB 53 applies specifically to “large frontier developers”—entities with annual gross revenues exceeding $500 million that train or have trained foundation models using more than 10²⁶ integer or floating-point operations. This threshold currently applies to approximately 5-8 companies, including OpenAI, Anthropic, Google DeepMind, Meta, and Microsoft. [9] The law establishes four primary mechanisms: mandatory public transparency frameworks, critical safety incident reporting to the California Office of Emergency Services, robust whistleblower protections with civil penalties for retaliation, and the creation of CalCompute, a public computing consortium to democratize access to AI infrastructure. [10]
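As a rough illustration of how the two scope conditions combine, here is a minimal Python sketch. The function name, the strict greater-than comparisons, and the simple AND logic are assumptions made for illustration; the statutory definitions contain additional nuances, such as how training compute is measured.

```python
# Illustrative sketch of SB 53's "large frontier developer" scope test (not legal advice).
FLOP_THRESHOLD = 1e26          # integer or floating-point operations used in training
REVENUE_THRESHOLD_USD = 500e6  # annual gross revenue

def is_large_frontier_developer(training_flops: float, annual_revenue_usd: float) -> bool:
    """Both conditions must hold for a developer to fall within SB 53's core scope."""
    return training_flops > FLOP_THRESHOLD and annual_revenue_usd > REVENUE_THRESHOLD_USD

# Example: a $2B-revenue developer training a 3e26-FLOP model is covered;
# the same training run by a $100M-revenue startup is not.
assert is_large_frontier_developer(3e26, 2e9)
assert not is_large_frontier_developer(3e26, 100e6)
```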
History and Legislative Development
The path to SB 53 began with the more ambitious SB 1047, authored by Senator Scott Wiener and passed by the California Legislature in 2024. That bill included stricter requirements such as pre-training safety protocols, third-party audits, “kill switch” capabilities, 72-hour incident reporting deadlines, and penalties of up to 30% of compute costs. However, Governor Newsom vetoed SB 1047, calling instead for an approach “informed by an empirical trajectory analysis of AI systems and capabilities.” [11]
In response to the veto, Newsom established the Joint California Policy Working Group on AI Frontier Models in early 2025. This expert group released its recommendations in June 2025, emphasizing whistleblower protections, alignment with leading safety practices, and structured transparency mechanisms. These recommendations became the foundation for SB 53. [12]
The legislative process for SB 53 reached a critical stage in early September 2025. After intense negotiations with industry stakeholders over definitions and disclosure requirements, Senator Wiener cut off negotiations on September 4 and turned to the governor’s office to finalize changes in the final hours before the legislative deadline. In the early morning hours of September 13, 2025—the final day of the legislative session—Wiener presented the revised measure to the Senate, which passed with a bipartisan vote of 29-8. [13]
Governor Newsom signed SB 53 into law on September 29, 2025. The bill received bipartisan support in the California Legislature and backing from AI companies (notably including Anthropic), researchers, civil society groups, and labor organizations. Key sponsoring organizations included Encode AI, Economic Security California Action, and Secure AI Project. [14]
Key Provisions and Requirements
Transparency Frameworks
Large frontier developers must publish on their websites a comprehensive safety and security framework that incorporates national standards (such as the NIST AI Risk Management Framework), international standards (such as ISO/IEC 42001), and industry-consensus best practices. These frameworks must detail how the developer identifies, assesses, and mitigates catastrophic risks, including governance structures, cybersecurity measures, and alignment strategies, and they must be reviewed and updated annually. [15]
Critical Safety Incident Reporting
The law establishes a reporting mechanism through the California Office of Emergency Services (OES) for both developers and the public to report critical safety incidents. Developers must report incidents within 15 days of discovery, or within 24 hours if the incident poses an imminent danger of death or serious injury. Critical safety incidents include unauthorized access to model weights, loss-of-control events causing harm, deceptive model behavior, and other occurrences that threaten life or serious harm. Beginning January 1, 2027, OES will publish annual anonymized reports summarizing these incidents. [16]
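The two reporting clocks can be expressed as a small helper. This is a minimal sketch under simplifying assumptions (a single discovery timestamp and a binary "imminent danger" flag); the function name and datetime handling are illustrative, not a compliance tool.

```python
from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime, imminent_danger: bool) -> datetime:
    """Latest time, in this sketch, to report a critical safety incident to the California OES:
    24 hours if the incident poses an imminent danger of death or serious injury,
    otherwise 15 days from discovery."""
    window = timedelta(hours=24) if imminent_danger else timedelta(days=15)
    return discovered_at + window

# Example: an incident discovered on February 1, 2026 with no imminent danger
# would need to be reported by February 16, 2026 under this reading.
print(reporting_deadline(datetime(2026, 2, 1), imminent_danger=False))  # 2026-02-16 00:00:00
```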
Whistleblower Protections
SB 53 requires developers to establish anonymous internal reporting processes for employees and contractors to disclose catastrophic risks or violations of the law. The legislation prohibits retaliation against whistleblowers and extends protections to contractors and freelancers who report concerns to the Attorney General or federal authorities. Successful whistleblower plaintiffs may recover attorney’s fees, and employers face civil penalties for retaliation, enforced by the California Attorney General. [17]
CalCompute Public Computing Cluster
The law establishes a 14-member consortium within the Government Operations Agency to develop a framework for CalCompute, a fully state-owned and hosted public cloud computing cluster with accompanying human expertise. The consortium includes representatives from the University of California, other academic institutions and national labs, labor organizations, ethicists and consumer rights advocates, AI and technology experts, and relevant state agencies. The consortium must submit its framework report to the Legislature by January 1, 2027, after which it dissolves. CalCompute provisions are operative only upon appropriation in a budget act. [18]
The CalCompute initiative aims to address the concentration of computing power among major technology companies (Amazon, Google, and Microsoft) by creating public infrastructure for safe and ethical AI research and innovation, accessible to startups, researchers, and public-interest organizations. [19]
Enforcement and Penalties
The California Attorney General enforces SB 53 through a tiered penalty structure, summarized below and illustrated in the sketch after this list:
- Unknowing violations with no material risk: up to $10,000 per violation (with a 30-day cure period for first offenses)
- Knowing violations with no material risk, or unknowing violations with material risk: up to $100,000 per violation
- Knowing violations involving material or catastrophic risk (first offense): up to $1,000,000 per violation
- Knowing violations involving material or catastrophic risk (subsequent offenses): up to $10,000,000 per violation [20]
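The tier logic can be summarized in a short Python sketch. The boolean parameters, the collapsing of "material or catastrophic risk" into a single flag, and the repeat-offense handling are simplifying assumptions; the statute's findings and per-violation accounting are more detailed.

```python
def max_civil_penalty_usd(knowing: bool, material_risk: bool, repeat_offense: bool) -> int:
    """Illustrative mapping of SB 53's tiered civil penalty caps (USD per violation)."""
    if knowing and material_risk:
        return 10_000_000 if repeat_offense else 1_000_000
    if knowing or material_risk:
        return 100_000
    return 10_000  # unknowing, no material risk; first offenses get a 30-day cure period

# Example: a first knowing violation involving catastrophic risk caps at $1 million.
assert max_civil_penalty_usd(knowing=True, material_risk=True, repeat_offense=False) == 1_000_000
```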
This penalty structure represents a significant reduction from SB 1047’s approach, which allowed penalties of up to 30% of compute costs. The cap at $1 million (or $10 million for repeat violations) was one of several concessions made to improve the bill’s chances of passage. [21]
Relationship to AI Safety Concerns
California SB 53 directly addresses several categories of AI safety risks, with particular emphasis on catastrophic and existential risks. The law’s definition of catastrophic risks explicitly includes scenarios central to AI safety discourse:
Loss of Control and Alignment Failures: The law defines catastrophic risk to include “the evasion of human control by a covered developer or a covered deployer of the frontier model,” directly targeting concerns about AI scheming and deceptive alignment. This provision recognizes that advanced AI systems might develop goals misaligned with their developers’ intentions and take actions to evade oversight or control mechanisms. [22]
Autonomous Harmful Capabilities: SB 53 identifies risks from frontier models enabling “the commission of a criminal offense or a cyberattack, without meaningful human review,” addressing concerns about AI systems autonomously executing harmful actions. This encompasses both misuse risks, where actors deliberately employ AI for harm, and accident scenarios where AI systems pursue harmful objectives without human intervention. [23]
Weapons Proliferation: The law specifically addresses risks of frontier models providing “expert-level assistance to people seeking to construct a chemical, biological, radiological, or nuclear weapon.” This provision reflects growing concerns within the AI safety community about advanced AI systems lowering barriers to developing weapons of mass destruction. [24]
Incident Reporting as Safety Intelligence: The mandatory incident reporting system serves multiple AI safety functions. By requiring developers to report unauthorized weight access, loss-of-control events, deceptive model behavior, and other critical incidents within 15 days (or 24 hours for imminent dangers), the law creates a structured mechanism for gathering empirical data about emerging AI risks. The Office of Emergency Services’ annual anonymized reports, beginning in 2027, will provide the AI safety research community with valuable information about the frequency and nature of concerning incidents. [25]
Transparency for Safety Research: The requirement for public safety frameworks enables external researchers, civil society organizations, and policymakers to evaluate developer approaches to catastrophic risk management. This transparency supports the broader AI safety ecosystem by allowing for critique, comparison, and identification of best practices across organizations. [26]
Criticisms and Controversies
Concessions from SB 1047
The most prominent criticism of SB 53 centers on the significant weakening of requirements compared to the vetoed SB 1047. The original bill included pre-training safety protocols, mandatory third-party audits, “kill switch” capabilities for shutting down dangerous models, 72-hour incident reporting deadlines, and penalties of up to 30% of compute costs. SB 53 eliminated all of these provisions, extending reporting deadlines to 15 days and capping fines at $1 million (with higher caps for repeat violations). Critics argue that these “considerable concessions to industry” may undermine the law’s effectiveness at preventing catastrophic outcomes. [27]
Limited Scope and High Thresholds
The law’s high thresholds—models trained with more than 10²⁶ FLOPs by developers with annual revenues exceeding $500 million—limit its application to approximately 5-8 companies. This narrow scope excludes smaller AI developers and does not address downstream misuse of models or risks from systems that fall just below the thresholds. Some critics contend that focusing exclusively on the largest developers may miss important risks emerging from the broader AI ecosystem. [28]
Lack of Preventive Mechanisms
Unlike SB 1047, which would have required pre-deployment safety testing and given regulators authority to prevent deployment of dangerous models, SB 53 focuses primarily on transparency and post-incident reporting. The law mandates disclosure of risk management approaches but does not require developers to demonstrate that their safety measures are adequate before training or deploying frontier models. Critics argue this reactive approach—relying on disclosure and incident reporting rather than pre-deployment intervention—may prove insufficient when dealing with potentially irreversible catastrophic risks. [29]
Vague Definitions and Enforcement Challenges
The law’s definitions of key terms like “critical safety incident” and “catastrophic risk” set high thresholds that may delay action until risks become severe. The reliance on self-reported safety frameworks without mandatory third-party audits raises questions about whether the Attorney General’s office will have sufficient resources and expertise to effectively evaluate compliance and enforce penalties. [30]
Preemption of Local Innovation
SB 53 explicitly preempts city and county regulations on frontier AI catastrophic risk management adopted after January 1, 2025. While supporters argue this creates beneficial statewide uniformity, critics contend that preemption limits the ability of local jurisdictions to develop innovative approaches tailored to their communities’ specific needs and risk profiles. [31]
Implementation and Recent Developments
Following the law’s signing on September 29, 2025, and its effective date of January 1, 2026, implementation is in its early stages. As of February 2026, no enforcement actions, reported incidents, or published safety frameworks are documented in public sources. Key upcoming milestones include:
- Ongoing (2026-2027): Large frontier developers must publish and maintain their safety and security frameworks on public websites, with annual updates. [32]
- January 1, 2027: The CalCompute consortium must submit its framework report to the Legislature, after which the consortium dissolves. [33]
- January 1, 2027: The California Office of Emergency Services begins publishing annual anonymized reports summarizing critical safety incidents reported by developers and the public. [34]
The California Department of Technology is required to regularly review AI advancements and propose updates to the law to keep pace with technological evolution. This provision reflects recognition that frontier AI capabilities are advancing rapidly and that regulatory frameworks must adapt accordingly. [35]
Legal and policy analysts note that SB 53’s influence likely extends beyond California’s borders due to the concentration of major AI companies in the state—California houses 32 of the world’s top 50 AI companies and received more than 50% of global venture capital funding for AI/ML startups in 2024. Companies complying with SB 53’s requirements for their California operations may adopt similar practices nationwide, similar to how the California Consumer Privacy Act (CCPA) influenced data protection practices across the United States. [36]
Comparison to Other AI Governance Efforts
SB 53 sits within a broader landscape of emerging AI governance initiatives at state, federal, and international levels:
New York’s RAISE Act: Proposed legislation in New York takes a stricter enforcement approach than SB 53, with fines up to $10 million for first violations and $30 million for repeat violations, compared to SB 53’s $1 million cap. The RAISE Act also targets a broader set of companies, applying to those spending more than $100 million on compute rather than using a revenue-based threshold. However, SB 53 provides more detailed requirements for public safety frameworks. [37]
Federal Policy: SB 53 emerges against a backdrop of limited federal AI regulation. Supporters frame the law as California stepping in to fill a void left by federal inaction, while questions remain about potential conflicts between state and federal approaches. The law includes provisions deferring to stricter federal rules if they emerge. [38]
EU AI Act: The European Union’s AI Act sets a lower compute threshold of 10²⁵ FLOPs for general-purpose AI models presumed to pose systemic risk (compared to SB 53’s 10²⁶ FLOPs) and includes more comprehensive requirements for high-risk AI systems. However, the EU approach regulates a broad range of AI applications rather than specifically targeting frontier models and catastrophic risks. [39]
Key Uncertainties
Several important questions about SB 53’s implementation and effectiveness remain unresolved:
- Adequacy of Transparency: Will public disclosure of safety frameworks, without mandatory third-party audits or pre-deployment testing, prove sufficient to prevent catastrophic risks from frontier AI systems?
- Enforcement Capacity: Does the California Attorney General’s office have adequate resources, technical expertise, and authority to effectively evaluate developer compliance with safety framework requirements and enforce penalties when violations occur?
- Threshold Appropriateness: As AI capabilities advance, will the 10²⁶ FLOP threshold and $500 million revenue requirement continue to capture the frontier models that pose the greatest catastrophic risks, or will dangerous capabilities emerge from systems that fall below these thresholds?
- CalCompute Viability: Will the California Legislature appropriate funding to operationalize CalCompute, and if established, will the public computing cluster successfully democratize access to AI infrastructure in practice?
- Federal Preemption: How will SB 53 interact with future federal AI legislation, and could federal preemption ultimately limit the law’s applicability?
- Incident Reporting Effectiveness: Will the 15-day reporting deadline (and 24-hour deadline for imminent risks) enable sufficiently rapid response to emerging threats, and will the Office of Emergency Services develop the capacity to meaningfully analyze and respond to reported incidents?
- Industry Adaptation: Will AI companies respond to SB 53 by genuinely improving their catastrophic risk management practices, or will compliance become a checkbox exercise with limited practical safety benefits?
Sources
Footnotes
1. California’s SB 53: The First Frontier AI Law Explained, Future of Privacy Forum
2. Senate Bill 53: Transparency Requirements for Large Developers, California Workplace Law Blog
3. California SB 53: Expanded Compliance Guide, Nelson Mullins
4. What Is California’s AI Safety Law?, Brookings Institution
5. California Assumes Role as Lead US Regulator of AI, Latham & Watkins
6. Understanding California’s SB 53 Law for AI Governance, HITRUST Alliance
7. California’s SB 53: The First Frontier AI Law Explained, Future of Privacy Forum
8. Governor Newsom Signs Senator Wiener’s Landmark AI Law, California State Senate
9. Transparency in Frontier AI Act: Standardized Disclosures, WilmerHale
10. California Assumes Role as Lead US Regulator of AI, Latham & Watkins
11. California’s SB 53: The First Frontier AI Law Explained, Future of Privacy Forum
12. Understanding California’s SB 53 Law for AI Governance, HITRUST Alliance
13. California’s SB 53: The First Frontier AI Law Explained, Future of Privacy Forum
14. California Assumes Role as Lead US Regulator of AI, Latham & Watkins
15. Senate Bill 53: Transparency Requirements for Large Developers, California Workplace Law Blog
16. What Is California’s AI Safety Law?, Brookings Institution
17. California’s SB 53: The First Frontier AI Law Explained, Future of Privacy Forum
18. Senate Bill 53: Transparency Requirements for Large Developers, California Workplace Law Blog
19. Transparency in Frontier AI Act: Standardized Disclosures, WilmerHale
20. California Assumes Role as Lead US Regulator of AI, Latham & Watkins
21. California Enacts SB 53: Responsible AI Governance, Sheppard Mullin
22. California’s SB 53: The First Frontier AI Law Explained, Future of Privacy Forum
23. What Is California’s AI Safety Law?, Brookings Institution
24. Transparency in Frontier AI Act: Standardized Disclosures, WilmerHale