New York RAISE Act
The New York RAISE Act is the first comprehensive state-level AI safety legislation with enforceable requirements for frontier AI developers, establishing mandatory safety protocols, incident reporting, and third-party audits. Although significantly weakened from its original form through amendments, it sets an important precedent for state AI regulation and provides actionable compliance frameworks for major AI companies.
Quick Assessment
| Dimension | Assessment |
|---|---|
| Type | State legislation regulating frontier AI development |
| Status | Signed December 19, 2025; effective January 1, 2027 |
| Scope | Large developers of frontier models ($100M+ compute spend) |
| Key Mechanism | Mandatory safety protocols, third-party audits, incident reporting |
| Enforcement | NY Attorney General; $1M-$3M civil penalties |
| Similar Initiatives | California's Transparency in Frontier AI Act (TFAIA) |
Key Links
| Source | Link |
|---|---|
| Official Website | nyassembly.gov |
Overview
The New York Responsible Artificial Intelligence Safety and Education (RAISE) Act (S6953B/A6453B) is state legislation signed by Governor Kathy Hochul on December 19, 2025, that establishes comprehensive safety and transparency requirements for developers of frontier AI models.1 The law takes effect January 1, 2027, and represents one of the first state-level attempts to mandate enforceable safety measures for the most powerful AI systems.
The Act applies specifically to "large developers" training frontier models—AI systems trained using more than 10²⁶ FLOPs at an aggregate compute cost exceeding $100 million.2 It requires them to adopt written safety protocols before deployment, conduct annual third-party audits, and report safety incidents to state authorities within 72 hours.3
The legislation emerged from bipartisan concern about AI risks such as biological weapon design assistance, self-replication, deception, automated crime, and model theft, amid what legislators described as a lack of adequate federal regulation.4 After passing the New York State Legislature in June 2025 with overwhelming support (backed by 84% of New Yorkers according to sponsors), the bill was amended to align more closely with California's TFAIA before being signed into law.5
Legislative History
Origins and Sponsorship
The RAISE Act was sponsored by State Senator Andrew Gounardes and Assemblymember Alex Bores, who introduced the legislation to address safety risks from frontier AI models.6 The sponsors emphasized that the bill targeted only the largest AI developers—those spending over $100 million on training—without stifling innovation from smaller companies or startups.7
Assemblymember Bores highlighted the strong public support for the legislation, noting that 84% of New Yorkers backed the commonsense safeguards and that AI safety experts had been calling urgently for such regulation.8 Senator Gounardes framed the bill as prioritizing safety over Big Tech profits while still enabling AI innovation.9
Legislative Process and Amendments
The bill was introduced in early 2025 and passed the New York State Legislature in June 2025 with overwhelming bipartisan support.10 However, the original version contained significantly stronger provisions and penalties than what was ultimately enacted.
The initial legislative version included:
- Civil penalties of up to $10 million for first violations and $30 million for subsequent violations
- A deployment prohibition that would have barred models posing "unreasonable risk of critical harm"
- Stricter compliance requirements11
Following the bill's passage, Governor Hochul negotiated amendments with legislative sponsors to reduce the regulatory burden and align New York's approach with California's recently enacted TFAIA (SB 53), which was signed into law in September 2025.12 The final amended version scaled back penalties to $1 million for first violations and $3 million for subsequent violations, removed the deployment ban, and shifted focus more toward transparency and reporting rather than pre-deployment prohibitions.13
Governor Hochul signed the amended RAISE Act into law on December 19, 2025, calling it "nation-leading legislation" that establishes a strong and sensible standard for AI transparency and safety amid federal inaction.14
Key Requirements
Covered Entities
The RAISE Act applies to "large developers" of frontier AI models, defined through two primary thresholds:
- Compute threshold: Models trained using more than 10²⁶ floating point operations (FLOPs) with aggregate compute costs exceeding $100 million15
- Revenue threshold: Developers with annual revenue exceeding $500 million (added in post-passage amendments)16
The law explicitly exempts accredited universities from compliance requirements, focusing enforcement exclusively on commercial AI developers.17
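The two coverage thresholds above can be sketched as a small calculation. This is purely illustrative: the Act defines no formula, the conjunction of the tests reflects one plausible reading of the statute, and the 6·N·D training-compute estimate is a common rule of thumb from the scaling-law literature, not anything the law prescribes.

```python
# Hypothetical sketch of the RAISE Act's coverage thresholds.
# All names and the 6*N*D approximation are illustrative assumptions,
# not statutory definitions.

FLOP_THRESHOLD = 1e26          # statutory compute threshold (10^26 FLOPs)
COST_THRESHOLD_USD = 100e6     # aggregate compute cost threshold ($100M)
REVENUE_THRESHOLD_USD = 500e6  # revenue threshold added by amendment ($500M)

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def is_large_developer(params: float, tokens: float,
                       compute_cost_usd: float,
                       annual_revenue_usd: float) -> bool:
    """One illustrative reading: frontier-model tests plus revenue test."""
    frontier_model = (estimated_training_flops(params, tokens) > FLOP_THRESHOLD
                      and compute_cost_usd > COST_THRESHOLD_USD)
    return frontier_model and annual_revenue_usd > REVENUE_THRESHOLD_USD

# A 2-trillion-parameter model trained on 20 trillion tokens lands around
# 2.4e26 FLOPs under this rule of thumb, comfortably above the threshold.
print(is_large_developer(2e12, 20e12, 150e6, 1e9))
```

Under this sketch, a developer below any one of the three thresholds would fall outside the law's definition of a "large developer."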
Safety and Security Protocols
Large developers must develop, publish, and continuously maintain written safety and security protocols before deploying frontier models.18 These protocols must address:
- Risk identification and mitigation for "critical harm"—defined as incidents causing death or serious bodily injury to 100+ people, damage exceeding $1 billion, assistance in creating weapons of mass destruction, or autonomous dangerous behavior19
- Cybersecurity measures to protect models from theft, unauthorized access, or model escape
- Testing procedures to evaluate model capabilities and potential risks, including self-replication, deception, biological weapon design assistance, and large-scale automated criminal activity20
- Internal governance structures, including designation of a senior compliance officer responsible for protocol implementation21
Developers must publish their safety protocols with appropriate redactions for trade secrets and privacy concerns, while providing full access to the New York Attorney General and Division of Homeland Security and Emergency Services.22
Annual Reviews and Audits
The Act mandates that large developers conduct:
- Annual safety reviews of their protocols, updating them as needed based on new risks or capabilities
- Independent third-party audits to verify compliance with safety requirements and assess the effectiveness of risk mitigation measures23
These ongoing evaluation requirements are intended to ensure that safety measures evolve alongside rapidly advancing AI capabilities.
Incident Reporting
Developers must report "safety incidents" relating to frontier models to the New York Attorney General and DHSES within 72 hours of discovery.24 Reportable incidents include unauthorized access, model misuse, and critical control failures, and developers must also report cases where they reasonably believe an incident has occurred.24
The RAISE Act's legislative memo outlined key concerns that informed these provisions, including testing that revealed models attempting self-replication and deception, risks related to biological weapon design assistance, and industry concerns about the lack of federal regulation.25
Reports must include the date of the incident, the reasons the incident qualifies as a safety incident, and a short and plain statement describing what occurred.26
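The required report contents and the 72-hour clock can be sketched as follows. The field names and JSON shape are invented for illustration; the Act specifies what a report must contain, not its format or any submission mechanism.

```python
# Hypothetical sketch of a RAISE Act safety-incident report and its
# 72-hour reporting deadline. Structure and names are assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta
import json

REPORTING_WINDOW = timedelta(hours=72)  # clock runs from discovery

@dataclass
class SafetyIncidentReport:
    incident_date: str         # date of the incident
    qualifying_reasons: str    # why it meets the "safety incident" definition
    plain_statement: str       # short and plain description of what occurred

def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest time a report may reach the Attorney General and DHSES."""
    return discovered_at + REPORTING_WINDOW

report = SafetyIncidentReport(
    incident_date="2027-03-01",
    qualifying_reasons="Unauthorized access to frontier model weights",
    plain_statement="An external actor obtained credentials and copied "
                    "a model checkpoint before access was revoked.",
)
print(json.dumps(asdict(report), indent=2))
print(reporting_deadline(datetime(2027, 3, 2, 9, 0)))  # 2027-03-05 09:00:00
```

A compliance program would presumably anchor the deadline to the documented time of discovery, since that is when the statutory window opens.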
Prohibition on High-Risk Deployment
Although the amendments removed the original standalone deployment prohibition, the final law still bars developers from deploying models that their own testing and safety evaluations show pose an "unreasonable risk of critical harm."27 The law also bans the use of knowledge distillation techniques to create smaller models that mimic the dangerous capabilities of larger frontier models.28
Enforcement and Oversight
Department of Financial Services Oversight Office
The RAISE Act creates a new oversight office within the New York Department of Financial Services (NYDFS) to implement and enforce the legislation.29 This office is responsible for:
- Evaluating large developers and maintaining a public list of entities subject to the law
- Assessing fees on covered developers to fund oversight activities
- Issuing regulations and guidance on compliance requirements
- Publishing annual reports on AI safety in New York, including information about incidents, compliance, and emerging risks
- Exercising broad rulemaking authority to require additional disclosures or safety measures as AI technology evolves30
The choice of NYDFS reflects the department's established expertise in cybersecurity enforcement, particularly through its aggressive implementation of Part 500 cybersecurity regulations for financial institutions.31
Attorney General Enforcement
The New York Attorney General has exclusive enforcement authority under the RAISE Act, with no private right of action for individuals or organizations.32 The AG can:
- Bring civil actions against non-compliant developers for failing to comply with reporting obligations or for making false statements
- Seek injunctive relief to prevent deployment of dangerous models
- Impose civil penalties of up to $1 million for first violations and $3 million for subsequent violations33
Developers may defend against enforcement actions by demonstrating that critical harm was caused by third-party misuse rather than inherent model deficiencies.33
Whistleblower Protections
The Act includes whistleblower protections for employees who report safety concerns or violations to state authorities.34 It also voids contractual provisions that would shift liability away from developers or attempt to structure corporate entities in bad faith to evade the law's requirements, allowing courts to pierce the corporate veil in such cases.35
Relationship to AI Safety
The RAISE Act directly addresses several core concerns in AI safety research and policy:
Catastrophic Risk Mitigation
The legislation's focus on "critical harm"—including biological weapons, large-scale damage, and autonomous dangerous behavior—aligns with long-standing concerns about catastrophic risks from advanced AI systems.36 By requiring developers to proactively assess and mitigate risks before deployment, the law attempts to prevent scenarios where AI capabilities enable unprecedented harm.
The Act's specific mention of risks like self-replication and deception reflects emerging technical concerns about AI systems that could resist human control or pursue goals contrary to human values.37 Legislative memos supporting the bill cited industry testing that revealed models exhibiting these concerning capabilities, providing empirical justification for regulatory intervention.38
Transparency and Accountability
By mandating publication of safety protocols and requiring incident reporting, the RAISE Act addresses the opacity problem in frontier AI development.39 Many leading AI companies had made voluntary commitments to safety practices, but the law makes these commitments legally enforceable and subject to independent verification through third-party audits.40
The transparency requirements enable state authorities, researchers, and the public to better understand what safety measures are actually being implemented by frontier AI developers, rather than relying solely on corporate assurances.
Limitations for Alignment Research
While the RAISE Act establishes important safety requirements, it does not directly fund or mandate technical AI alignment research. The law requires developers to meet transparency and disclosure requirements, including making their safety and security protocols available to relevant authorities, and to conduct annual safety reviews and independent third-party audits. However, it does not specify particular technical approaches to ensuring AI systems behave safely and in accordance with human values.41
The Act's effectiveness therefore depends substantially on the state of the art in AI safety research—if effective methods for preventing catastrophic AI risks do not exist or remain uncertain, compliance with the law's procedural requirements may not guarantee safety outcomes.
Comparison to California's TFAIA
The RAISE Act was explicitly amended to align with California's Transparency in Frontier AI Act (TFAIA, formerly SB 53), which was enacted in September 2025.42 Both laws target frontier AI developers with similar transparency and reporting requirements, but with notable differences:
Similarities
- Both apply to developers of frontier models based on compute thresholds (though exact definitions vary)
- Both require written safety protocols and incident reporting
- Both establish civil penalties for non-compliance (California caps at $1 million per violation)43
- Both are enforced by state attorneys general with no private right of action
Key Differences
- Reporting timeline: New York requires 72-hour incident reporting, while California allows 15 days for general incidents and 24 hours for imminent harm44
- Oversight structure: New York creates a dedicated office in the Department of Financial Services with broad rulemaking authority, while California has a different implementation structure45
- Revenue threshold: New York's final amendments included a $500 million revenue threshold not present in early versions46
The amendments to bring New York's law closer to California's approach reflect a stated goal of creating a "unified benchmark" among major technology states, rather than imposing conflicting requirements on AI developers.47
Criticisms and Controversies
Industry Concerns
The RAISE Act's focus on regulating the development process rather than post-deployment harms has drawn criticism from some industry groups and commentators. Critics have compared it to California's failed SB 1047 (vetoed by Governor Newsom in September 2024), arguing that mandating pre-deployment safety protocols, audits, and testing imposes high compliance burdens on AI companies without proven safety benefits.48
Some critics contend that attempting to regulate transparency, safety, and liability in a single framework creates a problematic concentration of authority in a single regulator (the Department of Financial Services).49 They argue this approach lacks the specialization and nuance needed for effective AI governance.
Weakening Through Amendments
AI safety advocates and some legislators viewed the post-passage amendments as significantly weakening the law's effectiveness. The removal of the deployment ban for high-risk models and the reduction of penalties from $10 million/$30 million to $1 million/$3 million were seen as industry-influenced concessions that reduced the law's deterrent effect.50
The shift in focus from prohibition to transparency and reporting led some supporters of the original bill to characterize the final version as more of a disclosure regime than a robust safety framework.51
Federal Preemption Concerns
The RAISE Act was signed just days after President Trump issued a December 2025 executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," which challenged state AI laws and called for federal preeminence in AI regulation.52 This raises questions about whether New York's law could face federal legal challenges on preemption grounds.
Legal analysts have noted that the Act may also face First Amendment challenges based on "compelled speech" doctrines, as it requires developers to publish information about their safety protocols and practices.53 The ultimate constitutional status of these requirements remains uncertain pending potential litigation.
Limited Scope
By focusing on the largest AI companies that have spent over $100 million in computational resources to train advanced AI models, the RAISE Act targets only the most urgent, severe risks.54 This means many AI systems outside these thresholds are not covered by the law's requirements, and dangerous capabilities could potentially emerge from smaller models, fine-tuned systems, or open-source projects that fall below the law's criteria.
Additionally, the law does not address risks from AI deployment and use by entities other than the original developers, potentially creating gaps in coverage for scenarios where critical harm arises from downstream applications.
Implementation and Timeline
The RAISE Act takes effect January 1, 2027, giving developers approximately one year from the signing date to establish compliance programs.55 Legal analysts have advised companies potentially subject to the law to begin preparation immediately, including:
- Reviewing existing AI governance structures and safety practices
- Establishing cross-functional teams spanning legal, technical development, and incident response functions
- Developing protocols for the 72-hour incident reporting requirement
- Identifying which models meet the frontier model definition and compute thresholds56
A DFS oversight office is expected to be established through chapter amendments to evaluate large frontier developers and promote transparency, with those amendments set to be enacted in January 2026 to clarify compliance expectations before the effective date.57 As of early 2026, the final text incorporating all chapter amendments had not yet been fully published, creating some uncertainty about precise requirements.58
Significance for AI Policy
The RAISE Act positions New York as the second state after California to enact comprehensive frontier AI safety legislation, establishing what supporters characterize as a "unified benchmark" for AI regulation among major technology states.59 In the absence of federal legislation specifically addressing catastrophic AI risks, state-level efforts like the RAISE Act represent the primary governance framework for frontier AI development in the United States.
The law's enactment demonstrates that bipartisan legislative support exists for AI safety regulation, at least at the state level, despite industry lobbying and concerns about economic competitiveness.60 The strong public support noted by sponsors (84% of New Yorkers) suggests that AI risk concerns resonate with voters beyond the AI safety research community.61
Whether the RAISE Act effectively reduces catastrophic AI risks will depend on multiple factors: the quality of safety protocols developers implement, the rigor of third-party audits, the enforcement priorities and resources of the Attorney General and oversight office, and ultimately whether the current state of AI safety research provides adequate methods for preventing the critical harms the law seeks to address.
Key Uncertainties
Several important questions about the RAISE Act remain unresolved:
- Will federal preemption challenges succeed? The relationship between state AI safety laws and federal authority remains legally uncertain, particularly following the December 2025 executive order.
- How will "unreasonable risk of critical harm" be interpreted? The law's prohibition on deploying high-risk models depends on this undefined standard, which may be clarified through regulatory guidance or enforcement actions.
- Will other states follow suit? If New York and California's approach becomes a template for other states, AI developers could face a complex patchwork of requirements; alternatively, state coordination could create de facto national standards.
- Can third-party auditors effectively assess frontier AI risks? The law assumes independent auditors can meaningfully evaluate cutting-edge AI systems for catastrophic risks, but this capability may not currently exist at scale.
- What enforcement priorities will emerge? With limited resources and many potential areas of focus, the Attorney General's enforcement decisions will substantially shape the law's practical impact.
Sources
Footnotes
1. Citation rc-15c0 (data unavailable — rebuild with wiki-server access)
2. Citation rc-b588 (data unavailable — rebuild with wiki-server access)
3. Citation rc-06d2 (data unavailable — rebuild with wiki-server access)
4. Citation rc-7b34 (data unavailable — rebuild with wiki-server access)
5. Citation rc-029b (data unavailable — rebuild with wiki-server access)
6. Citation rc-8788 (data unavailable — rebuild with wiki-server access)
7. Citation rc-e438 (data unavailable — rebuild with wiki-server access)
8. Citation rc-9b58 (data unavailable — rebuild with wiki-server access)
9. Citation rc-04d3 (data unavailable — rebuild with wiki-server access)
10. Citation rc-ee32 (data unavailable — rebuild with wiki-server access)
11. Citation rc-267a (data unavailable — rebuild with wiki-server access)
12. Citation rc-8a9c (data unavailable — rebuild with wiki-server access)
13. Citation rc-870f (data unavailable — rebuild with wiki-server access)
14. Citation rc-f92c (data unavailable — rebuild with wiki-server access)
15. Citation rc-4a80 (data unavailable — rebuild with wiki-server access)
16. Citation rc-14dc (data unavailable — rebuild with wiki-server access)
17. Citation rc-91a3 (data unavailable — rebuild with wiki-server access)
18. Citation rc-b603 (data unavailable — rebuild with wiki-server access)
19. Citation rc-7a5e (data unavailable — rebuild with wiki-server access)
20. Citation rc-0b6d (data unavailable — rebuild with wiki-server access)
21. Citation rc-927f (data unavailable — rebuild with wiki-server access)
22. Citation rc-2fad (data unavailable — rebuild with wiki-server access)
23. Citation rc-40cd (data unavailable — rebuild with wiki-server access)
24. Citation rc-f3fa (data unavailable — rebuild with wiki-server access)
25. Citation rc-0f65 (data unavailable — rebuild with wiki-server access)
26. Citation rc-9c90 (data unavailable — rebuild with wiki-server access)
27. Citation rc-c375 (data unavailable — rebuild with wiki-server access)
28. Citation rc-9cc1 (data unavailable — rebuild with wiki-server access)
29. Citation rc-0f4a (data unavailable — rebuild with wiki-server access)
30. Citation rc-bb74 (data unavailable — rebuild with wiki-server access)
31. Citation rc-a2b5 (data unavailable — rebuild with wiki-server access)
32. Citation rc-6687 (data unavailable — rebuild with wiki-server access)
33. Citation rc-9c4a (data unavailable — rebuild with wiki-server access)
34. Citation rc-de97 (data unavailable — rebuild with wiki-server access)
35. Citation rc-7d69 (data unavailable — rebuild with wiki-server access)
36. Citation rc-07d6 (data unavailable — rebuild with wiki-server access)
37. Citation rc-9f94 (data unavailable — rebuild with wiki-server access)
38. Citation rc-a389 (data unavailable — rebuild with wiki-server access)
39. Citation rc-9b09 (data unavailable — rebuild with wiki-server access)
40. Citation rc-8a77 (data unavailable — rebuild with wiki-server access)
41. Citation rc-f79b (data unavailable — rebuild with wiki-server access)
42. Citation rc-94c8 (data unavailable — rebuild with wiki-server access)
43. Citation rc-6312 (data unavailable — rebuild with wiki-server access)
44. Citation rc-61ed (data unavailable — rebuild with wiki-server access)
45. Citation rc-1974 (data unavailable — rebuild with wiki-server access)
46. Citation rc-a2b7 (data unavailable — rebuild with wiki-server access)
47. Citation rc-577d (data unavailable — rebuild with wiki-server access)
48. Citation rc-2668 (data unavailable — rebuild with wiki-server access)
49. Citation rc-7cd7 (data unavailable — rebuild with wiki-server access)
50. Citation rc-6eec (data unavailable — rebuild with wiki-server access)
51. Citation rc-069b (data unavailable — rebuild with wiki-server access)
52. Citation rc-2f82 (data unavailable — rebuild with wiki-server access)
53. Citation rc-eea5 (data unavailable — rebuild with wiki-server access)
54. Citation rc-be9f (data unavailable — rebuild with wiki-server access)
55. Citation rc-376d (data unavailable — rebuild with wiki-server access)
56. Citation rc-a8a8 (data unavailable — rebuild with wiki-server access)
57. Citation rc-c6b0 (data unavailable — rebuild with wiki-server access)
58. Citation rc-d8e1 (data unavailable — rebuild with wiki-server access)
59. Citation rc-9257 (data unavailable — rebuild with wiki-server access)
60. Citation rc-d2d2 (data unavailable — rebuild with wiki-server access)
61. Citation rc-4d77 (data unavailable — rebuild with wiki-server access)