Longterm Wiki
policy-stakeholder

Center for AI Safety on Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

Child of Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

Metadata

Source Table: policy_stakeholders
Source ID: br7vCVZ1ME
Source URL: safesecureai.org/support
Parent: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Children: (none)
Created: Apr 15, 2026, 5:56 AM
Updated: Apr 15, 2026, 5:56 AM
Synced: Apr 15, 2026, 5:56 AM

Record Data

id: br7vCVZ1ME
policyEntityId: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (policy)
stakeholderEntityId: Center for AI Safety (CAIS) (organization)
stakeholderDisplayName: Center for AI Safety
position: support
importance: high
reason: Co-sponsored SB 1047 through its CAIS Action Fund; helped build a broad coalition including 70+ academic researchers, 120+ frontier AI company employees, unions, and advocacy organizations
source: safesecureai.org/support
context:
[
  "Funded by Open Philanthropy (~$15M total grants)",
  "Director Dan Hendrycks (also listed as supporter) co-authored the Statement on AI Risk",
  "Open Philanthropy also funds Anthropic (mixed position) and supported MIRI"
]

Source Check Verdicts

unverifiable (95% confidence)

Last checked: 4/13/2026

The record claims 'Center for AI Safety (unknown)' is a stakeholder in the SB 1047 policy. The source text contains extensive content about SB 1047 support, including letters and statements from multiple organizations and individuals, but does not mention the Center for AI Safety anywhere in the provided excerpt. The source neither confirms nor contradicts the claim; it simply does not address whether the Center for AI Safety is a stakeholder on this policy. The claim is therefore unverifiable from the given source material.

Debug info

Thing ID: br7vCVZ1ME

Source Table: policy_stakeholders

Source ID: br7vCVZ1ME

Parent Thing ID: sid_XcGTez1oFw