policy-stakeholder
Center for AI Safety on Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Child of Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Metadata
| Source Table | policy_stakeholders |
| Source ID | br7vCVZ1ME |
| Source URL | safesecureai.org/support |
| Parent | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act |
| Children | — |
| Created | Apr 15, 2026, 5:56 AM |
| Updated | Apr 15, 2026, 5:56 AM |
| Synced | Apr 15, 2026, 5:56 AM |
Record Data
| id | br7vCVZ1ME |
| policyEntityId | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (policy) |
| stakeholderEntityId | Center for AI Safety (CAIS) (organization) |
| stakeholderDisplayName | Center for AI Safety |
| position | support |
| importance | high |
| reason | Co-sponsored SB 1047 through its CAIS Action Fund; helped build a broad coalition including 70+ academic researchers, 120+ frontier AI company employees, unions, and advocacy organizations |
| source | safesecureai.org/support |
| context | ["Funded by Open Philanthropy (~$15M total grants)", "Director Dan Hendrycks (also listed as supporter) co-authored the Statement on AI Risk", "Open Philanthropy also funds Anthropic (mixed position) and supported MIRI"] |
Source Check Verdicts
Unverifiable (95% confidence)
Last checked: Apr 13, 2026
The record claims 'Center for AI Safety (unknown)' is a stakeholder in the SB 1047 policy. The source text contains extensive material on SB 1047 support, including letters and statements from multiple organizations and individuals, but does not mention the Center for AI Safety anywhere in the provided excerpt. The source neither confirms nor contradicts the claim; it simply does not address whether the Center for AI Safety is a stakeholder on this policy. The claim is therefore unverifiable from the given source material.
Debug info
| Thing ID | br7vCVZ1ME |
| Source Table | policy_stakeholders |
| Source ID | br7vCVZ1ME |
| Parent Thing ID | sid_XcGTez1oFw |