policy-stakeholder
Elicit on Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Metadata
| Source Table | policy_stakeholders |
| Source ID | y2vmdmpced |
| Source URL | safesecureai.org/support |
| Parent | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act |
| Children | — |
| Created | Mar 21, 2026, 1:30 AM |
| Updated | Mar 21, 2026, 3:12 PM |
| Synced | Mar 21, 2026, 3:12 PM |
Record Data
| id | y2vmdmpced |
| policyEntityId | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (policy) |
| stakeholderEntityId | Elicit (AI Research Tool) (organization) |
| stakeholderDisplayName | Elicit |
| position | support |
| importance | low |
| reason | AI research assistant company that endorsed SB 1047, adding to the coalition of AI companies demonstrating that safety regulation is compatible with building useful AI products |
| source | safesecureai.org/support |
| context | — |
Source Check Verdicts
Unverifiable (95% confidence)
Last checked: Apr 14, 2026
The record claims 'Elicit (unknown)' is a stakeholder on the SB 1047 support page. The source text provided is the full content of the 'Support' page for SB 1047, which lists numerous supporters, including Anthropic, Notion, Imbue, Geoffrey Hinton, Yoshua Bengio, Vitalik Buterin, Evan Conrad, Lieutenant General Jack Shanahan, Andrew C. Weber, Elon Musk, and Aidan Mclaughlin, as well as organizational supporters such as AI Future, Encode Justice, and Economic Security California. The organization 'Elicit' is not mentioned anywhere in this source material. Without evidence of Elicit appearing on this page, the claim cannot be verified from the provided source.