Elicit on Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Child of Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Metadata
| Source Table | policy_stakeholders |
| Source ID | NitTemTj6T |
| Source URL | safesecureai.org/support |
| Parent | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act |
| Children | — |
| Created | Apr 15, 2026, 7:19 AM |
| Updated | Apr 15, 2026, 7:19 AM |
| Synced | Apr 15, 2026, 7:19 AM |
Record Data
| id | NitTemTj6T |
| policyEntityId | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (policy) |
| stakeholderEntityId | Elicit (AI Research Tool) (organization) |
| stakeholderDisplayName | Elicit |
| position | support |
| importance | low |
| reason | AI research assistant company that endorsed SB 1047, adding to the coalition of AI companies demonstrating that safety regulation is compatible with building useful AI products |
| source | safesecureai.org/support |
| context | — |
Source Check Verdicts
Last checked: 4/14/2026
The record claims 'Elicit (unknown)' is a stakeholder listed on the SB 1047 support page. The source text provided is the full content of the safesecureai.org/support page, which lists numerous supporters (Anthropic, Notion, Imbue, Geoffrey Hinton, Yoshua Bengio, Vitalik Buterin, Evan Conrad, Jack Shanahan, Andrew Weber, Elon Musk, Aidan Mclaughlin, and organizational supporters such as AI Future, Encode Justice, and Economic Security California). The name 'Elicit' does not appear anywhere in this source material. Without evidence that Elicit is mentioned as a stakeholder in the provided source, the claim cannot be verified from this document.
Debug info
Parent Thing ID: sid_XcGTez1oFw