Longterm Wiki

Geoffrey Hinton on Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

Metadata

Source Table: policy_stakeholders
Source ID: iyhwab6hva
Source URL: time.com/7008947/california-ai-bill-letter/
Parent: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Children:
Created: Mar 21, 2026, 1:30 AM
Updated: Mar 21, 2026, 3:12 PM
Synced: Mar 21, 2026, 3:12 PM

Record Data

id: iyhwab6hva
policyEntityId: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (policy)
stakeholderEntityId: Geoffrey Hinton (person)
stakeholderDisplayName: Geoffrey Hinton
position: support
importance: high
reason: Co-authored the August 7, 2024 expert letter calling SB 1047 a "very sensible approach"; stated "it's critical that we have legislation with real teeth to address the risks" and endorsed the AI employee support letter
source: time.com/7008947/california-ai-bill-letter/
context:
[
  "Former Google VP; resigned May 2023 to speak freely about AI risks",
  "2024 Nobel Prize in Physics (with John Hopfield) for neural network foundations",
  "Co-authored foundational deep learning papers with Bengio (also supporter)",
  "Former advisor to Google DeepMind (which opposes the bill)…

Source Check Verdicts

confirmed (95% confidence)

Last checked: 4/9/2026

The record identifies Geoffrey Hinton as a stakeholder in the policy context of California's AI safety bill (SB 1047). The source text explicitly confirms Hinton's involvement as a co-author of the expert letter supporting the bill. The "unknown" designation in the record appears to refer to an unknown affiliation or role, which the source does not contradict: the source simply identifies him as a renowned professor and Turing Award winner without specifying a current institutional affiliation in this excerpt. The core claim that Hinton is a stakeholder in this policy is directly confirmed.
