Longterm Wiki

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Data Status

Not fetched

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Feb 25, 2026 · 236 KB
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - Wikipedia

California bill

- Full name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
- Legislature: California State Legislature
- Introduced: February 7, 2024
- Assembly voted: August 28, 2024 (48–16)
- Senate voted: August 29, 2024 (30–9)
- Sponsor: Scott Wiener
- Governor: Gavin Newsom
- Bill: SB 1047
- Website: Bill Text
- Status: Not passed (vetoed by Governor on September 29, 2024)

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, was a failed [1] 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist". [2] Specifically, the bill would have applied to models that cost more than $100 million to train and were trained using a quantity of computing power greater than 10^26 integer or floating-point operations. [3] SB 1047 would have applied to all AI companies doing business in California, regardless of where the company was located. [4] The bill would have created protections for whistleblowers [5] and required developers to perform risk assessments of their models prior to release, with guidance from the Government Operations Agency. It would also have established CalCompute, a University of California public cloud computing cluster for startups, researchers, and community groups.

Background

The rapid increase in the capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, caused some researchers and members of the public to become concerned about the existential risks associated with increasingly powerful AI systems. [6][7] Hundreds of tech executives and AI researchers, including two of the so-called "Godfathers of AI", Geoffrey Hinton and Yoshua Bengio, signed a statement in May 2023 calling for the mitigation of the "risk of extinction from AI" to be a global priority alongside "pandemics and nuclear war". [8] However, the plausibility of these risks is still widely debated. [9] Strong regulation of AI has been criticized for purportedly causing regulatory capture by large AI companies like OpenAI, a phenomenon in which regulation advances the interests of larger companies at the expense of smaller competitors and the public in general, [7] although OpenAI ultimately opposed the bill. [10] Other advocates of AI regulation aim to prevent bias and privacy violations rather than existential risks. [7] For example, some experts who view existential concerns as overblown and unrealistic regard them as a distraction from near-term harms of AI such as discriminatory automated decision-making. [11] In the face of existential concerns, technology companies have made

... (truncated, 236 KB total)
Resource ID: 9607d725074dfe2e | Stable ID: N2M3MjZkMD