Longterm Wiki

Center for AI Safety – Wikipedia

reference

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Useful background reference on one of the prominent AI safety organizations; helpful for understanding the institutional landscape and the 2023 extinction risk statement that attracted significant media and policy attention.

Metadata

Importance: 55/100 · wiki page · reference

Summary

Wikipedia's overview of the Center for AI Safety (CAIS), a nonprofit organization focused on reducing societal-scale risks from advanced AI systems. CAIS is known for publishing the 2023 statement on AI extinction risk signed by hundreds of leading AI researchers and for conducting technical safety research. The article covers the organization's founding, mission, key initiatives, and notable figures involved.

Key Points

  • CAIS is a nonprofit founded by Dan Hendrycks focused on reducing catastrophic and existential risks from AI systems.
  • Published the widely signed 2023 "Statement on AI Risk" warning of extinction-level risks, co-signed by prominent AI researchers including Geoffrey Hinton and Yoshua Bengio.
  • Conducts and funds technical AI safety research, including work on robustness, evaluation, and safety benchmarks.
  • Offers educational resources including an AI safety course and supports the broader AI safety research community.
  • Represents an institutionalized effort to mainstream AI existential risk concerns within the academic and policy communities.

Cited by 2 pages

Page                          Type          Quality
Center for AI Safety (CAIS)   Organization  42.0
Dan Hendrycks                 Person        19.0

Cached Content Preview

HTTP 200 · Fetched Apr 8, 2026 · 8 KB
Center for AI Safety - Wikipedia 
 From Wikipedia, the free encyclopedia 
 American AI safety research center 
 

 Center for AI Safety Formation 2022 &#59; 4 years ago  ( 2022 ) Founders Dan Hendrycks 
 Oliver Zhang
 Headquarters San Francisco , California , US Director Dan Hendrycks Website safe .ai 
 The Center for AI Safety ( CAIS ) is an American nonprofit organization based in San Francisco that promotes the safe development and deployment of artificial intelligence . CAIS' work encompasses research in technical AI safety and AI ethics , advocacy, and support to grow the AI safety research field. [ 1 ] [ 2 ] It was founded in 2022 by Dan Hendrycks and Oliver Zhang. [ 3 ] 

In May 2023, CAIS published the statement on AI risk of extinction, signed by hundreds of AI professors, leaders of major AI companies, and other public figures.[4][5][6][7][8]

 
Research

CAIS researchers published "An Overview of Catastrophic AI Risks", which details risk scenarios and risk mitigation strategies. Risks described include the use of AI in autonomous warfare or for engineering pandemics, as well as AI capabilities for deception and hacking.[9][10] Another work, conducted in collaboration with researchers at Carnegie Mellon University, described an automated way to discover adversarial attacks on large language models that bypass safety measures, highlighting the inadequacy of current safety systems.[11][12]

Activities

In 2023, CAIS' statement on AI risk of extinction gained the support of Anthropic's Dario Amodei, OpenAI's Sam Altman, and other AI industry leaders.[13][14] That same year, the cryptocurrency exchange FTX (which went bankrupt in November 2022) attempted to recoup $6.5 million that it had donated to CAIS earlier that year.[15][16]

Other initiatives include a compute cluster to support AI safety research, an online course titled "Intro to ML Safety", and a fellowship for philosophy professors to address conceptual problems.[10] The Center for AI Safety Action Fund is a sponsor of the California bill SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.[17]

See also

AI safety
Center for Human-Compatible AI
 
References

^ "AI poses risk of extinction, tech leaders warn in open letter. Here's why alarm is spreading". USA Today. May 31, 2023.

^ "Our Mission | CAIS". www.safe.ai. Retrieved April 13, 2023.

^ Edwards, Benj (May 30, 2023). "OpenAI execs warn of "risk of extinction" from artificial intelligence in new open letter". Ars Technica.

^ Cent

... (truncated, 8 KB total)
Resource ID: 0c57ac12fb1e760b | Stable ID: sid_tc0S4oQbrQ