Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Center for AI Safety

The Center for AI Safety (CAIS, safe.ai) is a key institutional player in the AI safety ecosystem, known for convening researchers and publishing the 2023 AI risk statement; this page serves as an entry point to their work and team.

Metadata

Importance: 55/100
homepage

Summary

The Center for AI Safety (CAIS) is a nonprofit organization focused on reducing societal-scale risks from advanced AI systems. The about page outlines its mission, team, and core research and advocacy activities aimed at ensuring that AI development benefits humanity. CAIS works across technical safety research, policy engagement, and public education.

Key Points

  • CAIS is a nonprofit dedicated to reducing large-scale risks posed by advanced AI systems through research and advocacy.
  • The organization engages in technical AI safety research, policy work, and public awareness efforts.
  • CAIS produced the widely cited 2023 statement on AI extinction risk, signed by hundreds of AI researchers and experts.
  • The center supports the broader ecosystem of AI safety researchers through grants, fellowships, and collaborative programs.
  • CAIS plays an important bridging role between academic AI safety research and mainstream policy and public discourse.

Cited by 1 page

Page                  Type          Quality
Center for AI Safety  Organization  42.0

3 FactBase facts citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 4 KB
About Us | CAIS

 Why we exist 

 CAIS exists to ensure the safe development and deployment of AI 

 AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.

 What we do 

 AI safety is highly neglected. CAIS reduces societal-scale risks from AI through research, field-building, and advocacy.

 Research 

CAIS conducts research focused solely on improving the safety of AIs. Through our research initiatives, we aim to identify and address AI safety issues before they become significant problems.

Activities:

Identifying and removing dangerous behaviors in AIs
Studying deceptive, Machiavellian, and other unethical behavior in AIs
Training AIs to behave morally
Improving the reliability of AIs
Improving the security of AIs

Field-building

 CAIS grows the AI safety research field through funding, research infrastructure, and educational resources. We aim to create a thriving research ecosystem that will drive progress towards safe AI.

Activities:

Providing top researchers with compute and technical infrastructure
Running multidisciplinary fellowships focused on AI safety
Interfacing with the global research community
Running competitions and workshops
Creating educational materials

Advocacy

CAIS advises industry leaders, policymakers, and other labs to bring AI safety research into the real world. We aim to build awareness and establish guidelines for the safe and responsible deployment of AI.

Activities:

Raising public awareness of AI risks and safety
Providing technical expertise to inform policymaking at governmental bodies
Advising industry leaders on structures and practices to prioritize AI safety

CAIS Impact

 CAIS is accelerating research on AI safety and raising the profile of AI safety in public discussions. Here are some highlights from our work so far:

1 Global Statement on AI Risk signed by 600 leading AI researchers and public figures

100 AI safety researchers using CAIS' cutting-edge computing infrastructure

170 AI safety research papers produced across our programs

500 Students trained in AI safety

Over 500 machine learning researchers taking part in AI safety events

1,200 Submissions from over 70 teams to our AI safety research competition

Our Approach

 We systematically assess our projects so we can quickly scale what works and stop what doesn’t.

1. Prioritize

Prioritize by estimating the expected impact of each project.

2. Pilot

Pilot the top projects to a point where impact can be assessed.

3. Evaluate

Evaluate th

... (truncated, 4 KB total)
Resource ID: kb-cf6c0895df42bac5