Longterm Wiki

Center for AI Safety (CAIS) – Homepage

web

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Center for AI Safety

CAIS is one of the leading AI safety research organizations; this homepage provides an entry point to their research, public statements, and field-building initiatives relevant to anyone working in or entering AI safety.

Metadata

Importance: 62/100 · homepage

Summary

The Center for AI Safety (CAIS) is a research organization focused on mitigating catastrophic and existential risks from advanced AI systems. It conducts technical research, publishes surveys and statements, and supports field-building efforts across academia and industry. CAIS is notable for its broad coalition-building, including its widely-cited statement on AI extinction risk signed by leading researchers.

Key Points

  • CAIS conducts technical and conceptual research on AI safety, covering topics like robustness, alignment, and systemic risk.
  • The organization published a landmark statement warning that mitigating AI extinction risk should be a global priority, signed by hundreds of AI experts.
  • CAIS supports field-building through fellowships, educational resources, and career transition programs for researchers entering AI safety.
  • Their work spans multiple domains including technical safety research, AI ethics, philosophy, and societal implications of advanced AI.
  • CAIS serves as a hub for coordinating safety-focused researchers and communicating risks to policymakers and the broader public.

Review

The Center for AI Safety (CAIS) plays a central role in addressing risks from advanced AI through comprehensive mitigation strategies. Its approach is distinctive in its multidisciplinary perspective, combining technical research with conceptual work across domains such as safety engineering, complex systems, international relations, and philosophy. CAIS's methodology involves creating foundational benchmarks, developing safety methods, and publishing accessible research that advances understanding of AI risks. By offering resources such as a compute cluster, a philosophy fellowship, and publicly available research, CAIS aims to build a robust ecosystem of AI safety researchers and raise awareness of the systemic risks associated with advanced AI technologies.

Cited by 27 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
Center for AI Safety (CAIS) 
 

 

 

 
 

 
 

 In contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. 

 Featured CAIS Work

 AI Safety Field-Building

 
 
 Featured ML Safety Infrastructure

 AI and Society Fellowship

 Applications close March 24.

 A three-month research program that investigates the societal impacts of advanced AI and the institutions and policies that could help societies respond well.

 Learn more 
 
 
 
 Philosophy Fellowship

 AI Safety, Ethics, & Society

 Training 1000+ future AI safety leaders

 The course offers a comprehensive introduction to how current AI systems work, their societal-scale risks, and how to manage them.

 Learn more 
 
 
 
 CAIS Compute Cluster

 Compute Cluster

 Enabling ML safety research at scale

 To support progress and innovation in AI safety, we offer researchers free access to our compute cluster, which can run and train large-scale AI systems.

 Learn more 
 
 See all work

 Dan Hendrycks

 Director, Center for AI Safety
PhD Computer Science, UC Berkeley

 "Preventing extreme risks from AI requires more than just technical work, so CAIS takes a multidisciplinary approach working across academic disciplines, public and private entities, and with the general public."

 Risks from AI

 Artificial Intelligence (AI) possesses the potential to benefit and advance society. Like any other powerful technology, AI also carries inherent risks, including some which are potentially catastrophic.

 Current AI Systems

 Current AI systems can already pass the bar exam, write code, fold proteins, and even explain humor.

 AI Safety

 As AI systems become more advanced and embedded in society, it becomes increasingly important to address and mitigate these risks. By prioritizing the development of safe and responsible AI practices, we can unlock the full potential of this technology for the benefit of humanity.

 AI risks overview Our Research

 We conduct impactful research aimed at improving the safety of AI systems.

 Technical Research

 At the Center for AI Safety, our research focuses exclusively on mitigating societal-scale risks posed by AI. As a technical research laboratory:

  • We create foundational benchmarks and methods which lay the groundwork for the scientific community to address these technical challenges.
  • We ensure our work is public and accessible: we publish in top ML conferences and always release our datasets and code.
 Conceptual Research

 In addition to our technical research, we also explore the less formalized aspects of AI safety. 

 We purs

... (truncated, 4 KB total)
Resource ID: a306e0b63bdedbd5 | Stable ID: sid_wf8L3fxlGI