Longterm Wiki

AI Safety for Everyone review

paper

Authors

Balint Gyevnar · Atoosa Kasirzadeh

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A 2025 paper from University of Edinburgh and CMU that challenges x-risk-centric narratives in AI safety discourse, relevant for understanding definitional debates and inclusivity issues within the AI safety research community.

Paper Details

Citations
10
Year
2025
Methodology
peer-reviewed
Categories
Nature Machine Intelligence

Metadata

Importance: 62/100 · arXiv preprint · analysis

Abstract

Recent discussions and research in AI safety have increasingly emphasized the deep connection between AI safety and existential risk from advanced AI systems, suggesting that work on AI safety necessarily entails serious consideration of potential existential threats. However, this framing has three potential drawbacks: it may exclude researchers and practitioners who are committed to AI safety but approach the field from different angles; it could lead the public to mistakenly view AI safety as focused solely on existential scenarios rather than addressing a wide spectrum of safety challenges; and it risks creating resistance to safety measures among those who disagree with predictions of existential AI risks. Through a systematic literature review of primarily peer-reviewed research, we find a vast array of concrete safety work that addresses immediate and practical concerns with current AI systems. This includes crucial areas like adversarial robustness and interpretability, highlighting how AI safety research naturally extends existing technological and systems safety concerns and practices. Our findings suggest the need for an epistemically inclusive and pluralistic conception of AI safety that can accommodate the full range of safety considerations, motivations, and perspectives that currently shape the field.

Summary

This paper argues against framing AI safety primarily through existential risk, conducting a systematic literature review to show the field encompasses diverse practical work on current system vulnerabilities. The authors contend that existential-risk-centric framing excludes researchers, misleads the public, and creates resistance to safety measures, advocating instead for an epistemically inclusive and pluralistic conception of AI safety.

Key Points

  • Overemphasizing existential risk framing may exclude safety researchers with different motivations and create public misconceptions about AI safety's scope.
  • Systematic literature review reveals AI safety encompasses adversarial robustness, interpretability, and other practical work on current systems.
  • AI safety research naturally extends traditional technological and systems safety practices rather than being solely a futurist concern.
  • The existential risk framing has ties to specific normative movements (rationalism, EA, longtermism), which may alienate mainstream researchers.
  • Authors advocate for a pluralistic conception of AI safety accommodating the full range of motivations, perspectives, and safety challenges.

Cited by 1 page

Page | Type | Quality
Corrigibility Failure | Risk | 62.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 75 KB
AI Safety for Everyone

Balint Gyevnar (balint.gyevnar@ed.ac.uk) and Atoosa Kasirzadeh (atoosa.kasirzadeh@gmail.com). These authors contributed equally to this work.

[1] School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom
[2] Departments of Philosophy & Software and Societal Systems, Carnegie Mellon University, Baker Hall 161, 5000 Forbes Avenue, Pittsburgh 15213, United States
 
 
 
Keywords: AI Safety, AI Ethics, Safe AI, AI Governance
 1 Introduction

 The rapid development and deployment of AI systems have made questions of safety increasingly urgent, demanding immediate attention from policymakers and governance bodies as they face critical decisions about regulatory frameworks, liability standards, and safety certification requirements for AI systems. Recent discourse has narrowed to focus primarily on AI safety as a project of minimizing existential risks from future advanced AI systems [1, 2, 3].¹ As

 ¹ The focus on existential risks from AI has been particularly connected to normative theories and movements such as rationalism, effective altruism, or longtermism [5, 6]. While these theories offer valuable perspectives on long-term challenges, their specific institutional articulations have faced substantial criticism [7, 8, 9].

... (truncated, 75 KB total)