Longterm Wiki

AI Race (AI Safety, Ethics, and Society, Section 1.3)

web

Section 1.3 of Dan Hendrycks' open-access AI safety textbook; a foundational introduction to how competitive dynamics between states and corporations contribute to catastrophic AI risk scenarios.

Metadata

Importance: 62/100 · book chapter · educational

Summary

This chapter from the Center for AI Safety (CAIS) textbook 'Introduction to AI Safety, Ethics and Society' covers competitive AI race dynamics: military AI arms races (lethal autonomous weapons, cyberwarfare), corporate races in which economic competition undercuts safety work, and evolutionary pressures that favor unsafe AI development. It examines how these competitive pressures between states and corporations can lead to catastrophic outcomes.

Key Points

  • Military AI arms races risk catastrophic outcomes, including potential use of lethal autonomous weapons and AI-enabled cyberwarfare at unprecedented scale.
  • Nations may rationally risk extinction-level escalation rather than accept individual strategic defeat, creating dangerous incentive structures.
  • Corporate AI competition creates pressure to deprioritize safety in favor of speed, creating systemic risks across the industry.
  • Automated economies driven by AI could produce destabilizing concentrations of power or rapid disruptive transitions.
  • Evolutionary pressures may systematically favor AI systems and organizations that cut safety corners, independent of deliberate choices.
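The last point echoes the textbook's evolutionary-pressures argument (covered in Section 7.5). As a rough illustration not taken from the chapter, a minimal replicator-dynamics sketch shows how even a modest fitness advantage for developers who cut safety corners can steadily erode the population share of cautious developers; all payoff numbers here are hypothetical:

```python
# Illustrative sketch (hypothetical payoffs, not from the textbook):
# replicator dynamics for two developer strategies, "cautious" vs. "reckless",
# where reckless developers gain a small fitness edge from faster deployment.

def replicator_step(share_cautious, fit_cautious=1.0, fit_reckless=1.2, dt=0.1):
    """One Euler step of the replicator equation dx/dt = x(1-x)(f_c - f_r)."""
    x = share_cautious
    return x + dt * x * (1 - x) * (fit_cautious - fit_reckless)

x = 0.9  # start with 90% of developers behaving cautiously
for _ in range(200):
    x = replicator_step(x)

# The cautious share declines toward a minority even though no individual
# actor deliberately chose an unsafe ecosystem.
print(round(x, 3))
```

The point of the sketch is structural: selection acts on the fitness gap, so the cautious strategy loses ground regardless of anyone's intentions, which is the sense in which the risk is "independent of deliberate choices."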

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 4 KB
1.3: AI Race | AI Safety, Ethics, and Society Textbook

 Competitive pressures may lead militaries and corporations to hand over excessive power to AI systems. This could result in increased risks of large-scale wars, mass unemployment, and eventual loss of human control of economies and military systems.

Review Questions

 What are two reasons automated warfare could increase the likelihood of military conflicts?


 Answer:

 Leaders face less scrutiny over military operations since they don't have to risk soldiers' lives. Systems set up for automatic retaliation could escalate accidents into full-blown wars.

 How might competitive pressures undermine AI safety in corporations?


 Answer:

 Corporations racing to release products first may cut corners on safety testing and training. This could lead to unsafe AI systems being deployed.

 What is one reason natural selection may 

... (truncated, 4 KB total)
Resource ID: 28cf9e30851a7bc2 | Stable ID: sid_AEsesdgSEy