Longterm Wiki

Research published in 2025

paper

Authors

Jan Kulveit·Raymond Douglas·Nora Ammann·Deger Turan·David Krueger·David Duvenaud

Credibility Rating

3/5 — Good

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

This paper introduces the concept of 'gradual disempowerment' to analyze how incremental AI capability improvements can systematically undermine human agency over critical societal systems, offering an important counterpoint to catastrophic takeover scenarios in AI safety discourse.

Paper Details

Citations: 49 (2 influential)
Year: 2025

Metadata

arXiv preprint · primary source

Abstract

This paper examines the systemic risks posed by incremental advancements in artificial intelligence, developing the concept of 'gradual disempowerment', in contrast to the abrupt takeover scenarios commonly discussed in AI safety. We analyze how even incremental improvements in AI capabilities can undermine human influence over large-scale systems that society depends on, including the economy, culture, and nation-states. As AI increasingly replaces human labor and cognition in these domains, it can weaken both explicit human control mechanisms (like voting and consumer choice) and the implicit alignments with human interests that often arise from societal systems' reliance on human participation to function. Furthermore, to the extent that these systems incentivise outcomes that do not line up with human preferences, AIs may optimize for those outcomes more aggressively. These effects may be mutually reinforcing across different domains: economic power shapes cultural narratives and political decisions, while cultural shifts alter economic and political behavior. We argue that this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity. This suggests the need for both technical research and governance approaches that specifically address the risk of incremental erosion of human influence across interconnected societal systems.

Summary

This paper introduces the concept of 'gradual disempowerment' as a distinct AI safety concern, arguing that incremental improvements in AI capabilities—rather than sudden takeover scenarios—pose systemic risks to human influence over critical societal systems. As AI progressively replaces human labor and decision-making in economics, culture, and governance, it can erode both explicit control mechanisms (voting, consumer choice) and implicit human-aligned incentives that depend on human participation. The paper contends that misaligned AI optimization across interconnected domains could create mutually reinforcing feedback loops, potentially leading to irreversible loss of human agency and existential catastrophe. The authors call for technical and governance approaches specifically designed to address this incremental erosion of human influence.

Cited by 1 page

Page | Type | Quality
AI Value Lock-in | Risk | 64.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 98 KB
Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development
Jan Kulveit 1,* · Raymond Douglas 2,* · Nora Ammann 3,1 · Deger Turan 4,5 · David Krueger 6 · David Duvenaud 7

 
Affiliations: 1 ACS research group, CTS, Charles University; 2 Telic Research; 3 Advanced Research + Invention Agency (ARIA); 4 AI Objectives Institute; 5 Metaculus; 6 Mila, University of Montreal; 7 University of Toronto. * Equal contribution. Correspondence to jk@acsresearch.org
 
 Executive Summary

 
AI risk scenarios usually portray a relatively sudden loss of human control to AIs that outmaneuver individual humans and human institutions, following a sudden increase in AI capabilities or a coordinated betrayal.
However, we argue that even an incremental increase in AI capabilities, without any coordinated power-seeking, poses a substantial risk of eventual human disempowerment.
This loss of human influence will be centrally driven by having more competitive machine alternatives to humans in almost all societal functions, such as economic labor, decision making, artistic creation, and even companionship.

 

... (truncated, 98 KB total)