Longterm Wiki

Misalignment or misuse? The AGI alignment tradeoff

paper

Authors

Max Hellrigel-Holderbaum · Leonard Dung

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

This paper analyzes the fundamental tension between AGI alignment and misuse risks, arguing that both pose severe catastrophic threats and examining how these risks relate to each other in the context of developing safe and beneficial artificial general intelligence.

Paper Details

Citations
3
0 influential
Year
2025

Metadata

arXiv preprint · primary source

Abstract

Creating systems that are aligned with our goals is seen as a leading approach to create safe and beneficial AI in both leading AI companies and the academic field of AI safety. We defend the view that misaligned AGI - future, generally intelligent (robotic) AI agents - poses catastrophic risks. At the same time, we support the view that aligned AGI creates a substantial risk of catastrophic misuse by humans. While both risks are severe and stand in tension with one another, we show that - in principle - there is room for alignment approaches which do not increase misuse risk. We then investigate how the tradeoff between misalignment and misuse looks empirically for different technical approaches to AI alignment. Here, we argue that many current alignment techniques and foreseeable improvements thereof plausibly increase risks of catastrophic misuse. Since the impacts of AI depend on the social context, we close by discussing important social factors and suggest that to reduce the risk of a misuse catastrophe due to aligned AGI, techniques such as robustness, AI control methods and especially good governance seem essential.

Summary

This paper examines the tension between two catastrophic risks posed by advanced AI: misalignment (AGI pursuing unintended goals) and misuse (humans weaponizing aligned AGI). The authors argue that while both risks are severe, alignment approaches need not, in principle, increase misuse risk. Examining current alignment techniques and foreseeable improvements of them, however, they argue that many plausibly do increase the potential for catastrophic misuse. The paper concludes that reducing misuse risk from aligned AGI requires complementary measures, including robustness, AI control methods, and strong governance frameworks, alongside traditional alignment work.

Cited by 1 page

Page: Agentic AI · Type: Capability · Quality: 68.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 5 KB
[2506.03755] Misalignment or misuse? The AGI alignment tradeoff

 Computer Science > Computers and Society

 

 
 arXiv:2506.03755 (cs)
 
 
 
 
 
 [Submitted on 4 Jun 2025] 
 Title: Misalignment or misuse? The AGI alignment tradeoff

 Authors: Max Hellrigel-Holderbaum, Leonard Dung

 

 
 
 
 Comments: Forthcoming in Philosophical Studies
 Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI)
 Cite as: arXiv:2506.03755 [cs.CY] (or arXiv:2506.03755v1 [cs.CY] for this version)
 DOI: https://doi.org/10.48550/arXiv.2506.03755
 
 
 
 
 
 
 Submission history

 From: Max Hellrigel-Holderbaum
 [v1] Wed, 4 Jun 2025 09:22:37 UTC (431 KB)

 
 
 
 
 
... (truncated, 5 KB total)
Resource ID: bb34533d462b5822 | Stable ID: YTcyZGIyOD