Longterm Wiki

The Malicious Use of AI - Future of Humanity Institute

paper

Authors

Miles Brundage·Shahar Avin·Jack Clark·Helen Toner·Peter Eckersley·Ben Garfinkel·Allan Dafoe·Paul Scharre·Thomas Zeitzoff·Bobby Filar·Hyrum Anderson·Heather Roff·Gregory C. Allen·Jacob Steinhardt·Carrick Flynn·Seán Ó hÉigeartaigh·SJ Beard·Haydn Belfield·Sebastian Farquhar·Clare Lyle·Rebecca Crootof·Owain Evans·Michael Page·Joanna Bryson·Roman Yampolskiy·Dario Amodei

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Co-authored by 26 researchers from FHI, CSER, and OpenAI, this 2018 report is one of the most widely cited works on AI misuse risks and helped establish dual-use governance as a serious field of inquiry within AI safety.

Paper Details

Citations
855
39 influential
Year
2018

Metadata

Importance: 85/100 · arXiv preprint · primary source

Abstract

This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.

Summary

A landmark 2018 report from the Future of Humanity Institute, the Centre for the Study of Existential Risk, and OpenAI analyzing how malicious actors could misuse AI across the digital, physical, and political domains. It forecasts threats emerging over the subsequent 5–10 years and proposes recommendations for researchers, policymakers, and industry to mitigate dual-use risks. The report is widely cited as a foundational framework for thinking about AI misuse and governance.

Key Points

  • Identifies three primary threat domains: digital security (cyberattacks, malware), physical security (autonomous weapons, drones), and political security (disinformation, manipulation).
  • Argues AI lowers the cost and expertise barrier for malicious actors, enabling attacks at unprecedented scale and speed.
  • Calls on AI researchers to treat dual-use concerns as a core professional responsibility, similar to biosecurity norms in biology.
  • Recommends red-teaming, restricted publication norms, and closer collaboration between AI researchers and security communities.
  • Highlights that AI-enabled disinformation and synthetic media (deepfakes) pose novel and underappreciated political risks.

Cited by 1 page

Page              Type  Quality
AI Proliferation  Risk  60.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 5 KB
[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation 
Computer Science > Artificial Intelligence

arXiv:1802.07228 (cs)

[Submitted on 20 Feb 2018 (v1), last revised 1 Dec 2024 (this version, v2)]
 Title: The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

Authors: Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, SJ Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei

 
 
Subjects: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Computers and Society (cs.CY)

Cite as: arXiv:1802.07228 [cs.AI] (or arXiv:1802.07228v2 [cs.AI] for this version)

https://doi.org/10.48550/arXiv.1802.07228
 
 Submission history

From: Miles Brundage
[v1] Tue, 20 Feb 2018 18:07:50 UTC (1,400 KB)
[v2] Sun, 1 Dec 2024 17:59:04 UTC (1,400 KB)
... (truncated, 5 KB total)
Resource ID: 14e0d91b4194cd13 | Stable ID: sid_3s7yV76l9R