Longterm Wiki

The AI and Biological Weapons Threat

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: RAND Corporation

A 2023 RAND empirical study directly relevant to catastrophic risk from AI misuse; provides early evidence on LLM dual-use risks in bioweapons contexts, informing debates about frontier model deployment safeguards and biosecurity policy.

Metadata

Importance: 72/100 · organizational report · primary source

Summary

This RAND Corporation report examines the misuse risks of large language models (LLMs) in biological weapons development through a red-team methodology. Preliminary findings show that while LLMs haven't provided explicit weapon-creation instructions, they do offer guidance useful for planning biological attacks, including agent selection and acquisition strategies. The authors caution that AI's rapid advancement may outpace regulatory oversight, closing historical information gaps that previously hindered bioweapon development.

Key Points

  • LLMs did not generate explicit bioweapon instructions but provided actionable planning guidance including agent identification and distribution strategies.
  • In a plague pandemic scenario, an LLM assessed obtaining and distributing Yersinia pestis while estimating variables affecting projected death tolls.
  • In a botulinum toxin scenario, an LLM suggested aerosol delivery methods and proposed cover stories for acquiring dangerous biological agents.
  • AI advancement may outpace regulatory oversight, closing information gaps that previously caused biological attacks to fail.
  • The full real-world operational impact of LLMs on bioweapon planning remains an open research question requiring further study.

Cited by 4 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 8 KB
The Operational Risks of AI in Large-Scale Biological Attacks: A Red-Team Approach | RAND
The Wayback Machine - http://web.archive.org/web/20260301055948/https://www.rand.org/pubs/research_reports/RRA2977-1.html

 
 

RR-A2977-1

 

 
 

 

 

In this report, the authors address the emerging issue of identifying and mitigating the risks posed by the misuse of artificial intelligence (AI)—specifically, large language models—in the context of biological attacks and present preliminary findings of their research. They find that while AI can generate concerning text, the operational impact is a subject for future research.

 

 

 

The Operational Risks of AI in Large-Scale Biological Attacks

A Red-Team Approach

Christopher A. Mouton, Caleb Lucas, Ella Guest

Research · Published Oct 16, 2023

 

 
 
 

 

 
 

 
 

 

 

 


The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including its potential to be applied in the development of advanced biological weapons. The speed at which AI technologies are evolving often surpasses the capacity of government regulatory oversight, leading

... (truncated, 8 KB total)
Resource ID: 73c1b835c41bcbdb | Stable ID: YmQyZDAyMz