Longterm Wiki

Could AI help bioterrorists unleash a new pandemic?

web

A 2024 Bulletin of the Atomic Scientists piece summarizing empirical research on AI biosecurity uplift risk; relevant to debates about AI capability thresholds, deployment safeguards, and biosecurity governance.

Metadata

Importance: 62/100 | news article | analysis

Summary

This Bulletin of the Atomic Scientists article covers research examining whether current AI systems provide meaningful 'uplift' to would-be bioterrorists seeking to create or deploy pandemic pathogens. The study suggests that as of early 2024, AI does not yet provide substantial additional capability beyond what is already accessible, though the risk trajectory warrants continued monitoring.

Key Points

  • Current AI systems do not yet provide significant 'uplift' to bad actors seeking to develop biological weapons or engineer pandemics.
  • The study evaluates AI's ability to assist with key steps in bioweapon development, finding existing knowledge gaps remain meaningful barriers.
  • Even without decisive uplift now, rapid AI capability growth makes biosecurity preparedness and AI access controls increasingly urgent.
  • The dual-use nature of AI in biology—beneficial for drug discovery but potentially dangerous—complicates straightforward policy responses.
  • Authors recommend ongoing red-teaming and evaluation of AI biosecurity risks as capabilities evolve.

Cited by 1 page

Page | Type | Quality
Bioweapons Risk | Risk | 91.0

Cached Content Preview

HTTP 200 | Fetched Apr 9, 2026 | 14 KB
Could AI help bioterrorists unleash a new pandemic? A new study suggests not yet - Bulletin of the Atomic Scientists 
By Matt Field | Article | January 25, 2024

Medical staff take care of a patient with COVID-19. Credit: Gustavo Basso via Wikimedia Commons. CC BY-SA 4.0.

 
 Could new AI technology help unleash a devastating pandemic? That’s a concern top government officials and tech leaders have raised in recent months. One study last summer found that students could use chatbots to gain the know-how to devise a bioweapon. The United Kingdom brought global political and tech leaders together last fall to underscore the need for AI safety regulation. And in the United States, the Biden administration unveiled a plan to probe how emerging AI systems might aid in bioweapons plots. But a new report suggests that the current crop of cutting-edge AI systems might not help malevolent actors launch an unconventional weapons attack as easily as is feared.

The new RAND Corporation report found that study participants who used an advanced AI model plus the internet fared no better in planning a biological weapons attack than those who relied solely on the internet, which is itself a key source of the information that systems like ChatGPT train on to rapidly produce cogent writing. The internet already contains plenty of useful information for bioterrorists. “You can imagine a lot of the things people might worry about may also just be on Wikipedia,” Christopher Mouton, a senior engineer at the RAND Corporation who co-authored the new report, said in an interview before its publication.

Mouton and his colleagues assembled 12 cells, each comprising three members, which were given 80 hours over seven weeks to develop plans based on one of four bioweapons attack scenarios. For example, one scenario involved a “fringe doomsday cult intent on global catastrophe.” Another posited a private military company seeking to aid an adversary’s conventional military operation. Some cells used AI, others only the internet. A group of experts then judged the plans these red teams devised. The judges were experts in biology or security; they weighed in on the biological and ope
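The study design described above is essentially a two-arm red-team comparison: cells of three, 80 hours each, four scenarios, AI-plus-internet versus internet-only, with plans rated by an expert panel. A minimal sketch of that structure, assuming hypothetical field names and illustrative placeholder scores (none of these numbers come from the actual RAND report):

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative model of the RAND red-team design described above.
# All scores passed in are placeholders, not results from the study.

@dataclass
class Cell:
    members: int         # each cell had three members
    hours: int           # 80 hours of effort over seven weeks
    scenario: str        # one of four attack scenarios
    used_ai: bool        # AI + internet arm vs. internet-only arm
    judged_score: float  # expert-panel rating of the cell's plan

SCENARIOS = [
    "fringe doomsday cult",
    "private military company",
    "scenario 3 (not named in the article)",
    "scenario 4 (not named in the article)",
]

def build_cells(ai_scores, web_scores):
    """Assemble cells for the two arms, cycling through scenarios."""
    cells = []
    for i, score in enumerate(ai_scores):
        cells.append(Cell(3, 80, SCENARIOS[i % 4], True, score))
    for i, score in enumerate(web_scores):
        cells.append(Cell(3, 80, SCENARIOS[i % 4], False, score))
    return cells

def arm_means(cells):
    """Mean judged plan quality for the AI arm and the internet-only arm."""
    ai = [c.judged_score for c in cells if c.used_ai]
    web = [c.judged_score for c in cells if not c.used_ai]
    return mean(ai), mean(web)
```

The headline finding corresponds to the two arm means coming out statistically indistinguishable; the sketch only shows the comparison structure, not the report's actual scoring rubric.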

... (truncated, 14 KB total)
Resource ID: a3cecbd6bf0ee45b | Stable ID: MjllN2RkMD