Longterm Wiki

AI-Assisted Bioterrorism Is Top Concern for OpenAI and Anthropic


Relevant to discussions of catastrophic risk evaluation and responsible deployment policies at frontier AI labs; illustrates how biosecurity has become a key focus of AI safety red-teaming efforts.

Metadata

Importance: 62/100 · news article · news

Summary

A Semafor news article reporting on concerns from OpenAI and Anthropic that AI systems could assist malicious actors in developing bioweapons, drawing on findings from Gryphon Scientific's risk assessments. The piece highlights how frontier AI labs are prioritizing biosecurity as a critical safety concern in their red-teaming and deployment policies.

Key Points

  • OpenAI and Anthropic identify AI-assisted bioterrorism as among the most serious near-term risks from advanced AI systems.
  • Gryphon Scientific, a biosecurity consultancy, conducted assessments on how AI could lower barriers to bioweapon development.
  • AI models may provide 'uplift' to bad actors by filling in knowledge gaps that previously required specialized expertise.
  • Labs are implementing safeguards and evaluations specifically targeting biological weapons-related queries.
  • The concern reflects broader dual-use dilemmas where beneficial AI capabilities in biology could be misused.

Cited by 1 page

Page | Type | Quality
Bioweapons Risk | Risk | 91.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 7 KB
AI-assisted bioterrorism is top concern for OpenAI and Anthropic | Semafor

 Why AI-assisted bioterrorism became a top concern for OpenAI and Anthropic

 Louise Matsakis, Former Deputy News Editor | Nov 15, 2023, 2:10pm EST | Technology

 In this article:

 The Scene

 Louise’s view

 Room for Disagreement

 Notable

 The Scene

 In the spring of 1995, U.S. lawmakers were becoming concerned that material uploaded to the nascent internet might pose a threat to national security. The Oklahoma City bombing had happened several weeks earlier, drawing attention to publications circulating online like The Big Book of Mischief, which included instructions on how to build homemade explosives.

 Worried the information could be used to orchestrate another attack, then-Senator Dianne Feinstein pushed to make publishing bomb recipes on the internet illegal. The effort sparked a national debate about “Open Access vs. Censorship,” as one newspaper headline put it at the time.

 Nearly 30 years later, a similar debate is now unfolding about artificial intelligence. Rather than DIY explosives, some U.S. officials and leading AI companies say they are increasingly worried that large language models could be used to develop biological weapons. The possibility has been repeatedly cited as one reason to be cautious about making AI systems open source.

 In a speech earlier this month, Vice President Kamala Harris invoked the specter of “AI-formulated bioweapons that could endanger the lives of millions.” She made the remarks two days after the White House issued an executive order on AI that instructs the federal government to create guardrails on using the technology to engineer dangerous biological materials.

 Dario Amodei, the CEO of AI startup Anthropic, similarly told Congress in July that AI could be misused in the near future to “cause large-scale destruction, particularly in the domain of biology.” His warning echoed concerns raised by OpenAI, think tanks such as the RAND Corporation, and an Oxford University researcher who claimed that “ChatGPT could make bioterrorism horrifyingly easy.”

 But unlike the homemade bombs Congress was worried about in the 1990s, which had already killed hundreds of people, the idea that AI would make it easier to build a biological weapon remains hypothetical. Some biosecurity experts argue that the complexity of engineering deadly pathogens is being underestimated, even with a powerful AI tool to help.

 “With new technologies, we tend to project in the future as though their development was linear and straightforward, and we never take into consideration the challenges and the contingencies of the people using them,” said Sonia Ben Ouagrham-Gormley, an 

... (truncated, 7 KB total)
Resource ID: c5bed41f6d28d09e | Stable ID: MGFiMDU2ND