AI Opens Door to New Biosecurity Threats and Bioweapon Risks

AI lowers barriers to creating biological threats by making complex pathogen data accessible. Experts warn this raises risks of both deliberate and accidental misuse.

Published on: Aug 25, 2025

AI’s Role in Increasing Biosecurity Threats

Artificial intelligence is accelerating scientific advances, but it also lowers the barriers to creating biosecurity threats. Experts warn that AI's growing capabilities make it easier to design harmful biological agents such as viruses and toxins.

While biosecurity has gained attention since events like the COVID-19 pandemic and the 2001 anthrax attacks, recent AI developments have dramatically increased access to detailed information on dangerous pathogens. According to Lucas Hansen, cofounder of the AI education nonprofit CivAI, AI has expanded the number of people who could potentially create biological weapons.

Two Main AI Threat Vectors

Hansen identifies two primary ways AI can be misused in biosecurity:

  • Engineering new bioweapons: Using AI to design entirely new viruses or toxins.
  • Accessing existing information: Making detailed knowledge about known harmful pathogens and toxins more accessible and easier to understand.

For example, scientific literature on viruses like polio exists, but synthesizing it into practical steps was historically complex. Now, advanced AI models such as Claude and ChatGPT can interpret and fill information gaps, providing step-by-step instructions that previously required expert-level knowledge.

Risks of Accidental Misuse

Paromita Pain, associate professor at the University of Nevada, Reno, highlights risks beyond deliberate attacks. Increased accessibility to sensitive biological data and tools by untrained individuals could lead to accidental creation or release of pathogens. She compares it to “letting loose teenagers in the lab” — not out of malice, but due to lack of awareness of safety protocols.

How AI Facilitates Bio-Threat Creation

CivAI demonstrates how current AI models, even with built-in safeguards, can be bypassed or “jailbroken” into providing detailed instructions for creating bio-threats. Hansen reports that when prompted, a jailbroken version of Claude Sonnet 4 furnished a 13-step guide for recreating the polio virus, including how to source materials online.

The AI supplements scientific papers with accessible explanations, lowering the expertise required. While recreating such viruses still demands specialized equipment and rare materials, the core knowledge is no longer confined to experts. Neil Sahota, AI advisor to the United Nations, says AI has shifted bioengineering from a Ph.D.-level discipline toward something an ambitious high school student could attempt with the right tools.

CivAI estimates that since 2022, the global pool of people capable of recreating viruses like polio has grown from approximately 30,000 to 200,000, with projections reaching 1.5 million by 2028. Multilingual capabilities in AI models broaden this reach further by removing language barriers.

Government Responses and Regulatory Challenges

Both the Biden and Trump administrations have recognized AI’s potential biosecurity risks. Biden’s October 2023 executive order called for auditing AI capabilities for harms, including biosecurity threats. Trump’s AI Action Plan proposed requiring customer verification at federally funded scientific institutions and enhancing data sharing among nucleic acid synthesis providers to detect malicious activity.

California’s SB 1047 aimed to regulate frontier AI models to prevent misuse, including the development of weapons of mass destruction. However, Governor Gavin Newsom vetoed the bill, citing concerns that it might stifle innovation.

Experts acknowledge the challenge in regulating AI bioengineering tools. These technologies hold promise for vaccine development and genetic research but can also be misused. As Sahota points out, AI itself is neutral; the risk lies in how it’s used.

Currently, international frameworks for sharing biological data and governing AI’s use in biosecurity are limited. Accountability among AI developers, biologists, publishers, and governments remains unclear.

Experts worry that meaningful regulation may only follow a high-profile bio-attack involving AI. Hansen warns that nihilistic individuals, inspired by past mass shootings, might turn to bioweapons, potentially normalizing their use and triggering copycat incidents.

Conclusion

Artificial intelligence is rapidly changing the landscape of biosecurity. It democratizes access to complex biological information, lowering technical barriers to creating biological threats. While the technology holds immense potential in medicine and research, its misuse poses grave risks.

The challenge lies in balancing innovation with safety through effective regulation, international cooperation, and increased awareness among scientific and security communities. For professionals in science and research, staying informed and involved in biosecurity discussions is crucial to mitigating emerging threats.


