Microsoft's AI Finds Zero-Day Vulnerability in DNA Screening, Igniting a Biosecurity Arms Race
Microsoft researchers used AI to find a zero-day in DNA screening, digitally redesigning toxins to evade detection. Patches rolled out, but gaps remain, fueling an arms race.

A team at Microsoft has used artificial intelligence to discover a "zero-day" vulnerability in biosecurity screening systems, the digital gatekeepers meant to stop bad actors from ordering the genetic sequences for deadly toxins or pathogens.
Researchers led by Microsoft's chief scientist, Eric Horvitz, found a way to bypass these protections that was previously unknown. The team detailed its findings in the journal Science, signaling a new challenge at the intersection of AI and biology.
AI's Double-Edged Sword
The team focused on generative AI algorithms that propose new protein shapes. While these tools are accelerating the search for new medicines at companies like Generate Biomedicines and Google's Isomorphic Labs, they are also potentially "dual use."
An AI trained to generate beneficial molecules can also be prompted to create harmful ones. Recognizing this, Microsoft initiated a "red-teaming" exercise in 2023 to test if "adversarial AI protein design" could enable bioterrorism by helping create dangerous proteins.
How the System Was Bypassed
The core defense Microsoft attacked is biosecurity screening software. DNA synthesis vendors use this software to check incoming orders against databases of known threats. A close match triggers an alert, stopping the order.
Using several generative protein models, including its own called EvoDiff, the Microsoft team redesigned known toxins. They altered the protein sequences just enough that the DNA encoding them slipped past the screening software, while the proteins' lethal function was predicted to remain intact.
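The screening step, and why small sequence edits can undermine it, can be sketched with a toy k-mer comparison. Everything here is hypothetical for illustration (the sequences, function names, and threshold are invented); commercial screeners use far more sophisticated alignment pipelines against curated threat databases.

```python
def kmers(seq, k=6):
    """Return the set of all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order, threat_db, k=6, threshold=0.5):
    """Flag an order if it shares too many k-mers with any known threat."""
    order_kmers = kmers(order, k)
    for name, threat_seq in threat_db.items():
        threat_kmers = kmers(threat_seq, k)
        overlap = len(order_kmers & threat_kmers) / len(threat_kmers)
        if overlap >= threshold:
            return f"FLAGGED: close match to {name}"
    return "cleared"

# Invented example entry; real databases hold known toxin and pathogen genes.
threat_db = {"toxin-X": "ATGGCTAGCTTGACCGATCGGAAT"}

# An exact copy of a listed sequence is caught:
print(screen_order("ATGGCTAGCTTGACCGATCGGAAT", threat_db))  # FLAGGED: close match to toxin-X

# A handful of substitutions drops the shared-k-mer fraction below the
# threshold, so the order clears even though most of the sequence survives:
print(screen_order("ATGGCAAGCTTGACAGATCAGAAT", threat_db))  # cleared
```

The evasion reported by Microsoft is far subtler, using generative models to find variants predicted to keep their function, but the brittleness is the same in kind: any fixed similarity test leaves room for sequences that are functionally close yet textually distant.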
This exercise was entirely digital. The researchers confirmed they never produced any physical toxic proteins, avoiding any risk or perception of developing bioweapons.
A New Arms Race Begins
Before publishing, Microsoft alerted the US government and the relevant software makers. These companies have already deployed patches to their systems, but the fix is not perfect. Some AI-designed molecules can still evade detection.
"The patch is incomplete, and the state of the art is changing," said Adam Clore, director of technology R&D at Integrated DNA Technologies and a coauthor of the report. "This isn't a one-and-done thing. It's the start of even more testing. We're in something of an arms race."
Where to Build the Walls?
The discovery has ignited a debate over the best point of defense. Dean Ball, a fellow at the Foundation for American Innovation, argues the finding shows a "clear and urgent need for enhanced nucleic acid synthesis screening procedures" with strong enforcement.
However, others are skeptical. Michael Cohen, an AI-safety researcher at UC Berkeley, believes there will always be ways to disguise sequences. "The challenge appears weak, and their patched tools fail a lot," Cohen says. He argues security should be built into the AI systems themselves.
Clore counters that monitoring gene synthesis remains a practical choke point, as a few large companies dominate the industry and work with the government. In contrast, the technology to build and train AI models is widespread. "You can't put that genie back in the bottle," he says.