Harvard Professor Warns of Growing Battle Between Tech Companies and Federal Government Over AI Control
Conflicts between the federal government and commercial AI companies are escalating. At stake is who controls increasingly powerful systems capable of identifying and exploiting software vulnerabilities across both private and public networks.
The tension came into sharp focus in late February when the Department of Defense formally designated AI company Anthropic a "supply chain risk," effectively barring it from Pentagon contracts. The move followed Anthropic CEO Dario Amodei's refusal to allow the company's models to be used for autonomous lethal attacks or mass surveillance targeting Americans. Anthropic sued the federal government in response.
Other disputes are emerging between private AI developers and government agencies over training data use and consumer privacy violations.
Jonathan Zittrain, a Harvard Law School professor and expert on internet history, sees parallels between today's AI conflicts and early internet governance battles. He argues that the current arrangement, in which government funds basic research while private companies commercialize the technology, creates fundamental problems.
Why This Matters for Legal Professionals
The liability question sits at the center of this conflict. Legal professionals working with AI need to understand how courts may eventually resolve who bears responsibility when AI systems produce harmful outputs.
Zittrain points out that AI companies are making arguments nearly identical to those internet platforms used in the 1990s: they claim to be neutral conduits not responsible for what their systems produce. But that argument has weakened as platforms became multi-billion-dollar businesses with the resources to moderate content.
"Today, when the major platforms are multi-billion-dollar businesses, that argument is harder to sustain," Zittrain said. "Platforms ought to be able to pay the costs of finding and limiting unlawful content."
AI companies face a different legal position than social media platforms. They cannot credibly claim their models produce third-party content when the outputs originate from systems they built and trained. This distinction could prove decisive in future liability cases.
The Software Security Problem
Anthropic's unreleased Mythos model presents a specific flashpoint. The company claims the system can identify and exploit critical software flaws across much of the internet at scale.
Zittrain said the broader issue predates AI: cybersecurity costs are not currently borne by the software companies that create vulnerabilities. "That's normally the kind of market failure government rises to address," he said. If Mythos or successor models expose widespread vulnerabilities, regulators may abandon their hands-off approach.
A coalition called Project Glasswing, launched in April by AWS, Anthropic, Apple, Google, Microsoft, and others, aims to secure critical software. The initiative signals that the private sector recognizes the stakes.
The Internet Precedent
Zittrain has written extensively about how the open internet, built on government-funded research but developed by private companies, avoided the gatekeeping that might otherwise have emerged. That openness created both enormous benefits and security risks.
AI development is following a similar path. The government funded foundational research, but private companies now control the most capable systems. Unlike highways, aviation, or broadcasting, technologies the government either built or heavily regulated, AI remains largely in private hands.
The question facing regulators and courts is whether that arrangement still serves the public interest as AI systems grow more powerful and consequential.
Professionals working with AI in government should expect this conflict to intensify as agencies seek greater control over systems that could affect national security and public safety.