Cybersecurity experts push government and business to regulate agentic AI before attacks escalate
Agentic AI systems can scan for vulnerabilities across networks faster than humans ever could. They're also becoming the weapon of choice for cybercriminals. That dual-use dynamic is pushing cybersecurity experts to demand that policymakers act now on regulation.
A panel of experts convened by Harvard's Berkman Klein Center for Internet & Society agreed this month that business and government leaders need to establish rules for the technology before attacks cause widespread damage to personal data, the economy, and national security.
The warning comes as cyberattacks surge. IBM's 2026 study found a 44 percent year-over-year increase in attacks targeting public-facing software and systems, many of them AI-assisted. A November breach at Anthropic showed the real risk: attackers used their own AI models to find weak spots in the company's source code and expose its inner workings.
The asymmetry problem
James Mickens, Gordon McKay Professor of Computer Science at Harvard, framed the core challenge: "The bad people only have to win once, whereas the defenders have to win all the time."
Phishing has become particularly dangerous. Robert Knake, partner at Paladin Capital and former deputy national cyber director at the White House, noted that AI now eliminates the telltale signs that used to expose fraudulent emails. "A year ago, we still had email messages in our inbox that had misspellings that were not colloquial English, that were easy to identify if you were vigilant. Now, all those signals are gone," he said.
What regulation could look like
Knake argued for a federal approach that incentivizes security without stifling software development. He proposed creating a "safe harbor" where companies that follow basic practices - such as using current, secure versions of open-source packages - would not be held liable for breaches. Companies that skip these steps would face liability.
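By way of illustration, and not anything the panel prescribed, checking a dependency against a public vulnerability database is the kind of basic practice a safe harbor might require. The sketch below queries the OSV.dev API for a deliberately outdated package version; the package and version are illustrative.

```python
# Minimal sketch: check a pinned dependency against the OSV.dev
# vulnerability database (https://osv.dev). The package and version are
# illustrative; a real pipeline would iterate over an entire lockfile.
import json
import urllib.request

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV advisory IDs affecting this exact package version."""
    query = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.load(resp)
    return [vuln["id"] for vuln in body.get("vulns", [])]

if __name__ == "__main__":
    # requests 2.19.0 predates several published advisories
    ids = known_vulnerabilities("requests", "2.19.0")
    print("vulnerable" if ids else "clean", ids)
```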
But Mickens cautioned that regulation faces real obstacles. The threat model itself changes with AI: traditional security measures worked because they targeted known attack patterns, while agentic AI lets attackers issue novel commands and trick systems in unpredictable ways.
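A toy example makes the point. The signature filter below, with invented patterns and payloads, catches a known attack string but waves through a trivially reworded command with the same intent:

```python
# Toy illustration (signatures and payloads invented): a static blocklist
# catches known attack strings but not a novel rewording of the same intent.
import re

KNOWN_BAD = [
    re.compile(r"DROP\s+TABLE", re.IGNORECASE),
    re.compile(r"<script>", re.IGNORECASE),
]

def signature_filter(payload: str) -> bool:
    """True if the payload matches a known attack signature."""
    return any(sig.search(payload) for sig in KNOWN_BAD)

print(signature_filter("'; DROP TABLE users; --"))        # True: known pattern
print(signature_filter("'; TRUNCATE users CASCADE; --"))  # False: same intent, no signature
```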
Josephine Wolff, associate dean for research and professor of cybersecurity policy at the Fletcher School at Tufts University, highlighted another barrier: companies struggle to maintain accurate inventories of all code running on their networks. Without that visibility, detecting vulnerabilities becomes nearly impossible.
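Even the easy slice of that inventory problem takes deliberate effort. The sketch below, assuming a Python host, lists only the packages visible to one interpreter; a real software bill of materials would also have to cover operating-system packages, containers, firmware, and vendored code across every machine.

```python
# Minimal sketch: inventory the Python distributions visible to this
# interpreter. A full software bill of materials (SBOM) would also cover
# OS packages, containers, firmware, and vendored code on every host.
from importlib.metadata import distributions

def local_inventory() -> dict[str, str]:
    """Map each installed distribution name to its version."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in distributions()
        if dist.metadata["Name"]  # skip malformed metadata
    }

if __name__ == "__main__":
    for name, version in sorted(local_inventory().items()):
        print(f"{name}=={version}")
```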
Why private retaliation is a bad idea
Some in the cybersecurity world argue that hacked companies should be allowed to "hack back" against attackers. The experts rejected this outright.
Wolff warned that allowing private companies to conduct offensive cyber operations would likely escalate conflicts rather than resolve them. Large firms with legal teams might act carefully, but smaller companies would see an opportunity to target adversaries like North Korea without restraint. "The idea that you're going to bring in the private sector and have that lead to anything but greater chaos seems hopelessly optimistic to me," she said.
Mickens painted an even darker scenario: autonomous agentic firewalls that detect intrusions, trace attackers, and launch counteroffensives in real time. "That world very quickly degenerates into essentially high-frequency trading, except now in cybersecurity, where you just have a bunch of algorithms going back and forth and reacting to each other in very real time," he said.
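A toy simulation, with invented response rules, shows how quickly that dynamic runs away: two automated defenders that each counter a perceived intrusion slightly harder than the last never find a reason to stop.

```python
# Toy simulation (response rules invented for illustration): two autonomous
# "defensive" agents that each answer a perceived intrusion with a slightly
# stronger counteroffensive, producing runaway escalation.
def respond(incoming_severity: int) -> int:
    """Tit-for-tat-plus policy: counter slightly harder than the attack."""
    return incoming_severity + 1

severity = 1  # an initial low-grade probe
for round_number in range(1, 6):
    severity = respond(severity)  # agent A counters agent B
    severity = respond(severity)  # agent B counters agent A
    print(f"round {round_number}: severity {severity}")
# severity grows every round; no step in the loop ever de-escalates
```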
Identity verification as a partial answer
The panelists discussed digital identity systems as a way to combat AI-driven phishing. If people could verify that they're communicating with a real person, trust would increase and fraud would become harder.
Knake said this solves a 30-year-old problem: "We are going to have to know with certainty who we are dealing with, and that it is a real person if they are claiming to be a real person."
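Public-key signatures are one building block such a system could rest on. The sketch below, using the third-party Python cryptography package, shows the core guarantee: a message verifies only if the sender holds the matching private key. A deployed digital ID scheme would also need key issuance, revocation, and a trusted registry, none of which appears here.

```python
# Minimal sketch of one identity-verification building block: an Ed25519
# signature proves a message came from whoever holds the private key.
# Uses the third-party `cryptography` package; a real digital ID system
# also needs key issuance, revocation, and a trusted registry.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # stays with the sender
public_key = private_key.public_key()        # published for recipients

message = b"Wire $50,000 to the new supplier account."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)            # genuine message
    print("signature valid: sender holds the key")
except InvalidSignature:
    print("signature invalid")

try:
    public_key.verify(signature, message + b"!")     # tampered message
except InvalidSignature:
    print("tampered message rejected")
```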
Mickens noted a practical complication. Many people want to be identified as human without revealing their full identity - abuse survivors, runaways, or whistleblowers, for example. They need consistent pseudonyms tied to their actions, not their legal names. Any digital ID system would need to accommodate these scenarios.
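One illustrative approach, not something the panel endorsed: derive a stable handle from the legal identity with a keyed hash held by the ID provider, so actions link to the pseudonym while the name stays hidden. Production systems would reach for stronger tools such as blind signatures or anonymous credentials.

```python
# Minimal sketch (key handling illustrative only): derive a stable
# pseudonym from a legal identity with an HMAC, so actions link to the
# pseudonym while the underlying name stays with the ID provider.
# Production systems would use stronger schemes (blind signatures,
# anonymous credentials) rather than a single provider-held secret.
import hashlib
import hmac

PROVIDER_SECRET = b"replace-with-a-managed-secret"  # held only by the ID provider

def pseudonym(legal_identity: str) -> str:
    """Same input always yields the same handle; the handle reveals nothing."""
    digest = hmac.new(PROVIDER_SECRET, legal_identity.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

print(pseudonym("Jane Q. Doe"))   # stable across sessions
print(pseudonym("Jane Q. Doe"))   # identical handle
print(pseudonym("John R. Roe"))   # different handle, no linkage
```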
The window for action
Knake pointed to a near-term opportunity. AI systems can now monitor user behavior in real time and flag patterns that look like fraud. "We can do this. We just need to find the right market players who will make that investment and build that technology," he said.
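The statistical core of that kind of monitoring can be surprisingly small. The sketch below, with invented thresholds, flags any event that deviates sharply from a user's rolling baseline, such as a transfer amount far outside recent history.

```python
# Minimal sketch (thresholds invented): flag events that deviate sharply
# from a user's rolling behavioral baseline, e.g. transfer amounts.
from collections import deque
from statistics import mean, stdev

class BehaviorMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if this event looks anomalous for this user."""
        flagged = False
        if len(self.history) >= 10:  # need a baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                flagged = True
        self.history.append(value)
        return flagged

monitor = BehaviorMonitor()
for amount in [40, 55, 38, 60, 45, 52, 47, 41, 58, 49, 44, 5000]:
    if monitor.observe(amount):
        print(f"flag for review: ${amount}")  # flags the $5000 outlier
```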
The experts agreed that government and business have a narrow window to establish rules before agentic AI systems become too embedded in attack and defense operations to regulate effectively. The longer they wait, the harder course correction becomes.
For government professionals tasked with policy and cybersecurity oversight, understanding how agentic AI changes the threat model is no longer optional. See AI for Government and the AI Learning Path for Cybersecurity Analysts for deeper context.