Government's Absence Looms Over AI Debate at RSAC 2026
The Trump administration's decision to block federal agencies from attending the RSA Conference in San Francisco last month created a notable gap in what's traditionally been a crucial forum for public-private dialogue on cybersecurity. The Cybersecurity and Infrastructure Security Agency, a key participant in past years, was unable to attend due to both political pressure and a partial government shutdown.
The absence highlighted a broader pattern: the federal government is pulling back from the cybersecurity community just as businesses are racing ahead with artificial intelligence deployments that carry significant risks.
The AI Split: Speed vs. Oversight
Two competing visions of AI dominated conference discussions. Executive-level leaders expressed enthusiasm for agentic AI: autonomous systems that can analyze data, generate reports, and flag threats without human intervention. Some see the technology as a way to reduce labor costs while freeing analysts from repetitive work.
Security researchers and practitioners pushed back, warning that deploying AI agents without human oversight creates dangerous vulnerabilities. One researcher at the conference argued that AI-written YARA rules should be deleted immediately because they are unreliable. Another panel highlighted how AI-assisted coding tools are punching holes through years of security hardening, potentially setting defenses back a decade.
The most concrete example came from an Exabeam security leader who described an agentic AI system that independently flagged a North Korean malicious insider within hours of the person's first login on their first day of work. The system worked as intended. But such successes don't address the broader problem: most companies are deploying these tools without governance frameworks, privilege controls, or systematic human review.
The CVE Program Under Strain
AI is creating a secondary crisis for the vulnerability management system that underpins all cybersecurity work. The CVE program, which assigns unique identifiers to publicly disclosed vulnerabilities, is already underfunded and facing the potential loss of government support.
Now AI agents are flooding the system with vulnerability reports at a scale human reviewers can't handle. Many reports are low-quality or describe vulnerabilities that don't exist. A GitHub representative speaking at the conference described the volume as overwhelming; most submissions are what one analyst called "garbage."
The program was already struggling before AI accelerated the problem. European governments are building alternative vulnerability databases, signaling that the U.S. system may not survive in its current form.
Where Government Silence Matters Most
The absence of federal agencies meant no coordinated messaging on policy direction. Spyware regulation, AI governance, and public-private partnership opportunities all went unaddressed by government representatives.
People working at agencies told reporters they're "flying blind," lacking communication from leadership about strategy or next steps. Many experienced staff have left their positions. The vacuum creates uncertainty precisely when the private sector needs regulatory clarity to make responsible decisions about AI deployment.
Conference organizers chose "the power of community" as this year's theme, emphasizing the need for human involvement in security decisions. That message rings hollow without government participation. Building effective AI governance requires public-private collaboration, and that collaboration requires government to show up.
The Pressure to Move Fast
Despite warnings about vulnerabilities and risks, companies face mounting pressure to deploy AI quickly. Cost reduction and competitive advantage drive the push forward. Few organizations are pausing to ask whether they've implemented adequate safeguards.
One researcher expressed surprise at how cavalier companies were being, deploying AI coding assistants and agentic systems without acknowledging the attack surface they were creating. That attitude is unlikely to change without external pressure, whether from regulation or from visible consequences.
The conference revealed a market racing ahead while government retreats. For federal employees working on cybersecurity policy, the absence of leadership presence at RSAC signals a broader disengagement from the sector they're supposed to protect. That gap will have consequences.