At RSAC 2026, AI Dominance Clashes With Calls for Human Oversight
Artificial intelligence dominated discussions at this year's RSA Conference, but security operations leaders heard conflicting messages: some executives pushed for faster AI deployment with minimal human involvement, while researchers warned that uncontrolled systems could undo a decade of security improvements.
The conference theme, "The Power of Community," underscored the role of human judgment in AI decisions, yet the tension between speed and safety ran through nearly every session. More than two-thirds of the sessions touched on AI in some form.
The U.S. Government's Absence Signals Uncertainty
The U.S. federal government pulled out of the conference weeks before it began, leaving a notable gap in discussions about national cybersecurity strategy and AI governance. The European Union and other governments sent representatives to discuss their approaches, but attendees said the U.S. absence raised questions about government commitment to public-private partnerships.
The timing made matters worse: the federal government had recently released a cybersecurity strategy, yet chose not to detail its implementation at a venue where industry stakeholders gather annually to align on priorities. Researchers and vendors said they now operate without clear direction from federal agencies on critical issues such as spyware policy and AI oversight.
AI Coding Tools Are Creating Security Holes
Check Point researchers presented findings that AI coding assistants are punching holes through network defenses built over two decades. The tools give attackers a direct path from employee workstations to sensitive systems and development environments, routes that security teams spent years closing.
The researchers expressed surprise at how many organizations deployed these tools without evaluating the security trade-offs. Companies are racing ahead with AI adoption to reduce costs and increase efficiency, often without pausing to assess the expanded attack surface.
This pressure to move fast is real. Some executives argue that human oversight slows down AI deployment and defeats the purpose of automation. Security researchers counter that uncontrolled agentic systems create liability and operational risk.
SOC Leaders See Real Benefits, but Need Guardrails
The Exabeam CISO shared an example that illustrated AI's potential: an agentic system deployed in the company's security operations center identified a malicious insider on the employee's first day, flagging suspicious activity within hours of his first login. The system handled pattern recognition that would have taken human analysts far longer.
This points to where many conference discussions landed: AI works best when it handles high-volume, repetitive threat analysis while humans maintain oversight. SOC analysts are overworked. If AI can filter noise and surface actionable threats, that reduces burnout and improves response times.
The governance model that emerged emphasizes periodic human review of AI decisions. When an AI agent makes mistakes, such as miscategorizing alerts or mislabeling threats, human supervisors should catch those errors during routine checks. Nothing is infallible at scale, human or machine.
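To make that review loop concrete, here is a minimal sketch of sampling AI triage decisions for human audit. The `TriageDecision` fields, verdict labels, and thresholds are hypothetical illustrations, not drawn from any product discussed at the conference.

```python
import random
from dataclasses import dataclass

@dataclass
class TriageDecision:
    alert_id: str
    ai_verdict: str      # hypothetical labels: "benign", "suspicious", "malicious"
    confidence: float    # model-reported confidence, 0.0 to 1.0

def select_for_human_review(decisions, sample_rate=0.05, confidence_floor=0.7):
    """Pick AI triage decisions a human analyst should double-check.

    Two simple heuristics: always review low-confidence verdicts, and
    randomly sample a fixed fraction of the rest so that systematic
    errors in high-confidence output still surface during audits.
    """
    review_queue = []
    for d in decisions:
        if d.confidence < confidence_floor:
            review_queue.append(d)        # low confidence: always review
        elif random.random() < sample_rate:
            review_queue.append(d)        # spot-check confident verdicts
    return review_queue
```

The sample rate and confidence floor are tuning knobs, not fixed values; the point is that even high-confidence output gets spot-checked, which is what catches the systematic mislabeling the panelists described.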
The CVE Program Faces an AI Deluge
The Common Vulnerabilities and Exposures program is already struggling with funding and staffing. Now it faces another problem: AI agents are flooding the system with vulnerability reports, many of low quality or entirely fabricated.
A GitHub representative on a CVE panel said the volume of submissions has become staggering. AI agents competing on vulnerability-reporting leaderboards generate massive numbers of reports, most of which waste the time of program staff who must filter and classify them.
This creates a vicious cycle. The program already lacks resources. More AI-generated noise makes the backlog worse. Some in the security community are building alternatives, including systems developed by the European Union, because they question whether the current CVE infrastructure can sustain this pace.
Model Collapse Threatens Data Quality
Diana Kelley, CISO at Noma Security, discussed a separate risk: as AI models consume their own generated content, output quality degrades. If AI systems train on the outputs of other AI systems, the cycle produces increasingly poor results.
This connects to the conference's theme. Human expertise and judgment remain essential. Without human involvement in validating data and directing AI systems, organizations risk feeding garbage into systems that then produce more garbage.
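As one illustration of the kind of human-in-the-loop data validation being advocated, the sketch below filters a training set by provenance, dropping machine-generated samples unless a human has signed off. The `TrainingSample` structure and source labels are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class TrainingSample:
    text: str
    source: str            # hypothetical tags: "analyst_notes", "vendor_feed", "llm_output"
    human_validated: bool

def filter_training_set(samples, allow_synthetic=False):
    """Keep only samples fit for retraining, per a simple provenance policy.

    The policy is illustrative: drop anything machine-generated unless a
    human has validated it, to avoid the feedback loop in which models
    retrain on their own unreviewed output.
    """
    kept = []
    for s in samples:
        synthetic = s.source == "llm_output"
        if synthetic and not (allow_synthetic and s.human_validated):
            continue                     # unvalidated AI output: exclude
        kept.append(s)
    return kept
```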
What Operations Teams Should Watch
For security operations leaders, the conference revealed two competing pressures. Business leaders want faster threat detection and lower labor costs through AI automation. Security researchers want governance frameworks that prevent costly mistakes.
The practical answer is neither extreme. Organizations need AI for threat analysis and alert triage, but also structured oversight of AI agents. Human supervisors who periodically review AI decisions catch errors before they cascade.
The absence of federal guidance means operations teams should establish their own frameworks now. Define what tasks AI handles autonomously, which require human approval, and how frequently humans review system behavior. The conference made clear that organizations rushing to deploy AI without these boundaries are creating security problems, not solving them.
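One way to make those boundaries concrete is to encode them as an explicit, reviewable policy rather than leaving them to tribal knowledge. The sketch below is a hypothetical starting point; the task names, sample rate, and review cadence are placeholders each team would set for itself.

```python
# A hypothetical, minimal policy for agentic SOC tooling. Task names,
# thresholds, and cadence are illustrative, not prescriptive.
AI_OPERATIONS_POLICY = {
    "autonomous": [                 # AI may act without prior approval
        "alert_deduplication",
        "log_enrichment",
        "phishing_triage",
    ],
    "human_approval_required": [    # AI proposes, a human approves
        "host_isolation",
        "account_disablement",
        "firewall_rule_change",
    ],
    "review": {
        "sample_rate": 0.05,        # fraction of autonomous actions audited
        "cadence_hours": 24,        # how often supervisors run the audit
        "escalate_after_errors": 3, # consecutive errors that pause autonomy
    },
}

def requires_approval(task: str) -> bool:
    """Return True if the task must wait for a human decision."""
    return task in AI_OPERATIONS_POLICY["human_approval_required"]
```

Writing the policy down forces the hard conversation about which actions are reversible enough to automate, and it gives auditors something concrete to check AI behavior against.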