Agentic AI speeds up code development but strains traditional security practices, Sonar VP says

AI agents that write and test code at machine speed are outpacing traditional security reviews. Experts say validation must move into the code generation process itself, catching flaws before they compound.

Published on: May 05, 2026

Agentic AI Demands a Rethinking of Code Security

AI agents that autonomously write, test, and iterate on code are accelerating software development. This speed introduces a critical problem: traditional security practices can no longer keep pace with machine-generated output.

Jeremy Katz, VP of Code Security at Sonar, outlined the shift in how development workflows operate. Instead of developers writing code line by line, AI agents receive high-level objectives and execute them end to end: generating code, creating tests, running them, and iterating on the results.
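The generate-test-iterate loop described above can be sketched in a few lines. Here `generate` and `test` are stand-in callables for whatever model and test harness an agent framework actually wires in, so this is an illustrative skeleton under those assumptions, not any specific product's implementation:

```python
def agent_loop(objective, generate, test, max_iterations=5):
    """Generate code for an objective, run tests, and feed failures
    back into the next generation attempt until the tests pass."""
    feedback = None
    for attempt in range(1, max_iterations + 1):
        code = generate(objective, feedback)
        passed, feedback = test(code)
        if passed:
            return code, attempt
    raise RuntimeError(f"no passing solution after {max_iterations} iterations")


# Deterministic stubs standing in for a real model and test runner:
# this fake generator "succeeds" on its third attempt.
_attempts = {"n": 0}

def fake_generate(objective, feedback):
    _attempts["n"] += 1
    return f"solution v{_attempts['n']}"

def fake_test(code):
    passed = code == "solution v3"
    return passed, None if passed else "tests failed"


code, iterations = agent_loop("sort a list", fake_generate, fake_test)
```

The loop makes the structural point concrete: the human supplies the objective and the acceptance criteria, while the agent owns the write-run-revise cycle in between.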

The efficiency gains are real. Teams move from concept to working output in significantly shorter cycles. The tradeoff is structural: developers transition from direct creators to reviewers and guides, a change that fundamentally alters how oversight and validation work.

The Security Risk of Unchallenged Assumptions

When code is generated at machine speed, developers lose granular understanding of implementation details. AI systems handle the execution, but they also propagate initial assumptions rigidly throughout the codebase.

Unlike human developers, who naturally question and revise their work, AI agents follow their initial assumptions without challenge. Errors introduced early compound as the system builds on faulty foundations. By the time traditional security scanning catches these issues in continuous integration, vulnerabilities are deeply embedded and expensive to fix.

Manual code review cannot scale to match machine output. The volume alone makes human-only oversight impractical.

Moving Validation Earlier in Development

Organizations need to embed security checks directly into the code generation process itself, inside what practitioners call the "inner loop."

This means real-time validation as code is written, not afterward. Automated guardrails and deterministic verification systems evaluate generated code against predefined security and quality standards. Hardcoded secrets, insecure dependencies, and policy violations get caught at the point of creation.
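As an illustration of point-of-creation checks, the sketch below rejects generated code that contains secret-like strings before it is accepted into the codebase. The regex patterns and function names are this example's own assumptions, kept deliberately simple; production scanners use far richer rule sets:

```python
import re

# Illustrative patterns for secret-like material in source code.
SECRET_PATTERNS = [
    # Assignments such as api_key = "..." / password = '...'
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]+['\"]"),
    # Strings shaped like an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]


def violates_policy(code: str) -> list[str]:
    """Return human-readable findings for any secret-like patterns,
    so a guardrail can block the code before it is committed."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"line {lineno}: possible hardcoded secret")
    return findings


# Reading a credential from the environment passes the check;
# a hardcoded key is flagged at the point of creation.
clean = "key = os.environ['API_KEY']"
flagged = 'api_key = "sk-12345abcdef"'
```

Running a check like this on every generated diff, rather than once per CI run, is what moves validation into the inner loop: the agent gets the failure as immediate feedback and can regenerate before the flaw propagates.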

Technologies like automated quality gates and policy enforcement tools define what acceptable code looks like. Consistent, scalable validation replaces reliance on human judgment alone.

Security Teams Must Become Enablers, Not Gatekeepers

Resisting AI adoption is not viable. Organizational pressure for speed and efficiency will continue to intensify. Security teams that block adoption will lose influence.

The effective path is different: security teams should design guardrails, define standards, and embed controls into workflows. They become enablers of secure innovation rather than obstacles to it.

Developers take on greater responsibility for security. Shared standards and automated systems ensure consistency. AI agents become collaborators that augment human capabilities rather than replace them.

The Convergence Ahead

The boundaries between development, security, and operations will continue blurring. Success depends on how effectively organizations adapt processes, embrace automation, and establish clear standards for secure software development.

For development teams, this means building new skills around AI-driven workflows. Consider exploring AI for Software Developers or AI Coding Courses to understand how to work effectively with agentic systems while maintaining security standards.

