Georgia Tech finds 74 vulnerabilities traced to AI coding tools including Copilot, Claude, and Gemini

Georgia Tech researchers scanned more than 43,000 public security advisories and confirmed 74 vulnerabilities introduced by AI coding tools in production repositories. Claude, Gemini, and GitHub Copilot were linked to 14 critical and 25 high-severity flaws.

Published on: Apr 26, 2026

Georgia Tech Researchers Identify 74 Security Vulnerabilities Introduced by AI Code Generation Tools

Researchers at Georgia Tech's Systems Software & Security Lab scanned over 43,000 public security advisories and confirmed 74 cases where generative code tools introduced vulnerabilities into production repositories. The findings include 14 critical and 25 high-severity issues traced to Claude, Gemini, and GitHub Copilot.

The research turns a widespread concern into measurable evidence: AI-assisted coding can introduce security defects that repeat systematically at scale. Because millions of developers rely on the same underlying models, a single exploitable pattern found in one tool's output can be scanned for, and exploited, across many repositories.

How Researchers Traced AI-Generated Vulnerabilities

The team built a detection pipeline that correlates vulnerability entries with git histories to identify which commit introduced each flaw. The system relies on metadata signals including co-author tags, bot emails, and tool-specific signatures to attribute code to AI assistants.
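
A minimal sketch of what that metadata matching can look like, assuming illustrative signature strings (the study's actual signature list and correlation pipeline are not reproduced here):

```python
import subprocess

# Illustrative attribution signatures -- assumptions, not the study's list.
AI_SIGNATURES = (
    "copilot",                   # bot account names or emails
    "noreply@anthropic.com",     # co-author email used by some Claude tooling
    "gemini",                    # Gemini-related bot identities
)

def flagged_commits(repo_path: str):
    """Yield (sha, signature) for commits whose author line or message
    body (including Co-authored-by trailers) matches a known signature."""
    log = subprocess.check_output(
        ["git", "-C", repo_path, "log",
         "--format=%H%x1f%an <%ae>%x1f%B%x1e"],  # \x1f/\x1e as field/record separators
        text=True,
    )
    for record in log.split("\x1e"):
        if not record.strip():
            continue
        sha, author, body = record.strip().split("\x1f", 2)
        haystack = f"{author}\n{body}".lower()
        for sig in AI_SIGNATURES:
            if sig in haystack:
                yield sha, sig
                break
```

A full pipeline would then intersect these commits with the ones each advisory identifies as introducing the flaw, for example via `git blame` on the patched lines.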

Three vulnerability classes appeared repeatedly across the flagged cases (an illustrative command-injection example follows the list):

  • Command injection
  • Authentication bypass
  • Server-side request forgery (SSRF)

The pattern suggests generative models tend to repeat the same insecure constructs. When millions of developers use identical models and prompts, model-level defects propagate across projects simultaneously.

Detection Gaps and Next Steps

The current metadata-based approach misses cases where developers sanitized or edited commits after generation, removing tool signatures. Researchers plan to build behavioral detectors that identify AI-written code from variable naming conventions, function structure, error handling patterns, and stylistic fingerprints.
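
The planned detectors are not described in detail; a toy feature extractor built on those signal types (hypothetical features, chosen here purely for illustration) might look like the following, with the resulting vector feeding a classifier trained on labeled AI and human commits:

```python
import ast
import re

def style_features(source: str) -> dict:
    """Hypothetical stylistic fingerprint for Python source, covering the
    signal types above: naming, function structure, and error handling."""
    tree = ast.parse(source)
    names = [n.id for n in ast.walk(tree) if isinstance(n, ast.Name)]
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    handlers = [n for n in ast.walk(tree) if isinstance(n, ast.ExceptHandler)]
    return {
        # naming conventions
        "avg_name_len": sum(map(len, names)) / max(len(names), 1),
        "snake_case_ratio": sum(
            bool(re.fullmatch(r"[a-z_][a-z0-9_]*", n)) for n in names
        ) / max(len(names), 1),
        # function structure
        "docstring_ratio": sum(
            ast.get_docstring(f) is not None for f in funcs
        ) / max(len(funcs), 1),
        # error-handling patterns
        "bare_except_ratio": sum(
            h.type is None for h in handlers
        ) / max(len(handlers), 1),
    }
```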

The team is also expanding verification pipelines and ingesting additional vulnerability databases to reduce sampling bias in future analysis.

What Security Teams Should Do Now

Practitioners should prioritize scanning repositories for model-derived vulnerability patterns. Security teams need to push vendors for safer generation defaults, hardened code templates, and provenance metadata that persists through commits.

Integrating AI-origin flags into software composition analysis (SCA) tools and continuous integration pipelines will help teams identify and remediate model-derived risks before code reaches production.
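
No standard AI-origin flag exists in today's SCA tools, so as an assumption-laden sketch, a CI step could reuse metadata signatures like those above to route suspect changes into deeper scanning (the `SIGNATURES` strings and default revision range are placeholders):

```python
import subprocess
import sys

SIGNATURES = ("copilot", "noreply@anthropic.com", "gemini")  # illustrative only

def main(rev_range: str = "origin/main..HEAD") -> int:
    """CI gate sketch: exit nonzero if any commit in the range carries an
    AI-attribution marker, so the pipeline can require extra security scans."""
    log = subprocess.check_output(
        ["git", "log", "--format=%an <%ae>%n%B", rev_range], text=True
    ).lower()
    hits = [s for s in SIGNATURES if s in log]
    if hits:
        print(f"AI-origin markers found ({', '.join(hits)}); "
              "routing change set to security scanning.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```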

