AI-Created Code Is Outpacing Security—And Breaches Are Spiking

AI coding boosts innovation but creates major security risks as traditional tools miss AI-specific threats like data poisoning. Updating security is critical to prevent breaches.

Categorized in: AI News, IT and Development
Published on: Jul 08, 2025

The dual reality of AI-augmented development: innovation and risk

AI coding is creating a major security challenge because most security teams still use tools built for a world where human-written code was the norm. The rise of AI in software development demands a fresh approach to security.

When JPMorgan Chase CISO Patrick Opet issued an open letter to software suppliers, it wasn’t just a warning — it was a call to action. The 2025 Verizon Data Breach Investigations Report reveals that 30% of breaches now involve third-party components, doubling from the previous year. At the same time, AI is generating a significant portion of code. For instance, Google reports that AI writes about 30% of its code today, yet many security tools remain stuck in the past, designed to handle only human-generated code. This gap isn’t minor — it’s a critical vulnerability.

Cause for concern

Large language models, machine learning, and generative AI are transforming software development by producing many applications businesses depend on daily. The AI coding market is projected to grow from $4 billion in 2024 to nearly $13 billion by 2028. This surge promises efficiency and innovation but also introduces new security challenges.

Every major shift in technology has brought risks that outpaced existing defenses. AI development is no different. AI coding assistants like GitHub Copilot, CodeGeeX, and Amazon Q Developer differ from human developers in key ways. They lack the experience, context, and judgment humans apply when writing secure code. Instead, these tools are trained on massive code repositories, some containing outdated or vulnerable code patterns.

This leads to AI-generated code that may inherit known vulnerabilities or deprecated encryption methods, increasing software supply chain risks. Traditional security tools — such as Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA) — focus on known vulnerability patterns and component versions. They weren't built to detect AI-specific threats like data poisoning attacks or memetic viruses, which can corrupt machine learning models and produce exploitable code.
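The kind of pattern matching these tools perform is easy to illustrate. Below is a minimal, illustrative Python sketch of a SAST-style check for deprecated cryptography in a generated snippet; the pattern list and the sample snippet are hypothetical, and real scanners work on parsed syntax trees rather than regexes.

```python
import re

# Deprecated or weak primitives that an AI assistant may reproduce from
# old training data. Illustrative list, not exhaustive.
WEAK_PATTERNS = {
    r"\bhashlib\.md5\b": "MD5 is broken for integrity/signatures",
    r"\bhashlib\.sha1\b": "SHA-1 collisions are practical",
    r"\brandom\.random\(": "not a cryptographic RNG; use secrets",
}

def scan_snippet(code: str) -> list[str]:
    """Return warnings for weak crypto patterns found in a code string."""
    findings = []
    for line_no, line in enumerate(code.splitlines(), start=1):
        for pattern, reason in WEAK_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {line_no}: {reason}")
    return findings

# Hypothetical AI-suggested snippet containing a deprecated hash.
generated = "import hashlib\ndigest = hashlib.md5(data).hexdigest()\n"
print(scan_snippet(generated))  # flags the MD5 usage on line 2
```

The point of the sketch is the limitation the article describes: a check like this only catches patterns someone already wrote down, so a poisoned model emitting a novel exploitable construct sails straight through.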

New AI security startups are emerging, but they face similar challenges, including limitations on file size and complexity. None can fully analyze AI models for all potential risks, such as malware insertion, tampering, or deserialization attacks.

Another blind spot is that traditional tools usually analyze code during development, not after compilation. This misses malicious changes introduced during build processes or by AI assistants. Examining compiled applications is now crucial to detect unauthorized or harmful inclusions.
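Even a crude post-build check adds signal here. The sketch below mimics the Unix `strings` utility in Python and flags embedded URLs whose host is not on an allowlist; the allowlist and sample binary are hypothetical, and real binary analysis goes much deeper, but unexpected network endpoints inside a compiled artifact are a classic tamper indicator.

```python
import re

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical allowlist for this app

def extract_strings(blob: bytes, min_len: int = 6) -> list[str]:
    """Pull printable ASCII runs out of a compiled binary, like `strings`."""
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, blob)]

def unexpected_urls(blob: bytes) -> list[str]:
    """Flag embedded URLs whose host is not on the allowlist."""
    flagged = []
    for s in extract_strings(blob):
        for m in re.finditer(r"https?://([^/\s\"']+)", s):
            if m.group(1) not in ALLOWED_HOSTS:
                flagged.append(m.group())
    return flagged

# Hypothetical compiled artifact: one legitimate endpoint, one injected
# during the build by a compromised step.
binary = b"\x00pad...https://exfil.attacker.net/up\x00https://api.example.com/v1\x00"
print(unexpected_urls(binary))  # only the non-allowlisted host is flagged
```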

What next?

As AI coding tools become more common, security strategies must evolve. AI models can be gigabytes in size and generate complex file types that traditional tools can’t handle. Effective security must include:

  • Verifying the provenance and integrity of AI models used in development
  • Validating the security of AI-suggested components and code
  • Examining compiled applications to spot unexpected or unauthorized additions
  • Monitoring for data poisoning that could compromise AI systems
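The first item, model provenance and integrity, can start with something as simple as checksum verification against a digest the model provider publishes out of band (release notes, a signed manifest). A minimal sketch, assuming a SHA-256 digest is available; the file name and contents here are hypothetical stand-ins:

```python
import hashlib
from pathlib import Path

def verify_model(path: Path, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Stream-hash a (potentially multi-gigabyte) model file in chunks and
    compare against the digest published by the model provider."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()

# Demonstration with a stand-in file; the expected digest must come from a
# trusted channel, never alongside the artifact itself.
model = Path("model.safetensors")
model.write_bytes(b"fake model weights for demonstration")
expected = hashlib.sha256(b"fake model weights for demonstration").hexdigest()
print(verify_model(model, expected))  # True only if the artifact is untampered
```

A checksum proves the bytes match what the publisher shipped; it says nothing about whether the model itself was poisoned upstream, which is why the monitoring item in the list above is a separate control.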

The integration of AI in software development is inevitable. Software providers and security teams must rise to meet the new supply chain threats AI introduces. Organizations that update their security approaches to include comprehensive software supply chain analysis — from massive AI models to compiled applications — will be the ones to succeed. Those that don’t risk becoming next year’s breach statistics.

