The 2025 State of Application Risk Report: Uncovering AI Vulnerabilities in Software Development

AI accelerates code generation but creates new security risks: 46% of organizations that use AI in development apply it in ways that increase vulnerabilities. Better visibility and controls are essential.

Categorized in: AI News, IT and Development
Published on: May 10, 2025

The 2025 State of Application Risk Report: Understanding AI Risk in Software Development

Artificial intelligence has become a double-edged sword in application security. AI tools boost developer productivity with fast code generation, but they also introduce new vulnerabilities that can compromise software integrity.

The 2025 State of Application Risk report, based on data from the Legit Application Security Posture Management platform, highlights the AI-related risks present in today’s software development environments.

The AI Visibility Gap

A major source of AI risk comes from a visibility gap. Security teams often don’t know where AI is being used within their development pipelines. When they do find AI in use, it frequently resides in areas without proper security configurations.

A recent survey of 400 security pros and developers found that 98% agree security teams need better insight into how GenAI solutions are applied in development projects.

AI Toxic Combinations

Seventy-one percent of organizations now use AI models in source code development. However, 46% of these apply AI in risky ways that amplify vulnerabilities.

One common risky behavior is generating code with AI in repositories that lack code review or branch protection. This opens the door to improperly licensed or malicious code entering the codebase unchecked, creating legal as well as security issues.

The report finds that, on average, 17% of repositories have developers using AI tools without proper security controls such as branch protection or code reviews. This combination creates fertile ground for vulnerabilities or malicious code to reach production.
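As one illustration of how a team might surface this combination, here is a minimal Python sketch that uses GitHub's REST API to check whether the default branch of a repository has any protection rule. It assumes GitHub-hosted code, a token with sufficient permissions in the GITHUB_TOKEN environment variable, and a placeholder list of repositories already known to use AI code generation; it is a starting point, not a substitute for an ASPM platform.

```python
"""Hedged sketch: flag repos that use AI code generation but whose default
branch has no protection rule.

Assumptions (not from the report): code is hosted on GitHub, a token with
sufficient permissions is in GITHUB_TOKEN, and AI_ASSISTED_REPOS is a
placeholder list you would fill from your own inventory of AI tool usage."""
import os

import requests

GITHUB_API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Placeholder: repositories already known to use AI code generation.
AI_ASSISTED_REPOS = [("my-org", "payments-service"), ("my-org", "web-frontend")]


def default_branch(owner: str, repo: str) -> str:
    """Return the repository's default branch name."""
    resp = requests.get(f"{GITHUB_API}/repos/{owner}/{repo}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["default_branch"]


def is_protected(owner: str, repo: str, branch: str) -> bool:
    """True if a branch protection rule exists; GitHub answers 404 when there is none."""
    resp = requests.get(
        f"{GITHUB_API}/repos/{owner}/{repo}/branches/{branch}/protection",
        headers=HEADERS,
    )
    return resp.status_code == 200


if __name__ == "__main__":
    for owner, repo in AI_ASSISTED_REPOS:
        branch = default_branch(owner, repo)
        if not is_protected(owner, repo, branch):
            print(f"RISK: {owner}/{repo} uses AI codegen but '{branch}' has no branch protection")
```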

Another risk is using low-reputation large language models (LLMs). These models can contain hidden malicious code or may exfiltrate sensitive data. Low download counts, few endorsements, or limited community activity on repositories can hint at potential issues.

Especially in sensitive or critical environments, relying on trusted and well-maintained AI models is crucial. Poorly maintained open-source projects often lag in security fixes, making them targets for attacks.
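To make that vetting concrete, below is a minimal sketch of a reputation gate, assuming models are pulled from the Hugging Face Hub and queried with the huggingface_hub library. The download and like thresholds are purely illustrative, and the model IDs in the demo loop are placeholders rather than recommendations.

```python
"""Hedged sketch of a model-reputation gate for Hugging Face Hub models.

The download/like thresholds are purely illustrative, and the model IDs in
the demo loop are placeholders; neither comes from the report."""
from huggingface_hub import HfApi

MIN_DOWNLOADS = 10_000  # illustrative threshold, tune to your risk appetite
MIN_LIKES = 50          # illustrative threshold

api = HfApi()


def needs_review(repo_id: str) -> bool:
    """Flag models with very low adoption signals for manual vetting before use."""
    info = api.model_info(repo_id)
    downloads = info.downloads or 0
    likes = info.likes or 0
    return downloads < MIN_DOWNLOADS or likes < MIN_LIKES


if __name__ == "__main__":
    for model_id in ["bert-base-uncased", "some-user/unvetted-model"]:
        try:
            verdict = "NEEDS MANUAL REVIEW" if needs_review(model_id) else "ok"
        except Exception as exc:  # e.g. model not found or gated
            verdict = f"lookup failed ({exc.__class__.__name__})"
        print(f"{model_id}: {verdict}")
```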

Mitigating AI Risk

To reduce AI-related risks, the report recommends several practical steps:

  • Perform threat modeling focused specifically on AI-related threats to assess potential impacts.
  • Prioritize security when choosing AI models for development tasks.
  • Evaluate the reputation of AI models, their creators, and endorsers carefully.
  • Avoid low-reputation models or conduct thorough analysis to ensure they’re safe before use.
  • Implement tools and processes that provide full visibility into AI usage across your development environment. Know which applications and repositories use GenAI or third-party models from marketplaces like Hugging Face (a minimal scanning heuristic is sketched after this list).
  • Create clear policies governing AI use in development, including selecting models and enforcing security controls.
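As one very rough way to start building that visibility, the sketch below walks local clones of your repositories and flags Python files that reference common GenAI SDKs. The clones directory and the marker list are assumptions to adapt to your own stack; a real inventory would also cover dependency manifests, other languages, and CI configuration.

```python
"""Hedged sketch of a basic GenAI usage inventory: scan local clones of your
repositories for references to common GenAI SDKs.

The clones directory and the marker list are assumptions to adapt; a real
inventory would also cover manifests, other languages, and CI config."""
from pathlib import Path

# Strings whose presence suggests GenAI or third-party model usage (assumed list).
GENAI_MARKERS = (
    "import openai",
    "from openai",
    "import anthropic",
    "from transformers",
    "huggingface_hub",
    "langchain",
)


def scan_repo(repo_path: Path) -> list[str]:
    """Return relative paths of Python files in the repo that mention a GenAI marker."""
    hits = []
    for py_file in repo_path.rglob("*.py"):
        try:
            text = py_file.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        if any(marker in text for marker in GENAI_MARKERS):
            hits.append(str(py_file.relative_to(repo_path)))
    return hits


if __name__ == "__main__":
    clones_dir = Path("./repo-clones")  # placeholder: directory holding local clones
    for repo in sorted(p for p in clones_dir.iterdir() if p.is_dir()):
        flagged = scan_repo(repo)
        if flagged:
            print(f"{repo.name}: GenAI references in {len(flagged)} file(s)")
```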

Learn More

For a deeper look at software risk trends in 2025, consider downloading the full 2025 State of Application Risk report. Staying informed helps teams manage AI risks effectively while still benefiting from AI's productivity advantages in development.

