Updated 06:00 EST / December 09, 2025 - Sonatype launches Guide to secure AI-assisted development
Sonatype has introduced Sonatype Guide, a developer tool built to make AI-assisted software development faster, safer, and more efficient. It serves as an intelligent backbone that steers coding assistants toward secure, high-quality open-source components and keeps dependencies healthy over time.
Why this matters
AI coding assistants are trained on public data that can be months or years out of date. The result: they often recommend vulnerable, low-quality, or even imaginary packages. Sonatype's upcoming study reports that leading LLMs hallucinate packages up to 27% of the time, wasting tokens, creating rework, delaying delivery, and adding avoidable security risk.
Early results from enterprise testing
Pre-launch users reported more than a 300% improvement in security outcomes while reducing total security remediation effort. They also cut dependency-upgrade costs by more than 5x versus the leading competitive approach, measured in both spend and developer hours.
How Guide fits into your workflow
Guide works alongside popular AI coding assistants, including GitHub Copilot, Google Antigravity, Claude Code, Windsurf, IntelliJ with Junie, Kiro from AWS, and Cursor, so teams can keep existing habits while raising the bar on dependency quality and safety.
Key capabilities
- MCP server for coding assistants: Intercepts package recommendations in real time and routes developers to secure, reliable versions before code hits the repo.
- Enhanced OSS search: Instant, trustworthy package decisions without hunting across docs and registries.
- Enterprise-grade API: Complete, unrestricted, backward-compatible access to reliable component data.
- Built on Sonatype Intelligence: Real-time signals on open-source quality, security, and project health that flag vulnerabilities, deprecations, and malicious packages early.
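To make the interception idea concrete, here is a minimal sketch of the pattern an MCP-style server could apply to an assistant's dependency suggestion: look the suggested version up against advisory data and swap in a safer release before it lands in code. The advisory table, the `LATEST_SAFE` map, and the `review_suggestion` helper are illustrative assumptions, not Sonatype's actual API or data.

```python
# Hypothetical sketch of MCP-style package interception.
# Advisory data and function names are illustrative, not Sonatype's API.

# (package, version) -> known issue identifier
ADVISORIES = {
    ("requests", "2.19.0"): "CVE-2018-18074",
}

# package -> latest version believed safe (stand-in data)
LATEST_SAFE = {
    "requests": "2.32.3",
}

def review_suggestion(package: str, version: str) -> dict:
    """Check an assistant-suggested dependency before it reaches the repo."""
    issue = ADVISORIES.get((package, version))
    if issue is None:
        # No known advisory for this exact version: let it through.
        return {"package": package, "version": version, "action": "allow"}
    # Known issue: route the developer to a safer release instead.
    return {
        "package": package,
        "version": LATEST_SAFE.get(package, version),
        "action": "replace",
        "reason": issue,
    }
```

In a real deployment the lookup would hit a live intelligence feed rather than an in-memory table, but the control flow (intercept, evaluate, substitute) is the same.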
What Sonatype is saying
CEO Bhagwat Swaroop said organizations want AI-driven productivity without giving up security or long-term maintainability. He described Guide as bringing discipline and intelligence to AI-assisted development by steering assistants toward secure, reliable components and automating the dependency work that slows teams down, calling it a meaningful step forward for customers and the industry.
What this means for engineering leaders
- Cut noise from AI-generated dependency suggestions and standardize on safe defaults.
- Lower supply chain risk by blocking malicious or deprecated packages before they land.
- Reduce token waste from hallucinated packages and shrink remediation backlog.
- Improve long-term maintainability with automated dependency hygiene.
- Back decisions with auditable, enterprise-grade data and policy guardrails.
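The hallucination point above can be sketched in a few lines: before installing anything an assistant suggests, compare the names against a snapshot of packages known to exist in the registry. The tiny `KNOWN_PACKAGES` set and the `flag_hallucinations` helper are stand-ins for illustration; a real check would query the registry or a maintained index.

```python
# Illustrative hallucination check: flag assistant-suggested package names
# that do not appear in a known-registry snapshot. The snapshot below is a
# tiny stand-in, not real registry data.

KNOWN_PACKAGES = {"requests", "numpy", "flask", "pandas"}

def flag_hallucinations(suggested: list[str]) -> list[str]:
    """Return the suggested names absent from the known-package snapshot."""
    return [name for name in suggested if name.lower() not in KNOWN_PACKAGES]
```

Running such a check in CI, or at suggestion time via a tool like Guide, stops nonexistent dependencies before anyone spends tokens or review cycles on them.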
Practical next steps
- Start with a pilot repo and wire Guide into your existing coding assistant.
- Define policies for allowed package sources, versions, and risk thresholds.
- Track metrics: vuln count per PR, time-to-fix, token spend, and upgrade effort.
- Roll out to more teams once you prove reduced rework and security wins.
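For the policy-definition step above, a minimal sketch of what "allowed sources, versions, and risk thresholds" might look like as code is shown below. The field names, thresholds, and `check` helper are assumptions for illustration only, not a Guide configuration schema.

```python
# Hypothetical dependency policy for a pilot rollout. Field names and
# default thresholds are illustrative assumptions, not a real schema.

from dataclasses import dataclass

@dataclass
class DependencyPolicy:
    allowed_registries: tuple[str, ...] = ("https://pypi.org/simple",)
    max_known_vulns: int = 0          # block any package with open advisories
    min_release_age_days: int = 14    # avoid brand-new, unvetted releases

def check(policy: DependencyPolicy, registry: str,
          known_vulns: int, release_age_days: int) -> bool:
    """Return True when a candidate dependency satisfies every policy rule."""
    return (registry in policy.allowed_registries
            and known_vulns <= policy.max_known_vulns
            and release_age_days >= policy.min_release_age_days)
```

Starting with an explicit, versioned policy like this makes the later metrics (vulnerabilities per PR, time-to-fix) auditable against a fixed baseline.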
If you're formalizing AI coding across your org, consider upskilling your team on secure AI-assisted workflows. A good place to start is this focused pathway: AI Certification for Coding.
For broader context on supply chain risk scoring, see OpenSSF Scorecards.