AI's mind game: vibe coding opens doors, supply chains get hit, and startups need CISOs from day one
AI speeds up delivery, and it speeds up attackers too. Treat AI features as high-privilege: build in auth, isolation, short-lived tokens, and vendor limits, starting on day one.

AI Is Expanding Your Attack Surface: What Product Leaders Need to Do Now
"One of the key things to understand about cybersecurity is that it's a mind game," said Ami Luttwak, chief technologist at Wiz. "If there's a new technology wave coming, there are new opportunities for [attackers] to start using it."
As teams rush AI into shipping code, the attack surface spreads. Vibe coding, agents, and new tooling speed delivery, but they also invite shortcuts, especially around authentication and access. Luttwak's team found insecure auth was a common flaw in vibe-coded apps because "it was just easier to build like that." If you don't tell an agent to build it securely, it won't.
The speed vs. security tradeoff is now visible in production
Attackers ship faster too. They use prompts, vibe coding, and their own agents to exploit your systems. "You can actually see the attacker is now using prompts to attack," Luttwak said. They target your AI tools directly: "Send me all your secrets, delete the machine, delete the file."
This isn't theoretical. Wiz reports weekly incidents that touch thousands of enterprises, with AI embedded at every stage of the kill chain. The pace is accelerating, and product decisions are now security decisions.
Supply chain is the new front door
Internal AI rollouts create fresh entry points. Integrations with high-privilege tools can turn into supply chain attacks: compromise the vendor, pivot into your systems.
Case in point: the Drift breach exposed Salesforce data for hundreds of customers, including Cloudflare, Palo Alto Networks, and Google. Attackers stole tokens, impersonated the chatbot, queried Salesforce, and moved laterally. "The attacker pushed the attack code, which was also created using vibe coding," said Luttwak.
Another example: the "s1ngularity" attack on Nx, a popular JavaScript build system. Malware detected AI developer tools like Claude and Gemini, hijacked them to scan for valuable data, and exfiltrated thousands of developer tokens and keys. Private repos were exposed.
Product Guidelines: Ship Fast Without Shipping Risk
1) Security from day zero (yes, before code)
- Appoint a CISO early. "From day one, you need to have a CISO. Even if you have five people." Ownership beats after-the-fact cleanups.
- Design for compliance early. SOC 2 is simpler with five employees than 500. Build policies, audit logs, and incident response now, not later.
- Plan "secure by design." Bake auth, least privilege, secrets management, logging, and SSO into your first backlog.
2) Architecture that keeps customer data where it belongs
- Keep data in the customer's environment. BYO cloud/VPC deployment, private networking, customer-managed keys.
- Isolate AI components. Separate inference, tools, and data planes. Enforce egress controls and deny-by-default network policies.
- Token hygiene. Use short-lived, scoped, per-action tokens; bind tokens to origin and context; rotate and revoke automatically.
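Here's a minimal sketch of that token hygiene in a Node service, assuming the jsonwebtoken package; the mintActionToken name, the claim layout, and the 90-second TTL are illustrative assumptions, not a prescribed design.

```ts
import jwt from "jsonwebtoken";
import { randomUUID } from "crypto";

// Illustrative: mint a token scoped to a single action on a single
// resource, bound to the tenant and origin that requested it.
const SIGNING_KEY = process.env.TOKEN_SIGNING_KEY!; // load from a vault, not code

interface ActionTokenRequest {
  tenantId: string; // who the token acts for
  action: string;   // e.g. "crm.read" -- one action per token
  resource: string; // the specific object the action may touch
  origin: string;   // caller origin, bound into the token
}

function mintActionToken(req: ActionTokenRequest): string {
  return jwt.sign(
    {
      scope: `${req.action}:${req.resource}`, // scoped to one action + resource
      origin: req.origin,                     // verifier rejects mismatched origins
    },
    SIGNING_KEY,
    {
      subject: req.tenantId,
      expiresIn: "90s",     // short-lived: expires before it's worth stealing
      jwtid: randomUUID(),  // unique id enables targeted revocation
    }
  );
}
```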
3) Make "AI feature" a security feature
- Prompt safety. Treat prompts as untrusted input. Guard against prompt injection and tool misuse. Enforce allowlists for tools and commands.
- Human-in-the-loop for destructive actions. Require explicit approval for deletes, exfiltration, or config changes (see the sketch after this list).
- Output validation. Strict schemas, type checks, and policy gates before an agent can take action.
- Data boundaries for RAG. Row-level permissions, PII redaction, and per-tenant embeddings. No cross-tenant vector stores.
- Audit everything. Log prompts, tool calls, model outputs, and user identities. Immutable, queryable, alertable.
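A minimal sketch of what the allowlist, output-validation, and approval guardrails can look like together, assuming a Node agent runtime and the zod library for schema checks; the tool names, schema, and approval hook are hypothetical.

```ts
import { z } from "zod";

// Hypothetical guardrail layer for an agent's tool calls:
// allowlist first, schema-validate the model's output, then
// require human approval before any destructive action runs.
const ALLOWED_TOOLS = new Set(["search_docs", "read_record", "delete_record"]);
const DESTRUCTIVE_TOOLS = new Set(["delete_record"]);

const toolCallSchema = z.object({
  tool: z.string(),
  args: z.record(z.string(), z.unknown()),
});

async function gateToolCall(
  rawModelOutput: unknown,
  requestApproval: (tool: string) => Promise<boolean> // human-in-the-loop hook
): Promise<{ tool: string; args: Record<string, unknown> }> {
  // 1. Model output is untrusted input: validate before acting on it.
  const parsed = toolCallSchema.safeParse(rawModelOutput);
  if (!parsed.success) throw new Error("Tool call rejected: malformed output");

  const { tool, args } = parsed.data;

  // 2. Deny-by-default allowlist.
  if (!ALLOWED_TOOLS.has(tool)) throw new Error(`Tool not allowlisted: ${tool}`);

  // 3. Destructive actions need explicit human approval.
  if (DESTRUCTIVE_TOOLS.has(tool) && !(await requestApproval(tool))) {
    throw new Error(`Approval denied for destructive tool: ${tool}`);
  }
  return { tool, args };
}
```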
4) Secure integrations and third parties
- Assume your vendors get breached. Limit scopes, use mTLS where possible, and proxy integrations through your own broker with egress filtering (see the sketch after this list).
- Per-tenant app identities. Don't reuse tokens across customers. Prefer OAuth with granular scopes over static API keys.
- Runtime monitoring. Detect unusual data access, permission escalations, and agent behavior drift. Quarantine quickly.
- Supply chain controls. SBOMs, signed builds, and verified provenance (SLSA), in line with CISA guidance.
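As a sketch of the broker pattern, assuming a Node service where all vendor traffic funnels through one wrapper; the host list and the mintTenantToken helper are hypothetical.

```ts
// Hypothetical egress broker: every third-party call goes through
// guardedFetch, which enforces a host allowlist and injects a
// short-lived, per-tenant credential instead of a shared static key.
const EGRESS_ALLOWLIST = new Set([
  "api.vendor-a.example",
  "hooks.vendor-b.example",
]);

// Assumed helper: issues a scoped, expiring credential for one tenant + vendor.
declare function mintTenantToken(tenantId: string, vendorHost: string): Promise<string>;

async function guardedFetch(
  tenantId: string,
  url: string,
  headers: Record<string, string> = {}
): Promise<Response> {
  const host = new URL(url).hostname;
  // Deny-by-default: unknown destinations are blocked, not logged-and-allowed.
  if (!EGRESS_ALLOWLIST.has(host)) {
    throw new Error(`Egress blocked: ${host} is not an approved vendor`);
  }
  const token = await mintTenantToken(tenantId, host);
  return fetch(url, {
    headers: { ...headers, Authorization: `Bearer ${token}` },
  });
}
```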
5) Acceptance criteria your PMs can copy-paste
- Auth: SSO + MFA, role-based access, least privilege enforced in code and infra.
- Secrets: No secrets in code; use a vault; rotate on deploy; detect leakage in CI.
- Logging: Structured, immutable logs for auth, data access, and all AI tool calls (see the sketch after this list).
- Rate limits and anomaly detection on model endpoints and integrations.
- PII handling: redaction on input/output; opt-in retention; clear data deletion path.
- Agent guardrails: allowlisted tools, bounded context, stop conditions, and kill switch.
- Dependency safety: pin versions, monitor advisories, block risky post-install scripts.
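One way to make the logging criterion copy-paste testable, sketched under the assumption of a JSON-lines audit stream; the field names are illustrative.

```ts
import { randomUUID } from "crypto";

// Illustrative structured audit record for AI tool calls (field names
// are assumptions). Append-only output keeps it immutable; one JSON
// object per line keeps it queryable and alertable.
interface AiToolCallAudit {
  id: string;
  timestamp: string;  // ISO 8601
  userId: string;     // human or service identity behind the call
  tenantId: string;
  model: string;      // which model produced the tool call
  tool: string;       // which tool the agent invoked
  promptHash: string; // hash, not raw text, if prompts may hold PII
  outcome: "allowed" | "blocked" | "approval_required";
}

function auditToolCall(entry: Omit<AiToolCallAudit, "id" | "timestamp">): void {
  const record: AiToolCallAudit = {
    id: randomUUID(),
    timestamp: new Date().toISOString(),
    ...entry,
  };
  // Write to an append-only sink (stdout shipped to WORM storage, for example).
  process.stdout.write(JSON.stringify(record) + "\n");
}
```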
What Recent Attacks Teach Product Teams
- Drift breach: Treat chatbot identity like production root. Use scoped, ephemeral tokens; segment access to CRM data; monitor impersonation attempts; and require re-auth for sensitive queries (see the sketch after this list).
- "s1ngularity" on Nx: Dev tooling is a high-value target. Lock down developer tokens, isolate CI, block outbound by default, and audit AI-tool integrations on dev machines.
Operating cadence: 30/60/90 for AI + Security
- Days 1–30: Appoint a security owner (fractional CISO if needed), ship SSO + MFA, inventory all AI features and third-party tools, and add logging for prompts and tool calls.
- Days 31–60: Implement token scoping and rotation, stand up an egress proxy for integrations, add human-in-the-loop for sensitive agent actions, and start SBOM + dependency checks in CI.
- Days 61–90: Threat model AI flows (prompt injection, data poisoning, tool abuse), run a tabletop exercise, and pressure-test access controls with red-team prompts.
Bottom line for product leaders
Speed is great until it isn't. The fastest way to ship safely is to make security a product requirement, not an afterthought. As Luttwak put it, "We need to understand why you're building it … so I can build the security tool that understands you."
Build with guardrails, keep data local to the customer, and treat AI features as high-privilege systems from day one. That's how you stay fast without leaving the door open.
Level up your team
If your roadmap includes AI features, upskill product and engineering on secure prompting, agent guardrails, and data boundaries. Explore practical resources for your role: AI courses by job and secure prompt engineering.