Engineers are "more important than ever," even as full automation looms
Two things can be true at once: AI is writing a lot of code, and great engineers matter more than ever. That's the message from Anthropic's team as Claude Code ramps up inside the company.
For product leaders, the takeaway is simple: staff for orchestration, not replacement. Treat AI as a force multiplier, design your systems around it, and keep humans accountable for direction, architecture, and quality.
What Anthropic insiders are saying
Mike Krieger, who leads Anthropic Labs, said at a recent Cisco AI Summit that teams are leaning hard on Claude for development, QA, and product work. "Claude is now writing Claude… Right now for most products at Anthropic it's effectively 100% just Claude writing."
Boris Cherny, the creator of Claude Code, pushed back on the idea that engineers are obsolete. "Someone has to prompt the Claudes, talk to customers, coordinate with other teams, decide what to build next… great engineers are more important than ever."
Hiring supports that stance. Anthropic still lists dozens of open roles across product engineering, design, and infrastructure, evidence that AI is changing the work, not erasing it. You can explore the product here: Anthropic Claude.
The tension: near-term leverage vs. fast automation
There's a caveat. Anthropic CEO Dario Amodei recently suggested the curve is steep: "We might be six to 12 months away from when the model is doing most, maybe all, of what software engineers do end-to-end."
Cherny framed it this way: he's describing what's working now; Amodei is pointing to what could be next. Product teams should plan for both realities: AI-heavy workflows today, and much higher automation sooner than expected.
What this means for product development
- Shift engineers from "type code" to "own outcomes." Prompting, system design, decomposition, and acceptance criteria become core skills.
- Make specs double as evals. Turn PRDs into testable prompts, datasets, and golden examples that every model and change must pass.
- Increase the surface area of iteration. Smaller tickets, tighter feedback loops, more experiments shipped safely.
- Keep humans in the loop for architecture, tradeoffs, customer context, compliance, and final accountability.
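The "specs as evals" idea above can be sketched as a tiny harness that turns PRD acceptance criteria into executable checks. This is a minimal illustration, not a real framework: `generate` is a stand-in for whatever model call your stack uses, and the golden case shown is invented for the example.

```python
# Minimal sketch: PRD acceptance criteria expressed as blocking evals.
# `generate` is a stand-in for your actual model call.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GoldenCase:
    prompt: str                   # derived from a PRD requirement
    check: Callable[[str], bool]  # the acceptance criterion, as code

def run_evals(generate: Callable[[str], str], cases: list[GoldenCase]) -> float:
    """Return the pass rate; CI can fail the build below a threshold."""
    passed = sum(1 for c in cases if c.check(generate(c.prompt)))
    return passed / len(cases)

# Illustrative case plus a fake "model" that returns a canned answer.
cases = [
    GoldenCase(
        "Summarize the refund policy in one sentence.",
        lambda out: "refund" in out.lower() and out.count(".") <= 1,
    ),
]
fake_model = lambda prompt: "Refunds are issued within 30 days."
assert run_evals(fake_model, cases) == 1.0
```

The point of the pattern: every requirement in the PRD becomes a `GoldenCase`, so "does the model still meet spec?" is a number a pipeline can gate on, not an opinion.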
Team and hiring adjustments
- Staff for orchestration: tech leads who can prompt well, review AI output, and align cross-functionally.
- Hire "AI-native" developers. Evaluate with pair-coding sessions using assistants and real repo tasks.
- Favor T-shaped builders: strong fundamentals, plus fluency with model prompting, tooling, and evaluation.
- Redefine ladders. Reward impact: shipped outcomes, prompt libraries, eval suites, and systems thinking, not lines of code.
Process you can implement now
- Guardrails at the repo level: secret scanning, license checks, dependency policies, and auto-generated tests.
- AI + human code review: require model suggestions to pass linters, security checks, and a final human sign-off.
- Eval harnesses: golden tasks per service; track pass rates by model/version before merge.
- Progressive delivery: canaries, blast-radius limits, auto-rollbacks, and runtime anomaly alerts.
- Metrics that matter: DORA baselines, defect escape rate, on-call load, change failure rate, and model-assisted throughput.
- Cost controls: token budgeting, caching, offline inference where sensible, and cost-per-outcome tracking.
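Tracking pass rates by model/version before merge, as suggested above, can be as simple as a gate function in CI. A hedged sketch with invented model names and an assumed regression tolerance, not a real CI API:

```python
# Sketch of a merge gate: block the merge when a candidate model/version
# regresses past a tolerance relative to the current baseline.
def gate(results: dict[str, float], baseline: str, candidate: str,
         tolerance: float = 0.02) -> bool:
    """Allow merge only if the candidate's eval pass rate is within
    `tolerance` of the baseline's pass rate."""
    return results[candidate] >= results[baseline] - tolerance

# Illustrative pass rates from an eval harness run.
pass_rates = {"model-v1": 0.94, "model-v2": 0.95, "model-v3": 0.89}
assert gate(pass_rates, baseline="model-v1", candidate="model-v2")      # merges
assert not gate(pass_rates, baseline="model-v1", candidate="model-v3")  # blocked
```

Hooked into the eval harness, this gives "AI + human code review" a hard floor: the model comparison runs automatically, and the human reviewer only signs off on changes that cleared it.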
Risks to manage deliberately
- Incorrect or "confident wrong" code (e.g., hallucinated APIs, unsafe patterns).
- Security and privacy exposure through prompts, logs, or training data leakage.
- License contamination and unclear IP provenance.
- Vendor dependency; keep a multi-model strategy and migration plan.
Near-term plan for product teams
- Pick two candidate services and run a 6-week pilot with Claude-assisted development. Measure velocity, quality, and incident trends.
- Create prompt playbooks per domain (backend, mobile, testing, docs). Store them with examples and failure modes.
- Build a minimal eval suite for each repo: golden prompts, unit tests, security checks. Make it blocking.
- Run weekly model reviews: cost, accuracy, drift, and developer satisfaction. Ship improvements every Friday.
- Upskill the team. Start with an AI Learning Path for Software Engineers and set expectations for reskilling across roles.
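The per-domain prompt playbooks in the plan above can start as plain structured records: each prompt stored alongside its examples and known failure modes. The schema and sample entry below are hypothetical, just one way to make playbooks reviewable and versionable in the repo:

```python
# A minimal, hypothetical schema for per-domain prompt playbooks,
# keeping examples and failure modes next to each prompt template.
from dataclasses import dataclass, field

@dataclass
class PlaybookEntry:
    domain: str                # e.g. "backend", "mobile", "testing", "docs"
    task: str
    prompt_template: str       # parameterized, filled in at call time
    good_examples: list[str] = field(default_factory=list)
    failure_modes: list[str] = field(default_factory=list)

entry = PlaybookEntry(
    domain="testing",
    task="Generate unit tests for a pure function",
    prompt_template="Write pytest tests for:\n{code}\nCover edge cases.",
    good_examples=["tests for a date parser covering leap years"],
    failure_modes=["asserts on implementation details", "hallucinated fixtures"],
)
assert "{code}" in entry.prompt_template  # templates stay parameterized
```

Stored as code or data files, entries get the same review, diffing, and ownership as the rest of the repo, which is what turns ad-hoc prompting into a shared asset.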
If automation accelerates faster than expected
- Keep humans focused on direction, constraints, and quality gates while the model handles bulk generation.
- Double down on evaluation, integration testing, and runtime safeguards.
- Shift PM and design time into tighter problem framing, user research, and outcome definition.
Bottom line
AI is taking on more of the typing. Your edge is deciding what gets built, how it fits together, and how fast you can iterate safely. For now, that makes great engineers, and the product leaders who enable them, more important than ever.