Anthropic details the iterative tool design process behind Claude Code

Anthropic published a technical breakdown of how it builds tools for Claude Code, its AI software development assistant. The post details three design iterations, a 20-tool limit, and why better models can break existing tools.

Categorized in: AI News, IT and Development
Published on: Apr 11, 2026

Anthropic Details How It Designs Tools for Claude Code AI Agents

Anthropic published a technical breakdown of how its engineering team builds and refines tools for Claude Code, the company's AI-powered software development assistant. The April 10 post offers rare insight into the iterative process behind effective AI agents and automation systems.

The design philosophy centers on what Anthropic engineer Thariq Shihipar calls "seeing like an agent" - understanding how an AI model perceives and interacts with the tools it receives.

Three Attempts to Get Questions Right

Building Claude's question-asking capability required three iterations. The first attempt added a question parameter to an existing tool, but the model became confused when user answers conflicted with generated plans.

A second try used modified markdown formatting. Claude would append extra sentences, drop options, or abandon the structure entirely.

The solution: a dedicated AskUserQuestion tool that triggers a modal interface and blocks the agent's loop until users respond. The structured approach worked because the tool's explicit schema left the model only one valid way to call it, rather than relying on it to maintain a markdown convention.
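To make the pattern concrete, here is a minimal sketch of a dedicated question tool. The schema shape follows the common tool-definition format (a name, description, and JSON `input_schema`), but the field names and handler are assumptions for illustration, not Anthropic's actual implementation:

```python
# Hypothetical sketch of a dedicated question tool, not Anthropic's real schema.
# A strict input_schema constrains the model to a fixed structure instead of
# free-form markdown, and the handler blocks the agent loop until the user answers.

ASK_USER_QUESTION = {
    "name": "AskUserQuestion",
    "description": "Ask the user a multiple-choice question and wait for the answer.",
    "input_schema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "options": {
                "type": "array",
                "items": {"type": "string"},
                "minItems": 2,
                "maxItems": 4,
            },
        },
        "required": ["question", "options"],
    },
}

def format_question(tool_input: dict) -> str:
    """Render the question and numbered options for a modal-style prompt."""
    lines = [tool_input["question"]]
    lines += [f"  {i}. {o}" for i, o in enumerate(tool_input["options"], 1)]
    return "\n".join(lines)

def handle_ask_user_question(tool_input: dict) -> str:
    """Show the prompt and block until the user picks an option."""
    print(format_question(tool_input))
    choice = int(input("Choose an option: "))  # agent loop blocks here
    return tool_input["options"][choice - 1]
```

Because the options arrive as a validated array rather than markdown bullets, the model cannot drop options or append stray sentences without the call failing schema validation.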

When Better Models Break Existing Tools

Anthropic's experience with task management revealed how model improvements can make existing tools counterproductive. Early versions of Claude Code used a TodoWrite tool with system reminders every five turns to keep the model focused.

As models improved, Claude started treating the todo list as immutable rather than adapting when circumstances changed. The team replaced TodoWrite with a more flexible Task tool that supports dependencies and cross-subagent communication.
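The difference between a static todo list and a dependency-aware task store can be sketched as follows. This is an illustrative data structure, not the real Task tool's schema, which Anthropic has not published:

```python
# Hypothetical sketch of a Task-style store with dependencies. Tasks become
# ready only when everything they depend on is finished, which lets the agent
# re-plan as circumstances change instead of treating a list as immutable.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: str
    description: str
    depends_on: list = field(default_factory=list)
    done: bool = False

class TaskStore:
    def __init__(self):
        self.tasks = {}

    def add(self, task: Task):
        self.tasks[task.id] = task

    def complete(self, task_id: str):
        self.tasks[task_id].done = True

    def ready(self):
        """Tasks that are not done and whose dependencies are all complete."""
        return [
            t for t in self.tasks.values()
            if not t.done and all(self.tasks[d].done for d in t.depends_on)
        ]
```

A store like this also gives subagents a shared place to read and update task state, rather than each agent holding its own copy of a flat list.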

From Pre-Indexed Search to Self-Directed Discovery

The most significant shift involved how Claude finds context. The initial release used retrieval-augmented generation (RAG), pre-indexing codebases and feeding relevant snippets to Claude. While fast, this approach was fragile.

Giving Claude a Grep tool changed the dynamic. Combined with Agent Skills that allow recursive file discovery, the model went from being unable to build its own context to performing nested searches across multiple file layers to find exactly what it needed.
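A grep-style tool is simple enough to sketch end to end. This minimal version uses Python's standard library; the real Claude Code tool is more capable (and the parameter names here are assumptions), but the shape of the output, file, line number, and matching line, is what lets the model decide where to search next:

```python
# Minimal sketch of a grep-style tool. Returning "path:lineno:line" strings
# gives the model enough structure to pick a file and drill deeper on its own.
import re
from pathlib import Path

def grep_tool(pattern: str, root: str = ".", glob: str = "**/*.py") -> list[str]:
    """Search files under root for a regex and return match locations."""
    regex = re.compile(pattern)
    matches = []
    for path in sorted(Path(root).glob(glob)):
        if not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if regex.search(line):
                matches.append(f"{path}:{lineno}:{line.strip()}")
    return matches
```

Unlike a pre-built index, this costs nothing to maintain as the codebase changes: every search reads the files as they are right now.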

The 20-Tool Ceiling

Claude Code currently operates with roughly 20 tools. Anthropic maintains a high bar for additions because each new tool represents another decision point the model must evaluate.

When users needed Claude to answer questions about Claude Code itself, the team avoided adding another tool. Instead, they built a specialized subagent that searches documentation in its own context and returns only the answer, keeping the main agent's context clean.
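The context-isolation idea can be sketched as follows. The function names and the keyword-matching "search" are placeholders standing in for a real model call over a documentation corpus; the point is only that the documents stay in the subagent's scope and never reach the caller:

```python
# Hedged sketch of the subagent pattern: the documentation search happens in
# an isolated scope, and only the final answer string returns to the main
# agent. Names and the keyword search are illustrative, not real APIs.

def docs_subagent(question: str, docs: dict[str, str]) -> str:
    """Search docs in a private context; return only the answer text."""
    # Private context: full documents read while searching. In a real system
    # this is the subagent's own context window, filled by a model call.
    private_context = [
        text for text in docs.values()
        if any(word.lower() in text.lower() for word in question.split())
    ]
    answer = private_context[0].splitlines()[0] if private_context else "Not found."
    return answer  # the main agent never sees private_context

def main_agent_ask(question: str, docs: dict[str, str]) -> dict:
    """The main agent's context grows only by the answer, not the documents."""
    return {"role": "tool_result", "content": docs_subagent(question, docs)}
```

The payoff is the same as in Anthropic's description: the main agent pays a few tokens for the answer instead of thousands for the documentation it was derived from.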

This "progressive disclosure" approach - letting agents incrementally discover relevant information - has become central to Anthropic's design philosophy.
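One common way to implement progressive disclosure, and a plausible reading of how skill files work, is a two-pass load: the agent first sees only a one-line summary per skill, then pulls a full body into context on demand. The file layout and function names below are assumptions for illustration:

```python
# Illustrative two-pass sketch of progressive disclosure (layout is assumed):
# pass 1 exposes only cheap summaries; pass 2 loads a full skill body only
# when the agent decides it needs one, so unused detail never enters context.
from pathlib import Path

def list_skills(skills_dir: str) -> list[str]:
    """Cheap first pass: one summary line per skill file."""
    summaries = []
    for path in sorted(Path(skills_dir).glob("*.md")):
        first_line = path.read_text().splitlines()[0]
        summaries.append(f"{path.stem}: {first_line}")
    return summaries

def load_skill(skills_dir: str, name: str) -> str:
    """Second pass, only when needed: the full skill body."""
    return (Path(skills_dir) / f"{name}.md").read_text()
```

With dozens of skills on disk, the agent's context holds dozens of short lines plus whichever one or two full bodies the current task actually requires.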

The Takeaway for Developers

Tool design requires constant iteration as model capabilities evolve. What helps an AI today might constrain it tomorrow.

Developers building their own agent systems should expect to redesign tools as their models improve. The structure that worked last month may need replacement next month.

