AWS Open-Sources Bedrock AgentCore MCP Server to Turn IDE Prompts into Deployable Agents
AWS open-sources an MCP server for Bedrock AgentCore, linking IDE chat to deployable agents. It streamlines refactoring, environment setup, Gateway wiring, deployment, and testing for faster iteration loops.

AWS open-sources an MCP server for Bedrock AgentCore: ship agents from your IDE chat
AWS released an open-source Model Context Protocol (MCP) server for Amazon Bedrock AgentCore. It gives IDE assistants a direct path from natural-language prompts to deployable agents on AgentCore Runtime.
The server compresses refactor, environment setup, Gateway wiring, deployment, and testing into guided chat steps. Result: fewer CLI hops, less glue code, faster feedback.
What it is
The AgentCore MCP server exposes task-specific tools to MCP clients like Kiro, Claude Code, Cursor, Amazon Q Developer CLI, and the VS Code Q plugin. From the chat surface, your assistant can:
- Refactor an existing agent to the AgentCore Runtime model with minimal edits
- Provision and configure AWS (credentials, roles/permissions, ECR, config files)
- Wire up AgentCore Gateway for tool calls
- Deploy, invoke, and test the agent end-to-end
What it does in your codebase
- Converts entry points to AgentCore handlers
- Adds bedrock_agentcore imports and generates a requirements.txt
- Rewrites direct agent calls into payload-based handlers compatible with Runtime
- Invokes the AgentCore CLI to deploy and exercise the agent, including Gateway tool paths
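The core transform above is the move from a direct agent call to a payload-based handler. A minimal sketch in plain Python of what that shape looks like (the names are hypothetical; in real code the handler would be registered with the bedrock_agentcore runtime SDK rather than called directly):

```python
# Sketch of the refactor the server applies, using plain-Python stand-ins.
# Names (run_agent, handler) are illustrative, not the SDK's API.

# Before: a direct agent call taking a string prompt.
def run_agent(prompt: str) -> str:
    return f"answer to: {prompt}"

# After: a payload-based handler compatible with a runtime that passes
# JSON payloads in and expects JSON-serializable results out.
def handler(payload: dict) -> dict:
    prompt = payload.get("prompt", "")
    return {"result": run_agent(prompt)}
```

The key property is that the handler's contract is a JSON payload in, JSON out, which is what lets AgentCore Runtime invoke the agent uniformly.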
Install and client support
Setup is a single-command flow from the GitHub repo: run the server with uv's lightweight uvx launcher and register it via a standard mcp.json entry. Most MCP-capable clients will pick it up automatically.
- Expected mcp.json locations:
  - Kiro: .kiro/settings/mcp.json
  - Cursor: .cursor/mcp.json
  - Amazon Q CLI: ~/.aws/amazonq/mcp.json
  - Claude Code: ~/.claude/mcp.json
- Repository: awslabs "mcp" mono-repo (Apache-2.0)
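A typical mcp.json entry follows the standard MCP client shape; as a sketch (the exact server package name is an assumption — check the awslabs mcp repo for the published name):

```json
{
  "mcpServers": {
    "awslabs.amazon-bedrock-agentcore-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.amazon-bedrock-agentcore-mcp-server@latest"]
    }
  }
}
```

Once this entry is in the client's expected location, the assistant discovers the server's tools on its next session.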
Architecture: layered context that actually helps
AWS recommends a layered context model so your IDE assistant plans the whole transform→deploy→test loop without manual context switching:
- Start with the agentic client (your IDE assistant)
- Add the AWS Documentation MCP Server
- Layer in framework docs (Strands Agents, LangGraph)
- Include AgentCore and agent-framework SDK docs
- Guide repetitive moves via per-IDE "steering files"
This reduces retrieval misses and keeps the assistant grounded across code, infra, and deployment steps.
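A steering file is just a short instructions document the IDE assistant loads alongside each request. A minimal sketch (file name and wording are hypothetical; each IDE has its own conventions and location, e.g. Kiro reads from .kiro/steering/):

```markdown
<!-- agentcore-refactors.md — hypothetical steering file -->
When refactoring agents for AgentCore Runtime:
- Convert the entry point to a payload-based handler.
- Add bedrock_agentcore to requirements.txt.
- After edits, deploy with the AgentCore CLI and run one end-to-end invoke.
```

Keeping these rules in a steering file means the assistant repeats the same transform consistently across repos instead of re-deriving it each time.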
Typical developer workflow
- Bootstrap: Use local tools or MCP servers. Provision a Lambda target for AgentCore Gateway or deploy directly to AgentCore Runtime.
- Author/Refactor: Start from Strands Agents or LangGraph. Convert handlers, imports, and dependencies for Runtime compatibility.
- Deploy: Use the AgentCore CLI from the assistant's toolcalls.
- Test & iterate: Invoke in natural language. If tools are needed, integrate Gateway (MCP client inside the agent), redeploy (v2), retest.
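The deploy-and-test loop the assistant drives maps to a handful of CLI calls. As a command sketch (command names follow the AgentCore starter toolkit as commonly documented; the flags shown are assumptions, so check `agentcore --help` before relying on them):

```shell
# Configure the project for AgentCore Runtime (entry point, region, role).
agentcore configure --entrypoint my_agent.py

# Build, push the image, and deploy to AgentCore Runtime.
agentcore launch

# Invoke the deployed agent end-to-end with a JSON payload.
agentcore invoke '{"prompt": "What tools can you call?"}'
```

In the workflow above, the assistant issues these calls for you via tool calls, so the loop stays inside chat.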
Why this matters
Most agent frameworks pull you into cloud plumbing first: credentials, roles, registries, CLIs. This server offloads that to the IDE assistant and narrows the prompt-to-production gap.
Because it's "just" another MCP server, it composes cleanly with existing doc servers and frameworks. Teams standardizing on Bedrock AgentCore get a low-friction entry point and a repeatable workflow instead of ad-hoc scripts.
Quick checklist to try it
- Install via uvx and register the server in mcp.json for your IDE client
- Point the assistant at your current agent repo (Strands Agents and LangGraph are both supported starting points)
- Let the assistant apply handler conversions and add bedrock_agentcore dependencies
- Provision AWS roles, credentials, and ECR via the server's tools
- Deploy with the AgentCore CLI, wire Gateway if needed, run an end-to-end test