Developer laws in the AI era: 8 rules for building for humans and agents
AI agents are now first-class users of your platform. Developers are still here, but they share the keyboard with autonomous systems that read your docs, call your APIs, and take action across your stack.
Below are eight work-in-progress laws for product leaders building platforms for both humans and agents. They focus on pricing, product design, and defensibility - with direct input from teams across the AI tooling ecosystem.
Law #1: Agent Experience (AX) matters as much as Developer Experience (DX)
DX sets the floor; AX sets the ceiling. Agents can't infer intent from vibes - they need structured, predictable interfaces, complete schemas, consistent errors, and stateful workflows.
OpenAPI specs, clean SDKs, and well-typed events lift AX. Multi-step flows need session persistence and real-time feedback via streams. Netlify-style deployment agents must maintain state across CI/CD and surface immediate build signals - most legacy tools weren't built for that.
Protocols like the Model Context Protocol (MCP) change how tools connect with agents inside IDEs. Teams are running MCP servers so agents in Cursor or Claude Code can fetch live data and take action. Dashboards are becoming APIs; companies like Recall expose dashboard actions so agents can help resolve issues without manual clicks.
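As a concrete sketch, here's a minimal MCP server in TypeScript that exposes one dashboard operation as an agent-callable tool. It uses the official @modelcontextprotocol/sdk and zod for input validation; the server name, tool name, and API endpoint are hypothetical examples, not any vendor's real interface.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "dashboard-actions", version: "1.0.0" });

// Expose one dashboard operation as a tool an agent can invoke.
// "restart_deployment" and the endpoint below are hypothetical.
server.tool(
  "restart_deployment",
  { deploymentId: z.string().describe("ID of the deployment to restart") },
  async ({ deploymentId }) => {
    const res = await fetch(
      `https://api.example.com/v1/deployments/${deploymentId}/restart`,
      { method: "POST" }
    );
    return {
      content: [
        { type: "text", text: `Restart ${res.ok ? "succeeded" : "failed"} for ${deploymentId}` },
      ],
    };
  }
);

// Agents in Cursor or Claude Code connect over stdio.
await server.connect(new StdioServerTransport());
```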
"Instead of viewing DX as an antagonist to AX, we discovered that all the changes we made to DX are actually enhancing AX... Turns out, that made a huge difference in getting agents to use Resend, too." - Zeno Rocha, CEO of Resend
- Publish complete OpenAPI schemas with typed errors and examples.
- Support session persistence and WebSocket event streams for long-running tasks.
- Expose dashboard operations via API; consider MCP tools for agent access.
Law #2: Documentation must serve models as well as humans
Human-friendly docs aren't always model-friendly. Agents struggle with rich HTML, navigation chrome, and outdated guidance. They need concise, current, single-source references.
Adopt docs-as-code. Keep an authoritative markdown source, generate the site, and ship machine-readable artifacts (OpenAPI, JSON Schema, and an llms.txt index). Pair this with audit logs for both human and agent actions, and apply Generative Engine Optimization so models retrieve the right answer fast.
"Developers want a polished docs site, while agents need clean markdown to parse... documentation is first written in markdown and then published as developer-friendly websites and machine-readable files like llms.txt." - Co-founders of Fern
- Create a markdown-first pipeline and publish OpenAPI/schemas alongside the site.
- Version everything. Provide changefeeds and deprecation schedules agents can read.
- Continuously test docs with agents; log retrieval failures and fix them.
Law #3: Pricing strategies remain focused on reducing onboarding friction
Inference costs make marginal users expensive. Pricing has to reflect cost-to-serve and value delivered without blocking adoption.
We see three patterns: usage-based with organic expansion; seat-based with usage overage for predictability; outcomes-based tied to completed workflows. Upsell triggers differ for traditional devs vs vibe coders, so place paywalls where they feel value, not where you want to meter.
"Value should map to outcomes⦠the paywall belongs where the system is delivering measurable value." - Spiros Xanthos, CEO of Resolve
- Keep a strong free tier and fast-start templates; remove setup friction.
- Meter on workflow outcomes, not just tokens or requests.
- Instrument cost per action and LTV by user cohort (dev, ops, vibe coder).
Law #4: AI developer tooling spend is breaking out of traditional budgets
Enterprises are carving out AI budgets. Many are trading headcount for agents, seeking skill amplification and faster delivery, not only savings.
Buying is multi-stakeholder: CIOs, platform leaders, product, and individual developers all weigh in because agents need guardrails. Success metrics skew toward time-to-first-value, prototype speed, and measurable outcome gains. Tools like Cursor track suggestion rates and acceptance to quantify assist impact.
- Offer enterprise controls: policy, audit, data boundaries, and human-in-the-loop gates.
- Ship ROI calculators and outcome dashboards executives can trust.
- Support bottom-up adoption with a clear path to centralized governance.
Law #5: The definition of developer continues to widen dramatically
Vibe coders and AI-assisted builders are shipping apps without caring about the code. This group gets stuck moving from prototype to production and needs rails.
Non-technical teammates can now create demos, sample apps, and technical content with the right tools. Domain knowledge and systems thinking matter more; coding becomes orchestration.
"Today there are 17 million JavaScript developers... expect 100 million in the next 10 years." - Mathias Biilmann Christensen, CEO of Netlify
- Provide scaffold-to-prod flows: templates, one-click CI/CD, safe staging, and rollbacks.
- Design onboarding by skill level with explainers, guardrails, and guided debugging.
- Expose higher-level primitives (workflows, actions, playbooks) beyond raw APIs.
Law #6: Stronger network effects incentivize early ecosystem positioning
Agent-to-agent network effects are real. Agents compound value when they compose and communicate across tools via shared protocols like MCP.
Data network effects intensify: context-rich systems solve more tasks. At the same time, lock-in weakens because agents can switch integrations faster.
"It is now easier than ever to switch between different APIs... you have AI agents help you." - David Gu, CEO of Recall
"AI weakens non-objective based network effects and strengthens objective ones... if Stripe is more reliable, all AI agents will pick that." - Nikhil Gupta, CTO of Vapi. "An agent-first GTM is about proof, not hype... deliver outcomes that matter." - Spiros Xanthos, CEO of Resolve
- Publish agent-facing tools early (MCP tools, action catalogs) and reference implementations.
- Compete on measurable quality: latency, reliability, accuracy, and uptime SLAs.
- Invest in ethical data advantages and feedback loops that improve outcomes over time.
Law #7: Platform engineers are evolving into autonomous flow architects
Platform teams are moving from infra operators to designers of autonomous engineering flows. They define oversight, guardrails, and the UX for technical teams.
As agents write more code, engineers become system owners. The bottleneck shifts from writing to verifying. Testing, monitoring, and visual validation move to the front of the line.
"By leveraging managed platforms like Render, platform engineers can focus on higher-value automation." - Anurag Goel, CEO of Render
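Here's a minimal TypeScript sketch of that pattern: a catalog action carries a policy flag, execution is gated on human approval, and every run is appended to an audit trail. All names are hypothetical.

```typescript
// Hypothetical action catalog entry: a name, a policy flag, and a runner.
type AgentAction = {
  name: string;
  requiresApproval: boolean;
  run: () => Promise<string>;
};

type AuditEntry = { action: string; actor: string; at: string; result: string };
const auditLog: AuditEntry[] = [];

async function execute(action: AgentAction, actor: string, approved = false) {
  // Policy check: high-risk actions need an explicit human sign-off.
  if (action.requiresApproval && !approved) {
    throw new Error(`"${action.name}" requires human approval before running`);
  }
  const result = await action.run();
  // Audit trail: record who (or which agent) did what, when, and the outcome.
  auditLog.push({ action: action.name, actor, at: new Date().toISOString(), result });
  return result;
}

// Example: an agent proposes a rollback; a human approves before it runs.
const rollback: AgentAction = {
  name: "rollback_production",
  requiresApproval: true,
  run: async () => "rolled back to v42",
};
await execute(rollback, "agent:deploy-bot", true);
```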
- Standardize agent action catalogs with policy checks, approvals, and audit trails.
- Automate verification: contract tests, sandbox runs, canaries, and real-time drift detection.
- Centralize observability for human and agent actions; document behavior, not just code.
Law #8: Defensibility is about continuous evolution and platform control
Platforms win by owning key entry points, compounding proprietary context, and evolving faster than the market. Coordinate multiple models, data sources, and workflows to take action safely.
The strongest platforms run real-time feedback loops from agents and customers. They plan Act 2 and Act 3 early, expand where they control behavior, and iterate quickly.
"Build something that will continuously evolve... manage it, control it, have opinions, and iterate from core building blocks." - Zohar Einy, CEO of Port
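As a sketch of multi-model orchestration, the TypeScript below routes each task to the cheapest capable model within a cost budget. The model catalog, prices, and task kinds are invented for the example.

```typescript
// Hypothetical model catalog: ids, prices, and strengths are invented.
type TaskKind = "extract" | "reason" | "codegen";
type Task = { kind: TaskKind; maxCostUsd: number };

const models = [
  { id: "small-fast", costPer1kTokens: 0.0002, strengths: ["extract"] as TaskKind[] },
  { id: "large-accurate", costPer1kTokens: 0.01, strengths: ["reason", "codegen"] as TaskKind[] },
];

const EXPECTED_TOKENS_K = 10; // assume ~10k tokens per task for budgeting

function route(task: Task): string {
  const capable = models.filter((m) => m.strengths.includes(task.kind));
  if (capable.length === 0) throw new Error(`No model handles ${task.kind}`);
  // Cheapest capable model that fits the budget; otherwise the most capable.
  const affordable = capable
    .filter((m) => m.costPer1kTokens * EXPECTED_TOKENS_K <= task.maxCostUsd)
    .sort((a, b) => a.costPer1kTokens - b.costPer1kTokens);
  return (affordable[0] ?? capable[capable.length - 1]).id;
}

console.log(route({ kind: "extract", maxCostUsd: 0.01 })); // "small-fast"
```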
- Secure strategic entry points (editor, repo, runtime, identity) and expand from there.
- Build a data flywheel with consent, quality controls, and measurable outcome gains.
- Orchestrate multi-model agents; swap models based on task, cost, and accuracy.
What to do next
- Audit AX: Can an agent onboard, act, and recover from errors without a human?
- Move docs to markdown-first with generated schemas, versioning, and an llms.txt index.
- Set pricing experiments around outcomes and remove onboarding blockers.
- Stand up governance: policy, audit, human-in-the-loop, and quality gates for agents.
- Ship reference MCP tools and measure objective performance relentlessly.