Launch Your Own AI Vibe Coding Platform in One Click
Deploy a full text-to-app vibe coding platform in one click with VibeSDK: isolated sandboxes, live previews, scale-ready deploys, and exports, with full control over prompts and costs.

Deploy your own AI vibe coding platform in one click with VibeSDK
Text-to-app is here, and it's practical. VibeSDK is an open-source platform that lets you run a full vibe coding experience end-to-end with a single deploy. It gives you model orchestration, isolated build sandboxes, live previews, deployment at scale, and export paths built in.
If you're building for internal teams or shipping this as a capability inside your product, VibeSDK gives you control over prompts, infrastructure, security, costs, and the UX. No black boxes. No glue code marathon.
What you get out of the box
- LLM integration to generate code, build apps, debug, and iterate via the Agents SDK.
- Isolated sandboxes per user session to safely run untrusted, AI-generated code (install, build, serve).
- Scale to thousands or millions of user deployments served on Cloudflare's global network.
- Observability and caching across multiple providers via AI Gateway, with cost and latency insights.
- Project templates (stored in R2) to accelerate common app patterns.
- One-click export to a user's Cloudflare account or GitHub repo.
Why build your own platform
Generic builders limit you. With your own stack, you can craft prompt logic for your domain, keep data private, and control the runtime. Internal teams can spin up landing pages, prototypes, and tools without waiting on engineering. SaaS products can ship in-product customization that actually fits their customers.
Step 0: Start fast with VibeSDK
Deploy with one click and you're up. Use the whole platform or pick the parts you need. Configure your preferred models, add your templates, and tune prompts for your use cases.
Step 1: Safe, isolated execution for untrusted code
AI will write apps that install packages, run build steps, and start servers. You do not want that running on shared hosts. Cloudflare Sandboxes give each user a containerized environment tied to their session, with file persistence and strong isolation.
- Create a sandbox: getSandbox(env.Sandbox, sandboxId)
- Write and run: writeFile(...), exec("npm install ..."), exec("node app.js")
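The session flow above can be sketched in TypeScript. The real getSandbox, writeFile, and exec come from Cloudflare's Sandbox SDK; the InMemorySandbox here is a stand-in so the orchestration logic is visible without live infrastructure.

```typescript
// One isolated sandbox per user session. The interface mirrors the calls
// named above; InMemorySandbox is a stub for illustration only.
interface Sandbox {
  writeFile(path: string, content: string): Promise<void>;
  exec(command: string): Promise<{ exitCode: number; stdout: string }>;
}

class InMemorySandbox implements Sandbox {
  files = new Map<string, string>();
  commands: string[] = [];
  async writeFile(path: string, content: string) {
    this.files.set(path, content);
  }
  async exec(command: string) {
    this.commands.push(command); // a real sandbox runs this in the container
    return { exitCode: 0, stdout: "" };
  }
}

// Write the generated app into the sandbox, then install and serve it there,
// so untrusted code never touches a shared host.
async function runSession(sandbox: Sandbox, files: Record<string, string>) {
  for (const [path, content] of Object.entries(files)) {
    await sandbox.writeFile(path, content);
  }
  await sandbox.exec("npm install");
  await sandbox.exec("node app.js");
}
```

Because each session gets its own container, a crash or malicious script in one user's build can't reach another user's files or processes.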
Step 2: Generate and write code
VibeSDK orchestrates the workflow: generate project files, install dependencies (bun or npm), and boot a dev server. For a request like "build a React to-do app," it writes the app structure, components, and configs directly into the sandbox.
- callAIModel(...) produces file list and content
- writeFile per asset, with real-time status updates for the user
Use R2-backed templates to skip boilerplate and cut token spend. Expand the template library as your catalog grows.
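The generate-and-write loop can be sketched as follows. The callAIModel stub and its response shape are assumptions standing in for VibeSDK's orchestration; the point is the flow: one model call yields a file manifest, each file is written into the sandbox, and the user sees per-file progress.

```typescript
// Shape of a generated asset (an assumption for this sketch).
type GeneratedFile = { path: string; content: string };

// Stub: a real call goes through AI Gateway to your configured model.
async function callAIModel(prompt: string): Promise<GeneratedFile[]> {
  return [
    { path: "package.json", content: '{ "name": "todo-app" }' },
    { path: "src/App.jsx", content: "export default () => <div>todo</div>;" },
  ];
}

// Fetch the manifest, write each file into the sandbox, and emit
// real-time status updates for the user along the way.
async function generateProject(
  prompt: string,
  writeFile: (path: string, content: string) => Promise<void>,
  onStatus: (msg: string) => void,
) {
  const files = await callAIModel(prompt);
  for (const file of files) {
    onStatus(`writing ${file.path}`);
    await writeFile(file.path, file.content);
  }
  onStatus(`wrote ${files.length} files`);
}
```

Starting from an R2-backed template means the model only generates the files that differ from boilerplate, which shrinks both latency and token spend.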
Step 3: Live preview with a public URL
Start the dev server inside the sandbox and expose it on a preview subdomain so the user can see the app instantly.
- startProcess("bun run dev", { cwd: instanceId })
- exposePort(3000, { hostname: "preview.example.com" })
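Sketched in TypeScript, the preview step looks like this. The startProcess and exposePort calls mirror the ones named above; the stub class just records them so the flow runs standalone, and the hostname is the same illustrative example.

```typescript
type Preview = { url: string };

// Stand-in for the sandbox's process and port APIs (illustration only).
class PreviewSandbox {
  processes: string[] = [];
  async startProcess(cmd: string, opts: { cwd: string }) {
    this.processes.push(`${cmd} @ ${opts.cwd}`);
  }
  async exposePort(port: number, opts: { hostname: string }): Promise<Preview> {
    // The real SDK wires the container port to a public hostname.
    return { url: `https://${opts.hostname}` };
  }
}

// Boot the dev server in the user's sandbox, then hand back a public URL.
async function startPreview(sb: PreviewSandbox, instanceId: string) {
  await sb.startProcess("bun run dev", { cwd: instanceId });
  return sb.exposePort(3000, { hostname: "preview.example.com" });
}
```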
Step 4: Test, log, fix, repeat
Stream console output, build logs, and errors back into the loop. Feed failures to the model to auto-propose fixes. Show the whole flow to the user: edits, installs, retries, and resolutions.
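The log-driven fix loop can be sketched as a bounded retry. Both callbacks here are stand-ins: in VibeSDK the run happens inside the sandbox and the fix proposal goes through your model via AI Gateway.

```typescript
type RunResult = { ok: boolean; log: string };

// Run the build; on failure, feed the error log back to the model for a
// proposed fix, up to maxAttempts times.
async function fixLoop(
  run: () => Promise<RunResult>,
  proposeFix: (log: string) => Promise<void>,
  maxAttempts = 3,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await run();
    if (result.ok) return true;   // build/serve succeeded
    await proposeFix(result.log); // model patches files from the error log
  }
  return false; // surface the failure to the user after maxAttempts
}
```

Capping attempts matters: it bounds inference spend on a stubborn bug and gives you a clean point to hand control back to the user.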
Deploy at scale: from Sandbox to Region Earth
When the app is ready, package it in the dev sandbox, hand it to a dedicated "deployment sandbox," and run wrangler deploy. Use Workers for Platforms to publish each app into a shared dispatch namespace with per-tenant isolation and unique URLs (e.g., my-app.vibe-build.example.com).
- Zip in dev sandbox, transfer to deploy sandbox, unzip
- wrangler deploy --dispatch-namespace your-namespace
- Each app gets its own Worker instance and public URL
Learn more about Workers for Platforms and Wrangler.
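Per-tenant routing on the dispatch side can be sketched like this. The get(name).fetch(request) shape follows the Workers for Platforms dispatch-namespace binding; the stub namespace lets the routing logic run standalone, and the hostname scheme matches the example above.

```typescript
// Minimal view of a dispatch namespace binding (stubbed for illustration).
interface DispatchNamespace {
  get(name: string): { fetch(req: Request): Promise<Response> };
}

// "my-app.vibe-build.example.com" -> "my-app"
function appNameFromHost(hostname: string): string {
  return hostname.split(".")[0];
}

// Route each request to the tenant's own Worker by subdomain.
async function route(ns: DispatchNamespace, req: Request): Promise<Response> {
  const name = appNameFromHost(new URL(req.url).hostname);
  return ns.get(name).fetch(req); // each user app is its own Worker
}
```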
Exportable applications
Users can export their project to their own Cloudflare account or their GitHub repository. That means they can keep building outside your platform without friction.
Observability, caching, and multi-model support
Different models shine at different tasks. VibeSDK routes requests through AI Gateway and, by default, uses Google's Gemini family (gemini-2.5-pro, gemini-2.5-flash-lite, gemini-2.5-flash) for planning, codegen, and debugging. You can mix in providers like OpenAI and Anthropic behind one endpoint.
- Cache common prompts to reduce inference spend (e.g., "build a to-do app").
- Unified metrics: requests, tokens, response times, and cost tracking.
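The caching win can be sketched with a lookup in front of the model call. AI Gateway handles this for you; the in-memory map below just illustrates the idea: an identical prompt hits the cache instead of paying for inference again.

```typescript
// Check the cache before calling the model; store the response on a miss.
// The Map is a stand-in for AI Gateway's managed cache.
async function cachedGenerate(
  prompt: string,
  cache: Map<string, string>,
  callModel: (p: string) => Promise<string>,
): Promise<{ text: string; cached: boolean }> {
  const hit = cache.get(prompt);
  if (hit !== undefined) return { text: hit, cached: true };
  const text = await callModel(prompt);
  cache.set(prompt, text);
  return { text, cached: false };
}
```

For a popular prompt like "build a to-do app", the second and every later request costs zero inference.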
See AI Gateway for routing, caching, and analytics.
Who this is for
- Platform and DX teams enabling non-dev stakeholders.
- SaaS vendors adding text-to-app customization inside their product.
- Agencies/prototypers who need to ship validated concepts quickly.
Get started
- Deploy VibeSDK with one click.
- Set model keys and routing via AI Gateway.
- Add your templates to R2 to standardize outputs.
- Ship previews, then deploy to Workers for Platforms.
If you're exploring gen-code tools to complement your stack, here's a curated list: AI tools for generative code.