Mistral launches AI Studio: a production-grade workspace for building and running AI at scale
Mistral just introduced AI Studio, a web-based platform built for teams that need to ship AI into production, not just test prompts. It unifies building, evaluation, deployment, and governance across Mistral's proprietary and open-weight models, all with E.U.-native hosting options. It succeeds "La Plateforme," which is being sunset.
The timing is clear: while others make studios friendlier for hobby projects, Mistral is targeting enterprise workflows, observability, and controllable operations. If your stakeholders ask for measurable impact, guardrails, and versioned releases, this is built for you.
What you get
- Unified workspace: build agents and apps, evaluate with judges, deploy with promotion gates, and monitor live traffic.
- Observability: Explorer, metrics, dashboards, lineage, and dataset creation from production usage.
- Agent Runtime: stateful, fault-tolerant execution (on Temporal) with telemetry for long-running and retried tasks.
- AI Registry: system of record for models, datasets, judges, tools, and workflows with versioning and access control.
- First-class RAG workflows: ingestion, retrieval, and augmentation built in (RAGWorkflow, RetrievalWorkflow, IngestionWorkflow).
- Integrated tools: code execution, image generation, web search, and premium news sources.
- Flexible deployment: hosted, cloud partner, self-hosted open weights, or enterprise-supported self-deploy.
Model catalog at a glance
The Studio includes a versioned catalog across text, code, multimodal, speech, and OCR. Even for open-weight models, Studio access runs through Mistral's inference and billing.
- Proprietary text: Mistral Large, Medium, Small, Tiny.
- Open-weight text: Open Mistral 7B, Open Mixtral 8×7B, Open Mixtral 8×22B, Ministral 8B.
- Code: Codestral 2501 (open).
- Multimodal: Pixtral 12B, Pixtral Large.
- Speech/audio: Voxtral Small, Voxtral Mini, Voxtral Mini Transcribe 2507.
- OCR: Mistral OCR 2503.
- Additional entries: Magistral Medium/Small and Devstral variants appear in the catalog.
Closing the prototype-to-production gap
Most teams can build a demo. Few can run AI like software. AI Studio connects creation, observability, and governance so you can track changes, explain regressions, and ship with confidence.
- Observability: filter traffic, inspect runs, score outputs with judges, and auto-build datasets from real usage. Lineage ties results to exact prompts, models, and datasets.
- Agent Runtime: each agent runs in a stateful, fault-tolerant system built on Temporal, with execution graphs and full telemetry feeding Observability.
- AI Registry: versioning, access control, audit trails, and promotion gates across models, tools, datasets, judges, and workflows.
Result: measurable improvements instead of guesswork.
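The promotion-gate idea above can be sketched in a few lines. This is an illustrative model, not Studio's API: `PromotionGate` and its fields are hypothetical names, and the judge scores are made-up data.

```python
from dataclasses import dataclass

@dataclass
class PromotionGate:
    """Promote a candidate model version only if judged quality clears
    an absolute floor and does not regress against the baseline."""
    metric: str
    min_score: float       # absolute floor the candidate must clear
    max_regression: float  # allowed drop relative to the baseline

    def allows(self, baseline: float, candidate: float) -> bool:
        return candidate >= self.min_score and (baseline - candidate) <= self.max_regression

def mean(scores: list[float]) -> float:
    return sum(scores) / len(scores)

# Judge scores collected from an evaluation campaign (illustrative data)
baseline_scores = [0.82, 0.79, 0.85]
candidate_scores = [0.88, 0.84, 0.86]

gate = PromotionGate(metric="helpfulness", min_score=0.8, max_regression=0.02)
promote = gate.allows(mean(baseline_scores), mean(candidate_scores))
print(promote)  # True: candidate clears the floor and does not regress
```

The point of a gate like this is that "ship it" becomes a checked decision against versioned evaluations rather than a judgment call.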
RAG without the hype
RAG is baked into the runtime through named workflows and components. Ingest documents, index them, retrieve context, and ground model responses in your data. Because it's integrated with observability and the registry, you can audit every step and measure impact.
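In spirit, the retrieve-then-ground step looks like the sketch below. The retriever here is a toy bag-of-words overlap, standing in for Studio's RetrievalWorkflow; function names and the corpus are illustrative.

```python
def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    ranked = sorted(docs, key=lambda d: len(tokenize(d) & tokenize(query)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Augment the user question with retrieved context so the model
    answers from your data instead of from memory."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Refunds are processed within 14 days of the return request.",
    "Shipping to EU countries takes 3 to 5 business days.",
    "Gift cards cannot be refunded or exchanged.",
]
print(build_grounded_prompt("How long do refunds take?", corpus))
```

Because Studio wires this pipeline into observability and the registry, each retrieved document and prompt version is traceable when you audit an answer.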
Interface and developer experience
A left-hand nav with Create, Observe, and Improve guides you from prompt testing to agents to fine-tuning. The Playground lets you pick a model, tune parameters (temperature, max tokens), and enable tools in one place. You can try it free, but you'll need a phone number to receive an access code.
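Under the hood, the Playground knobs map onto an ordinary chat-completion request. This sketch only builds the payload; the field and model names follow Mistral's public API conventions but should be verified against the current docs.

```python
import json

def chat_request(model: str, prompt: str, temperature: float = 0.7,
                 max_tokens: int = 512) -> dict:
    """Assemble a chat-completion payload mirroring the Playground's
    model picker and sampling parameters."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = chat_request("mistral-small-latest", "Summarize our Q3 metrics.",
                       temperature=0.2)
print(json.dumps(payload, indent=2))
```

Lower temperatures (here 0.2) make output more deterministic, which is usually what you want once a prompt graduates from the Playground into a pipeline.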
Built-in tools and function calling
- Code Interpreter: execute Python for analysis, charts, and reasoning.
- Image Generation: create visuals from prompts.
- Web Search: pull current information in real time.
- Premium News: access verified sources for fact-checked context.
Combine these with function calling and your agent can search, fetch financials, run Python, and return a chart in one workflow.
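That combined flow reduces to a dispatch loop: the model emits a tool call, your code runs the matching function and feeds the result back. A minimal dispatcher, with tool names and the call format as illustrative assumptions:

```python
import json

def web_search(query: str) -> str:
    return f"top result for {query!r}"  # stand-in for the real search tool

def run_python(code: str) -> str:
    return str(eval(code))  # toy code interpreter; never eval untrusted input

TOOLS = {"web_search": web_search, "run_python": run_python}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching Python function."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Pretend the model asked for a calculation mid-conversation:
result = dispatch({"name": "run_python", "arguments": '{"code": "2 + 2"}'})
print(result)  # "4"
```

In a real agent this loop repeats: the tool result is appended to the conversation and the model decides whether to call another tool or answer.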
Deployment choices
- Hosted in Studio: pay-as-you-go APIs with workspace management.
- Cloud partner: run through major cloud providers.
- Self-deploy open weights: run on your infra with vLLM, TensorRT-LLM, llama.cpp, or Ollama.
- Enterprise-supported self-deploy: support for both open and proprietary models with security and compliance help.
Safety, guardrails, and moderation
Guardrails can be applied at model and API layers, with a system prompt focused on care, respect, and truth. The Mistral Moderation model (based on Ministral 8B) flags sexual content, hate, violence, self-harm, and PII. Teams can add self-reflection prompts to classify outputs against company policies like fraud or physical harm.
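A self-reflection check is simply a second model pass over the first model's output. The sketch below shows the prompt shape and a verdict parser; the category list mirrors the ones above, but the exact format is an assumption, not Mistral's moderation schema.

```python
POLICY_CATEGORIES = ["sexual", "hate_violence", "self_harm", "pii", "fraud", "none"]

def self_reflection_prompt(output: str) -> str:
    """Ask a model to classify an assistant response against
    company policy categories."""
    cats = ", ".join(POLICY_CATEGORIES)
    return (
        "Classify the following assistant response into exactly one "
        f"category from: {cats}.\nRespond with the category name only.\n\n"
        f"Response:\n{output}"
    )

def parse_verdict(model_reply: str) -> str:
    """Normalize the classifier's reply; fall back to 'none' if malformed."""
    verdict = model_reply.strip().lower()
    return verdict if verdict in POLICY_CATEGORIES else "none"

# In production the prompt goes to a moderation model; here we parse a mock reply.
flagged = parse_verdict("  PII\n")
print(flagged)  # "pii"
```

Keeping the category list in one place makes it easy to extend with company-specific policies like fraud or physical harm.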
Who should care
- Platform and infra teams building an internal AI stack with E.U. residency needs.
- Applied ML and data teams moving from notebooks to governed services.
- App teams shipping agents and RAG-backed tools to business users.
- Security and compliance teams that need audit trails and access control.
Practical next steps
- Request access to the private beta on Mistral's site and review the docs.
- Stand up a small agent in the Playground, then move it into the Agent Runtime.
- Turn on Observability. Define judges and start scoring outputs.
- Build datasets from real traffic and run Campaigns to compare model versions.
- Prototype a RAGWorkflow with a narrow corpus and measure lift.
- Pick a deployment path early and plan security reviews around the AI Registry.
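The judge-and-dataset steps above amount to filtering logged traffic into scored training examples. A sketch of that shape, where the record fields and the toy judge are illustrative rather than the Observability schema:

```python
from typing import Callable

def build_dataset(runs: list[dict], judge: Callable[[str, str], float],
                  min_score: float = 0.7) -> list[dict]:
    """Score logged production runs with a judge and keep passing
    examples as supervised (prompt, completion) pairs."""
    dataset = []
    for run in runs:
        score = judge(run["prompt"], run["completion"])
        if score >= min_score:
            dataset.append({"prompt": run["prompt"],
                            "completion": run["completion"],
                            "score": score})
    return dataset

def length_judge(prompt: str, completion: str) -> float:
    # Toy judge: reward non-empty, reasonably sized answers.
    return min(len(completion) / 100, 1.0)

traffic = [
    {"prompt": "Explain VAT rules", "completion": "x" * 120},
    {"prompt": "hi", "completion": ""},
]
dataset = build_dataset(traffic, length_judge)
print(len(dataset))  # 1
```

Swapping the toy judge for an LLM judge gives you the campaign loop described earlier: score real traffic, keep the good examples, and compare model versions on the result.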
Availability
AI Studio enters private beta on October 24, 2025. Enterprises can sign up on Mistral's site to try the platform, explore the model catalog, and test observability, runtime, and governance before general release.
Need structured upskilling for your team?
If you're rolling out production LLM systems and want focused training for engineers, see our AI certification for coding.