State AI Sandboxes Enable Safe Experimentation and Real-World Impact
States are creating AI sandboxes so staff can test tools with public data, isolated from core systems. Early wins include better call centers, faster translations and lower costs.

AI Sandboxes Let State Employees Experiment Safely
State IT leaders are giving employees a safe place to test artificial intelligence without putting systems or resident data at risk. California launched its generative AI sandbox in 2023 and earned national recognition in 2025 for the program's practical impact and governance.
Led by CIO Liana Bailey-Crimmins and CTO Jonathan Porat, California's Department of Technology built a model where departments can trial real use cases - from language translation to building inspections - in secure, closed environments. Each sandbox is tailored to a department's needs and a vendor's proof of concept, with 20 terabytes of storage for public, nonsensitive test data.
What an AI Sandbox Does for Government
An AI sandbox is an isolated, cloud-based environment for experimentation. It keeps testing separate from production systems, enforces privacy and security controls, and avoids using state data to train third-party models.
California's approach includes subject matter experts in security, data, infrastructure and architecture, plus user research and testing support from the Office of Data and Innovation. Departments bring the use case; vendors bring the solution; teams validate value and risk before anything moves forward.
California Department of Technology and New Jersey's Office of Innovation have adopted similar principles: public data only, strict safeguards, and no connections to core systems.
How Employees Are Using AI Sandboxes
New Jersey stood up its sandbox in 2024, enhanced it in early 2025, and surveyed usage. Top use cases from state employees:
- Drafting and editing: Clearer emails, memos and reports
- Summarizing documents: Quick comprehension of long or complex materials
- Proofreading: Polished, error-free communication
- Brainstorming and idea generation: Faster content and project development
- Technical support: Help with coding, data analysis and presentation outlines
In California, CDT administers the sandbox, while the Office of Data and Innovation conducts user research on proofs of concept. Departments staff the pilots and work directly with vendors to evaluate fit and outcomes.
Projects Moving from Pilot to Results
California's sandbox is supporting:
- Caltrans Traffic Management Insights and a Vulnerable Road User Safety Assessment
- Department of Tax and Fee Administration call center productivity
- Health and Human Services initiatives for language access and healthcare facility inspections
New Jersey's teams report measurable wins:
- Redesigned call center menus: Better self-service for property tax questions, leading to a 50% increase in calls resolved without an agent
- Analyzed public feedback at scale: Cleared a backlog of 430,000 ratings, 70,000 comments and 22,000 emails to surface common issues
- Plain-language communications: Rewrote Department of Labor emails, improving clarity and producing 35% faster responses
- Language access: Created a Spanish glossary for unemployment insurance; AI-assisted translations with human review sped up production and improved comprehension
How States Build AI Sandboxes
Both California and New Jersey use cloud environments configured for public, nonsensitive data. These sandboxes are separate from state systems, enabling free experimentation while protecting privacy and controlling costs.
Key safeguards include filters against harmful content, protections against attempts to bypass controls, and a strict policy that state data does not train external models. New Jersey credits prior investment in internal capability for enabling statewide access within weeks.
Security and Oversight
California provides isolated environments configured to state security standards and assigns a senior cybersecurity administrator to each sandbox. Access requires adherence to the state's cloud security guidance, and monitoring is coordinated with department and vendor security teams.
New Jersey embeds training directly in the tool. The course - now used by more than 25 states and localities - reinforces safe, privacy-preserving and bias-aware use. In its first year, about 20% of employees used the AI assistant, logging over 500,000 prompts and an 80%+ satisfaction rate. Cost averaged about $1 per user per month, compared with ~$20 for commercial licenses, saving millions.
Practical Steps for Your Agency
- Define simple, high-volume use cases (summaries, drafts, translations) and restrict to public data
- Stand up an isolated cloud environment with no connections to production systems
- Set clear guardrails: model access, logging, filters, human review and vendor obligations
- Pair product teams with security, data and legal; include user research to validate usefulness
- Provide short, mandatory training focused on privacy, equity, safety and policy compliance
- Pilot with measurable targets: time saved, resolution rates, response times and cost per user
- Keep humans in the loop for sensitive decisions, translations and public-facing outputs
If your team needs structured upskilling for safe, effective AI use in government roles, explore curated learning paths by job function at Complete AI Training.
Key Takeaways
- AI sandboxes let public-sector teams experiment safely with real use cases and real (public) data
- Isolated cloud environments, strict filters and no training of third-party models on state data are essential
- Early wins include better call center self-service, faster resident communications and accelerated language access
- Training and governance drive adoption, quality and cost savings at scale