Utah's AI regulatory sandbox offers a model for states navigating healthcare technology oversight

Utah's 2024 AI Policy Act lets companies test healthcare AI in real clinical settings under state supervision. A pilot program already allows AI-authorized prescription refills at pharmacies within minutes.

Published on: Mar 21, 2026

Utah's Regulatory Sandbox Offers a Path for States to Safely Deploy Healthcare AI

Utah has demonstrated how states can regulate artificial intelligence in healthcare without blocking beneficial tools outright. The state's approach, using a regulatory sandbox to test AI systems under government supervision, allows clinicians to access promising technologies while minimizing patient risk.

In 2024, Utah's legislature passed the Artificial Intelligence Policy Act, which created the Office of Artificial Intelligence Policy and authorized it to run the sandbox. Companies can apply for temporary relief from certain state rules to test AI systems in real clinical settings.

A pilot program shows how this works in practice. Doctronic, a health technology platform, is using an AI system to help patients with chronic conditions renew prescriptions at participating pharmacies. Instead of waiting days for a physician's office to manually approve a refill (a delay that can cause patients to miss doses), patients scan a QR code at the pharmacy counter.

The system verifies the patient's identity and checks their medication history against a nationwide prescription database. If the request meets state safety criteria for one of roughly 190 low-risk medications, the system authorizes the refill within minutes.
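The checks described above amount to a gated decision rule: every condition must pass before a refill is authorized, and any failure routes the request back to a clinician. The following is a minimal sketch of that logic; all names (`RefillRequest`, `LOW_RISK_MEDICATIONS`, `authorize_refill`) are illustrative assumptions, not Doctronic's actual system or the state's criteria.

```python
from dataclasses import dataclass

# Illustrative stand-in for the state's list of roughly 190 low-risk medications.
LOW_RISK_MEDICATIONS = {"lisinopril", "metformin", "levothyroxine"}

@dataclass
class RefillRequest:
    patient_id: str
    medication: str
    identity_verified: bool       # e.g., confirmed at the pharmacy counter
    history_consistent: bool      # matches the nationwide prescription database

def authorize_refill(req: RefillRequest) -> bool:
    """Authorize only if identity, medication history, and drug class all pass."""
    if not req.identity_verified:
        return False
    if not req.history_consistent:
        return False
    return req.medication.lower() in LOW_RISK_MEDICATIONS
```

A request that fails any one gate is not denied outright; in the pilot's design as described, it would simply fall back to the usual manual physician review.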

Why Sandboxes Work Better Than Blanket Restrictions

Broad prohibitions block both bad and good AI. A state that bars clinicians from using AI to support treatment decisions can't distinguish between a poorly validated chatbot and a rigorously tested clinical decision support tool. Utah's sandbox creates a path for companies to prove what their tools can do under real conditions.

The sandbox also lets regulators learn before writing new rules. Experience reveals actual failure points rather than hypothetical ones. In the Doctronic case, the state can measure refill timeliness, patient access, safety outcomes, and costs to determine exactly where the tool improves care and where safeguards are needed.

Outside a supervised pilot, a pharmacist who relies on an AI-generated refill authorization could risk violating scope-of-practice rules, because most pharmacy laws assume a physician personally approves every prescription renewal. This ambiguity discourages clinicians from using new tools even when they appear safe.

Utah's regulatory mitigation framework addresses this directly. The state grants a safe harbor, committing not to pursue enforcement actions against pharmacists or physicians who rely on AI authorizations within the pilot's approved parameters. Companies like Doctronic must carry malpractice insurance that explicitly covers the AI's clinical outputs, closing the accountability loop.

Broader Benefits Beyond AI

Running a sandbox stress-tests the regulatory framework itself. The Doctronic pilot reveals not just how well the AI performs, but where existing prescription renewal processes are slow or unnecessarily burdensome. Testing an alternative workflow under supervision shows which steps meaningfully protect patient safety and which simply add delay.

That insight improves healthcare processes more broadly and identifies where regulation can better support efficient, high-quality care.

The Choice Ahead for States

Responsible AI governance requires creating a process to evaluate tools, not prohibiting them outright. States that build systems for supervised experimentation will be better positioned to protect patients while improving care. Those relying on restrictions alone will struggle to do either.

Healthcare professionals interested in how AI regulation develops should monitor state-level policy. For those involved in healthcare policy or administration, understanding Utah's approach provides a practical framework for balancing innovation with safety.

