Trump's Medicare AI Prior Authorization Pilot in Six States Sparks Bipartisan Alarm Over Care Denials
Medicare's WISeR pilot tests AI prior authorization to cut low-value care in AZ, OH, OK, NJ, TX, WA through 2031. Expect a savings push, promises of human review, and fights over oversight.

Medicare's WISeR AI Pilot: What Government and Insurance Leaders Need to Know Now
The Trump administration will launch a multi-year pilot to test whether an AI-driven prior authorization model can cut Medicare spending by reducing "low-value" care. The program, called WISeR (Wasteful and Inappropriate Service Reduction), starts Jan. 1, 2026, and runs through 2031 in Arizona, Ohio, Oklahoma, New Jersey, Texas, and Washington.
This is a major shift for traditional Medicare, which has largely avoided prior authorization. Private insurers, especially in Medicare Advantage, use it widely. Expect policy, operational, and political fallout.
Scope and Targets
- Applies to: An initial set of services including skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy. Additional services could be added later.
- Exclusions: Inpatient-only services, emergency services, and services that would pose substantial risk to patients if delayed are out of scope for the AI model's assessment.
- States: AZ, OH, OK, NJ, TX, WA.
How the Model Will Be Used
- AI-assisted prior authorization: The algorithm reviews prior authorization requests for the listed services and recommends approval or denial.
- Human-in-the-loop: CMS says a "qualified human clinician" will review cases before any denial is issued (a minimal sketch of this gate follows this list).
- Vendor incentives: Vendors can share in savings. CMS says payments won't be tied to denial rates and that guardrails will protect medically appropriate care.
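CMS has published the policy, not the plumbing, so any code here is conjecture. Still, the denial gate itself is simple to express. A minimal sketch, with hypothetical names throughout, of routing logic consistent with the rules above: excluded services bypass the model entirely, and no denial is final without a clinician.

```python
from dataclasses import dataclass
from enum import Enum

class Recommendation(Enum):
    APPROVE = "approve"
    DENY = "deny"

@dataclass
class PriorAuthRequest:
    service_code: str
    is_emergency: bool
    is_inpatient_only: bool
    delay_poses_substantial_risk: bool

def route(req: PriorAuthRequest, model_rec: Recommendation) -> str:
    # Exclusions are out of scope for the model's assessment entirely.
    if req.is_emergency or req.is_inpatient_only or req.delay_poses_substantial_risk:
        return "standard_medicare_process"
    if model_rec is Recommendation.APPROVE:
        return "approved"  # affirmations can be automated
    # The stated guardrail: a model "deny" is only a recommendation.
    return "queue_for_qualified_clinician"
```

The enforcement question critics raise is whether that last queue gets minutes of clinician attention or milliseconds.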
Critics argue shared savings still create a financial incentive to reduce care. "Shared savings arrangements mean that vendors financially benefit when less care is delivered," said Jennifer Brackeen of the Washington State Hospital Association.
The Tension You'll Need to Manage
- Cost control vs. access: Prior authorization can reduce waste and fraud, but it can also delay or deny clinically necessary care.
- Signal vs. noise: AI can speed decisions, yet "meaningful human review" is ill-defined. Some reports have shown near-instant "reviews" in private plans, raising enforcement questions.
- Trust vs. opacity: Contractors will assess outcomes, which invites conflict-of-interest concerns and demands independent oversight.
"The plan is not fully fleshed out," said policy researcher Vinay Rathi, calling the measures "messy and subjective." Rep. Suzan DelBene called the approach "hugely concerning," while Rep. Greg Murphy warned about overreach into physician judgment.
Policy and Legal Undercurrents
- Oversight: CMS promises strict monitoring to ensure AI supports, rather than replaces, clinical decision-making and follows Medicare rules.
- Denial patterns: Researchers note that algorithms are commonly tuned to scrutinize higher-cost services more aggressively, which can shift risk to patients and providers (a toy illustration follows this list).
- Congressional pressure: A bipartisan House measure seeks to block funding for the pilot in the FY 2026 HHS budget.
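The denial-patterns point is easier to see in code. A deliberately toy sketch of cost-tuned scrutiny; every threshold here is invented, and nothing is public about the actual model's features:

```python
# Illustrative only: a cost-weighted triage rule of the kind researchers
# describe, not CMS's or any vendor's actual model. Thresholds are invented.
def scrutiny_level(expected_cost_usd: float) -> str:
    if expected_cost_usd >= 10_000:
        return "full_clinical_review"  # high-cost: most aggressive scrutiny
    if expected_cost_usd >= 1_000:
        return "automated_screen"
    return "auto_affirm"               # low-cost requests pass with little review
```

Tune a rule like this for savings and scrutiny concentrates on expensive services, which is exactly where delays shift the most financial and clinical risk onto patients and providers.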
Operational Risks for Agencies, Plans, and Providers
- Definition of "meaningful human review": Set clear thresholds for time spent, evidence evaluated, and accountability for final decisions.
- Appeals management: Long timelines can become de facto denials. Track appeal volume, overturn rates, and time-to-resolution (see the metrics sketch after this list).
- Patient safety exceptions: Ensure automatic bypass or fast-track logic where delays could cause harm.
- Bias and drift: Monitor model performance by demographic, geography, and provider type. Retrain and recalibrate on schedule.
- Fraud controls: Balance fraud prevention with clinical nuance to avoid blanket denials of legitimate care.
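CMS has not specified any reporting tooling, so what follows is a sketch under an assumed record shape: one way a plan or agency might compute the appeal metrics called out above.

```python
from dataclasses import dataclass
from datetime import date
from statistics import median
from typing import Optional

@dataclass
class Appeal:
    filed: date
    resolved: Optional[date]  # None = still pending
    overturned: bool          # was the denial reversed on appeal?

def appeal_metrics(appeals: list[Appeal]) -> dict:
    resolved = [a for a in appeals if a.resolved is not None]
    return {
        "volume": len(appeals),
        "pending": len(appeals) - len(resolved),
        # Overturn rate: share of resolved appeals where the denial was reversed.
        "overturn_rate": (sum(a.overturned for a in resolved) / len(resolved))
                         if resolved else None,
        # Median days from filing to resolution; long tails are de facto denials.
        "median_days_to_resolution": median((a.resolved - a.filed).days
                                            for a in resolved)
                                     if resolved else None,
    }
```

A rising overturn rate is the clearest signal that denials were inappropriate in the first place; trending it alongside time-to-resolution catches the de facto denial problem.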
What Leaders Should Do Now
- Codify guardrails: Publish clinical policies, exception criteria, and escalation paths. Make them auditable.
- Define human review standards: Minimum review time, required documentation, evidence sources, and sign-off responsibility.
- Build audit trails: Log model inputs, outputs, reviewer notes, timestamps, and final decisions for every case (one possible record shape follows this list).
- Measure the right outcomes: Savings, yes, but also denial appropriateness, overturn rates, avoidable harm, readmissions, and patient-reported delays.
- Provider communication: Plain-language decisions with rationale, citations, and clear appeal steps. Standardize templates.
- Model governance: Independent validation, bias testing, version control, and change management before each model update.
- Contracting discipline: Avoid incentives that implicitly reward higher denial volume. Tie payment to validated appropriateness and patient safety metrics.
- State coordination: Align with state utilization review and prompt-pay laws to reduce conflicts.
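As one concrete shape for the audit-trail item above (field names are assumptions, not a CMS schema), each case decision could serialize to an entry in an append-only log:

```python
import json
from datetime import datetime, timezone

def audit_record(case_id: str, model_version: str, inputs: dict,
                 model_output: str, reviewer_id: str,
                 reviewer_notes: str, final_decision: str) -> str:
    """Serialize one immutable audit entry; append it to a write-once log."""
    return json.dumps({
        "case_id": case_id,
        "model_version": model_version,    # supports version control / change management
        "inputs": inputs,                  # what the model saw
        "model_output": model_output,      # what it recommended
        "reviewer_id": reviewer_id,        # accountability for the final call
        "reviewer_notes": reviewer_notes,  # evidence of meaningful review
        "final_decision": final_decision,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }, sort_keys=True)
```

Tie each entry to the model version that produced it and the same log supports bias testing, drift analysis, and change management before and after each model update.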
Key Unknowns
- Exact model features, thresholds, and training data sources.
- How CMS will judge "medically appropriate" denials and what triggers corrective action.
- The scope of services added later, and how quickly the list expands.
Context You'll Hear About
- Physicians report increasing denials linked to automation. See the AMA's prior authorization survey for trends and patient impact: AMA Prior Authorization Survey.
- Investigations have questioned whether "human review" is meaningful in some insurer workflows: ProPublica reporting on automated reviews.
Bottom Line
WISeR tests whether AI-assisted prior authorization can cut waste without harming patients. The policy bet is that guardrails, human review, and oversight will be enough.
For agencies and plans, the risk isn't the algorithm; it's weak governance. Set clear rules, prove "meaningful" review, and make appeals fast and fair. For providers, document clinical necessity tightly and track denials. For both, transparency will decide trust.