AI-Driven Prior Authorization Is Coming to Medicare: What Insurance Teams Need to Do Now
The federal government will pilot AI-supported prior authorization in traditional Medicare across six states: Arizona, Ohio, Oklahoma, New Jersey, Texas, and Washington. The program, called WISeR (Wasteful and Inappropriate Service Reduction), starts Jan. 1, 2026, and runs through 2031.
Expect expanded prior authorization on select services that Medicare labels prone to "fraud, waste, and abuse," including skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy. Emergency care, inpatient-only services, and cases where delays could pose major risk are excluded.
What's changing
Medicare, which has historically used prior authorization sparingly, will test AI to flag low-value care and reduce spending. CMS says a qualified human clinician will review denials and that vendors cannot be paid based on denial rates. Oversight and safeguards are promised, though details on measurement and accountability remain thin.
The move lands as private insurers pledge to reduce prior authorization burden in response to delays and member distrust. The tension is plain: the public is calling for prior authorization to be fixed even as Medicare expands it through AI.
Why it matters
Public opinion runs against prior authorization, and physician groups warn that AI could increase denials and patient harm. Some researchers question whether "meaningful human review" is truly happening across the industry, and whether shared-savings structures nudge systems to block costly care.
For insurers and TPAs, this pilot sets a template regulators and plaintiffs will study. Decisions, documentation, and turnaround times will be under a microscope.
Operational implications for insurers and TPAs
- Expect higher request volumes and appeal activity in the six pilot states. Staff appropriately.
- Publish clear clinical criteria for the targeted services. Keep medical necessity policies tight and accessible.
- Guarantee clinician sign-off on all denials. Log identity, timestamps, and rationale for auditability.
- Set strict SLAs for decisions; fast-track urgent cases consistent with CMS exclusions.
- Simplify appeals: send plain-language denial letters with specific, evidence-based reasons and clear next steps.
AI governance essentials
- Document model purpose, training data sources, update cadence, and known limitations.
- Define "meaningful human review" (minimum review time, required evidence considered, peer-to-peer availability).
- Monitor safety signals: denial overturn rates, adverse events during pending status, provider complaints.
- Align incentives to accuracy and appropriateness, not raw cost avoidance or denial volume.
- Create escalation and shutdown rules if metrics indicate patient risk or systematic error.
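The escalation and shutdown rules above can be operationalized as a simple threshold monitor over rolling review-window metrics. The metric names and threshold values below are illustrative assumptions, not CMS requirements; real thresholds should come from your oversight committee.

```python
from dataclasses import dataclass

# Illustrative thresholds -- assumptions, not CMS-mandated values.
OVERTURN_RATE_LIMIT = 0.25   # appeals overturned / appeals decided
ADVERSE_EVENT_LIMIT = 0.01   # adverse events per pending authorization
COMPLAINT_SPIKE = 2.0        # complaints vs. trailing baseline ratio

@dataclass
class WindowMetrics:
    appeals_decided: int
    appeals_overturned: int
    pending_count: int
    adverse_events: int
    complaints: int
    baseline_complaints: float

def evaluate(m: WindowMetrics) -> str:
    """Return 'shutdown', 'escalate', or 'ok' for one reporting window."""
    overturn = m.appeals_overturned / m.appeals_decided if m.appeals_decided else 0.0
    adverse = m.adverse_events / m.pending_count if m.pending_count else 0.0
    spike = m.complaints / m.baseline_complaints if m.baseline_complaints else 0.0

    if adverse > ADVERSE_EVENT_LIMIT:
        return "shutdown"   # patient-safety signal: halt AI-assisted denials
    if overturn > OVERTURN_RATE_LIMIT or spike > COMPLAINT_SPIKE:
        return "escalate"   # route to the oversight committee for review
    return "ok"
```

The point of the sketch is the fail-safe ordering: safety signals trigger shutdown before accuracy signals trigger escalation, so a model is never left running on the strength of a good denial rate alone.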
Compliance and oversight
- Map state and federal requirements for prior authorization disclosures, timelines, and clinical criteria transparency.
- Maintain reproducible decision records (clinical evidence cited, guidelines, reviewer credentials).
- Update BAAs and vendor contracts to cover AI use, PHI handling, audit rights, and explainability standards.
- Prepare for audits with traceable model versions, decision logs, and sampling plans for quality review.
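A reproducible decision record can be as simple as a fixed schema plus a content hash, so auditors can verify that a logged decision was not altered after the fact. The field names here are illustrative assumptions about what such a record might carry, not a prescribed CMS schema.

```python
from dataclasses import dataclass, field, asdict
import hashlib
import json

# Illustrative audit-record schema; field names are assumptions.
@dataclass
class DecisionRecord:
    request_id: str
    model_version: str
    reviewer_npi: str                    # credentialed clinician who signed off
    outcome: str                         # "approved" or "denied"
    rationale: str
    evidence_cited: list = field(default_factory=list)

    def fingerprint(self) -> str:
        """Stable hash so auditors can verify the record was not altered."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Storing the fingerprint alongside (or separately from) the record gives you a cheap tamper-evidence check during audit sampling.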
Provider and member experience
- Offer clinician-to-clinician reviews within 24-48 hours for contested cases.
- Publish turnaround times by service; provide real-time status in portals.
- Create an "urgent pathway" aligned to CMS exclusions to prevent risky delays.
- Track member harm proxies: ER visits while authorization is pending, gaps in ongoing therapy, and care abandoned due to delay.
Key metrics to track
- Initial approval and denial rates by service and provider.
- Average time to initial decision and to final resolution.
- Appeal rates and overturn rates (internal and external review).
- Adverse event indicators while authorization is pending.
- Provider abrasion: call volume, portal messages, complaint trends.
- Net impact: cost avoidance versus downstream cost increases.
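Two of the metrics above (turnaround time and overturn rate) fall straight out of decision logs. A minimal sketch, assuming a hypothetical log format with `submitted`/`decided` timestamps and appeal flags; the field names are illustrative, not from any vendor system.

```python
from datetime import datetime
from statistics import mean

# Hypothetical decision-log records; field names are illustrative assumptions.
decisions = [
    {"service": "knee_arthroscopy", "submitted": "2026-01-05T09:00",
     "decided": "2026-01-07T15:00", "outcome": "denied",
     "appealed": True, "overturned": True},
    {"service": "skin_substitute", "submitted": "2026-01-06T10:00",
     "decided": "2026-01-06T16:00", "outcome": "approved",
     "appealed": False, "overturned": False},
]

def hours_to_decision(rec: dict) -> float:
    """Elapsed hours between submission and initial decision."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(rec["decided"], fmt) - datetime.strptime(rec["submitted"], fmt)
    return delta.total_seconds() / 3600

avg_turnaround = mean(hours_to_decision(r) for r in decisions)

appeals = [r for r in decisions if r["appealed"]]
overturn_rate = sum(r["overturned"] for r in appeals) / len(appeals) if appeals else 0.0
```

Computing these per service and per provider, rather than in aggregate, is what makes the dashboards in the action checklist useful for spotting outliers.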
Risk scenarios to plan for
- High-cost care auto-flagged and delayed; implement expedited clinical review and second-look protocols.
- Appeal windows exceed clinical need; adopt auto-approval thresholds if SLAs are missed.
- Opaque vendor models; require transparency on features, drift monitoring, and error analyses.
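The auto-approval-on-missed-SLA idea above amounts to a fail-open rule: if no decision lands inside the SLA window, the request approves automatically. A minimal sketch, with SLA windows that are assumptions for illustration rather than regulatory deadlines:

```python
from datetime import datetime, timedelta

# Illustrative SLA windows by urgency -- assumptions, not regulatory deadlines.
SLA = {"urgent": timedelta(hours=24), "standard": timedelta(days=7)}

def auto_approve_if_breached(submitted: datetime, urgency: str, now: datetime) -> bool:
    """Fail open: approve automatically once the SLA window lapses without
    a decision, so clinical need is never gated on a missed deadline."""
    return now - submitted > SLA[urgency]
```

The design choice worth debating internally is the fail-open default itself: it shifts the cost of system slowness onto the plan rather than the patient, which is exactly the posture regulators are likely to favor.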
What's still unclear
How vendors will be evaluated, how "meaningful human review" will be enforced, and how savings will be separated from inappropriate denials. Researchers warn of subjective measurement and contractor self-assessment, which could cloud results.
Members of Congress in both parties have raised concerns and proposed limiting funds. Expect ongoing scrutiny of clinical appropriateness, fairness, and patient outcomes.
Action checklist for the next 90 days
- Identify impacted lines of business in AZ, OH, OK, NJ, TX, WA; map affected CPT/HCPCS codes.
- Update criteria, provider portals, and member materials to reflect exclusions for emergencies and inpatient-only services.
- Stand up an AI oversight committee; define thresholds, escalation rules, and documentation standards.
- Train medical directors and reviewers; standardize rationale templates and peer-to-peer scripts.
- Run stress tests on turnaround times and appeals; simulate public and regulator inquiries.
- Engage top providers with clear SLAs and clinical criteria; establish rapid resolution lanes.
- Launch dashboards for the metrics listed above with weekly review cadences.
Context and resources
Physician groups report growing concern about AI's impact on prior authorization denials and patient harm. Public polling shows broad dissatisfaction with prior authorization processes.
Skill up your team
If your organization is deploying or overseeing AI in utilization management, train reviewers, analysts, and product teams on practical AI concepts, risk controls, and governance frameworks.