AI vs. AI: Patients are using bots to fight denials. Insurers need a smarter playbook.
Patients and physicians are now using AI to appeal denials, decode policies, and scrutinize bills. That means your claims, prior auth, and utilization management processes will increasingly meet AI-generated challenges: often fast, often persistent, and sometimes persuasive.
The result is a tug-of-war: insurer AI built for speed and scale vs. consumer AI built for pressure and precision. Without clear rules, better data practices, and human oversight, both sides risk bad decisions, higher costs, and damaged trust.
What patients are using right now
Tools like Sheer Health connect to insurance accounts, parse EOBs, and flag issues such as coding errors. Nonprofits like Counterforce Health use models to read denial letters, compare them to plan language and medical literature, then draft customized appeals.
Some consumers are turning to general chatbots for health answers and letter drafting. A KFF poll reported that a quarter of adults under 30 use AI chatbots monthly for health information, though most adults lack confidence in accuracy. That skepticism matters: appeals that sound convincing can still fail if the AI misstates clinical facts.
- Practical effect: More appeals, better formatted, submitted faster, plus closer scrutiny of plan language and medical necessity criteria.
- Operational risk: If your denials rely on opaque logic or outdated criteria, expect higher overturn rates and member friction.
The numbers insurers care about
- 41% of providers report claims are denied more than 10% of the time, up from 30% three years ago (Experian report).
- ACA marketplace plans denied nearly 1 in 5 in-network claims in 2023, up from 17% in 2021 (KFF data).
Public scrutiny is growing. Media coverage and lawsuits have highlighted algorithm-driven denials and the need for clearer clinical rationale. Industry groups argue AI improves efficiency and speed (fair points), but speed without explainability is a liability.
Regulatory heat is rising
States including Arizona, Maryland, Nebraska, and Texas have barred AI from being the sole decision-maker in prior authorization or medical necessity denials. More than a dozen states moved on AI-in-health rules this year, with bipartisan momentum.
Expect requirements for transparency, human-in-the-loop decisions, and bias minimization. As one health law expert put it, it's not "a satisfying outcome to just have two robots argue back and forth over whether a patient should access care."
One legislator-physician summed it up: AI may be an "active player" in care decisions now. That demands guardrails and individualized assessments backed by a human decision-maker.
For a policy snapshot, see state-level tracking and analysis from firms following health AI regulation.
Where AI helps, and where it breaks
- Works well: Reading plan contracts, crosswalking coverage terms to claims, flagging coding mismatches (a sketch follows below), summarizing clinical documentation against criteria.
- Breaks down: Nuanced clinical judgment, rare conditions, incomplete records, unclear benefit language, and anything requiring context that isn't in the data.
As one startup leader put it, AI connects the dots inside the contract. But complex cases still need humans to review. Patients also need human help: AI can draft a solid letter and still get a key clinical detail wrong.
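To make the "works well" column concrete, here is a minimal sketch of a contractual crosswalk: claim lines checked against a plan's covered-code table. All codes, coverage entries, and names are illustrative placeholders, not any vendor's actual logic.

```python
# Minimal sketch: crosswalk claim lines against a plan's covered-code
# table and flag mechanical mismatches. Codes and coverage entries are
# illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ClaimLine:
    procedure_code: str   # e.g., a CPT/HCPCS code
    billed_amount: float

# Hypothetical plan crosswalk: code -> coverage and prior-auth flags.
PLAN_COVERAGE = {
    "99213": {"covered": True, "prior_auth": False},  # office visit
    "97110": {"covered": True, "prior_auth": True},   # therapeutic exercise
}

def flag_mismatches(lines):
    """Return plain-language flags for codes that are missing from the
    crosswalk or carry a prior-auth requirement."""
    flags = []
    for line in lines:
        entry = PLAN_COVERAGE.get(line.procedure_code)
        if entry is None:
            flags.append(f"{line.procedure_code}: not in plan crosswalk; verify coding")
        elif entry["prior_auth"]:
            flags.append(f"{line.procedure_code}: prior auth required; confirm it is on file")
    return flags

if __name__ == "__main__":
    claim = [ClaimLine("99213", 150.0), ClaimLine("97799", 480.0)]
    for flag in flag_mismatches(claim):
        print(flag)
```

Real benefit language is rarely this clean, which is the point: mismatches like these are mechanical and catchable, while the clinical nuance in the "breaks down" column is not.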
Operational playbook for insurance teams
- Make a human the final decision-maker. Document the checkpoint and the authority. Ensure appeals show that human review occurred.
- Expose clear clinical rationale. For each denial, log the guideline, evidence, and decision path. If an AI assisted, say how.
- Tighten prior auth criteria. Keep policies current, specific, and public. Vague language is an appeals magnet.
- Build an audit trail. Version your models, prompts, criteria, and data sources. Track overturns by reason code and guideline (a sample record sketch follows this list).
- Score fairness. Monitor for differential impact by age, gender, race/ethnicity, disability status, and line of business. Validate with holdout tests (a sample disparity check also follows this list).
- Provide an appeals kit. Offer a structured pathway: required docs, timelines, sample letters, and contacts. It reduces noise and bad submissions.
- Stand up a "member explainability" function. Plain-language summaries of decisions, criteria, and next steps, delivered in hours, not days.
- Clean your data exhaust. Standardize coding, reconcile provider data, and reduce free-text ambiguity. Bad inputs create bad denials.
- Calibrate model risk tiers. High-risk use cases (medical necessity, NICU, oncology) get stricter controls than low-risk ones (document triage).
- Train front-line teams. Teach staff to spot AI-drafted appeals and respond with targeted evidence and clear options.
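To make the audit-trail item concrete, here is a minimal sketch of a denial record that pins the guideline, the model version, and the human owner in one structure. Field names are assumptions for illustration, not an industry-standard schema.

```python
# Minimal sketch of an auditable denial record. Field names are
# illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DenialRecord:
    claim_id: str
    reason_code: str               # standardized denial reason
    guideline_id: str              # clinical criteria (and version) cited
    evidence_summary: str          # what was reviewed, in plain language
    ai_assisted: bool              # whether a model contributed
    model_version: Optional[str]   # pinned model/prompt version if AI assisted
    human_reviewer: Optional[str]  # the person who owns the final call
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_defensible(self) -> bool:
        """A denial should never ship without a named human reviewer
        and a citable guideline."""
        return bool(self.human_reviewer and self.guideline_id)
```

With records like this, overturn tracking by reason code or guideline version becomes a simple group-by, and a regulator's "show your work" request has an answer.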
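And a minimal sketch of the fairness check: compare denial rates across groups and flag outliers. The input shape and the 1.25 ratio threshold are illustrative assumptions; a production program would add proper statistical tests and legal review.

```python
# Minimal sketch: flag groups whose denial rate stands out from the
# lowest-rate group. Threshold and input shape are illustrative.
from collections import defaultdict

def denial_rates_by_group(decisions):
    """decisions: iterable of {'group': str, 'denied': bool} dicts."""
    totals = defaultdict(int)
    denials = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        denials[d["group"]] += int(d["denied"])
    return {g: denials[g] / totals[g] for g in totals}

def flag_disparity(rates, max_ratio=1.25):
    """Flag groups whose rate exceeds the lowest group's by max_ratio."""
    baseline = min(rates.values())
    if baseline == 0:  # avoid divide-by-zero; any nonzero rate is an outlier
        return [g for g, r in rates.items() if r > 0]
    return [g for g, r in rates.items() if r / baseline > max_ratio]

if __name__ == "__main__":
    sample = [{"group": "A", "denied": True}, {"group": "A", "denied": False},
              {"group": "B", "denied": True}, {"group": "B", "denied": True}]
    print(flag_disparity(denial_rates_by_group(sample)))  # ['B']
```

Run it per model version on holdout decisions so drift shows up in monitoring, not in litigation.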
Member experience: small fixes, big impact
- Proactive alerts: Notify members and providers earlier when prior auth is likely required, with a checklist of documentation.
- Faster clarifications: Offer 24-48 hour callbacks for denial explanations and guidance on strengthening an appeal.
- Billing hygiene: Reduce coding and network-status errors that spark avoidable appeals.
Real-world signal to watch
Patients report wins when tools uncover coding issues or misapplied policy terms. In one case, a coding error spotted by an external assistant led to approval after an initial denial. Expect more of these: AI is very good at catching contractual mismatches and process gaps.
Bottom line for insurers
AI on the consumer side is here. You can meet it with defensible decisions, transparent criteria, better data, and a human who owns the call. Or you can brace for higher overturns, more regulator attention, and member churn.
AI can speed reviews and reduce waste. But the standard has shifted: clear rationale, human oversight, and evidence of reduced bias are no longer nice-to-haves; they're table stakes.
Upskilling for insurance teams
If you're building internal literacy on AI governance, prompt quality, and automation for claims and prior auth, consider targeted training for insurance roles.