Cleveland Clinic Taps Generative AI to Break Mid-Revenue Cycle Bottlenecks
Cleveland Clinic is partnering with AKASA to apply generative AI across mid-revenue cycle work. The goal is simple: reduce cost and delay in coding, authorizations, charge capture, and denial handling - without disrupting existing workflows.
For leaders, this is a signal. AI is moving from pilot to practical utility in healthcare operations. If your teams are still buried under manual status checks, appeal letters, and payer portal clicks, you are leaving margin on the table.
Why this move matters for management
- Cash flow: Faster mid-cycle throughput can shorten AR timelines and smooth revenue predictability.
- Cost-to-collect: Automating repetitive tasks trims overtime, vendor FTEs, and rework.
- Quality: AI-assisted coding and charge checks reduce errors that trigger denials.
- Capacity: Free staff from low-value clicks so they focus on exceptions and higher-yield accounts.
- Experience: Fewer delays and cleaner bills cut patient frustration and back-and-forth.
Where generative AI fits in the mid-revenue cycle
- Clinical documentation and coding: Draft physician queries, surface missing documentation, and suggest code edits for human review.
- Charge capture: Flag charge variances and potential under-coding based on patterns in notes and orders.
- Prior authorization: Summarize medical necessity, monitor status, and draft payer-specific submissions or appeal letters.
- Claim status follow-up: Read payer responses, extract next steps, and route to the right queue.
- Denial management: Classify root causes, propose fixes, and assemble appeal packets with citations.
The practical edge comes from pairing generative models with deterministic rules and EDI integrations. Let AI read messy text; let rules enforce policy; let humans handle nuance.
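That pairing can be sketched in a few lines. The snippet below is illustrative only: `extract_next_step` is a hypothetical stand-in for a vendor LLM call, and the threshold and action names are assumptions, not a real integration.

```python
# Sketch: generative model reads messy payer text, deterministic rules
# enforce policy, and anything uncertain routes to a human queue.

AUTO_APPROVE_CONFIDENCE = 0.90  # assumed threshold; tune per payer/workflow

def extract_next_step(payer_text: str) -> tuple[str, float]:
    """Hypothetical LLM wrapper: returns (proposed_action, confidence)."""
    # In production this would call the vendor's model; stubbed here.
    if "additional documentation" in payer_text.lower():
        return ("request_records", 0.95)
    return ("manual_review", 0.40)

def policy_allows(action: str) -> bool:
    """Deterministic rule layer: only whitelisted actions may auto-run."""
    return action in {"request_records", "resubmit_claim"}

def route(payer_text: str) -> str:
    action, confidence = extract_next_step(payer_text)
    if policy_allows(action) and confidence >= AUTO_APPROVE_CONFIDENCE:
        return f"auto:{action}"      # model and rules agree: automate
    return "queue:human_review"      # nuance goes to staff

print(route("Denied pending additional documentation from provider."))
```

The design choice worth copying is the separation of concerns: the model only proposes, the rule layer decides what is allowed to run unattended, and everything else lands in a human queue.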
Guardrails you'll want in place
- PHI security: Confirm encryption, data segregation, and HIPAA compliance. Reference: HHS HIPAA.
- Auditability: Keep versioned prompts, responses, and decision logs for each claim touch.
- Quality controls: Human-in-the-loop for sensitive actions, with clear thresholds for auto-pass vs. escalate.
- Model oversight: Monitor error rates by payer, specialty, and note type; retrain on drift; lock down prompt changes.
- Policy currency: Map payer policy updates to automation rules to avoid outdated responses.
- Exception pathways: Make it easy for staff to override AI, explain why, and feed that back into improvement.
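The auditability and override guardrails above boil down to one habit: log every AI touch on a claim with its prompt version, output, and final decision. A minimal sketch, with field names that are assumptions rather than any real EHR or vendor schema:

```python
# Sketch: append-only audit trail for each AI touch on a claim.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ClaimTouch:
    claim_id: str
    prompt_version: str        # e.g. "denial-classify-v2" (versioned prompts)
    model_output: str
    decision: str              # "auto", "escalated", or "overridden"
    override_reason: str = ""  # required when staff override the AI
    timestamp: str = ""

def log_touch(log: list, touch: ClaimTouch) -> None:
    touch.timestamp = datetime.now(timezone.utc).isoformat()
    log.append(asdict(touch))  # append-only; never mutate past entries

audit_log: list = []
log_touch(audit_log, ClaimTouch(
    claim_id="CLM-001",
    prompt_version="denial-classify-v2",
    model_output="Root cause: missing modifier 25",
    decision="escalated",
))
print(json.dumps(audit_log[0], indent=2))
```

Capturing `override_reason` in the same record is what closes the exception pathway: overrides become training signal instead of disappearing into email.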
A 90-day rollout playbook
- Weeks 1-2: Pick one high-volume, rule-heavy workflow (e.g., prior auth for a single service line). Baseline current metrics and costs.
- Weeks 3-4: Vendor due diligence - security review, sample prompts, sandbox tests with de-identified data, and clear SLAs.
- Weeks 5-6: Configure integrations, routing, and approval thresholds. Define "no-go" scenarios.
- Weeks 7-8: Pilot with 10-20% of volume. A/B against current process. Daily triage on misses.
- Weeks 9-10: Train staff on new queues, exceptions, and escalation rules. Capture frontline feedback.
- Weeks 11-12: Expand to 50-100% if targets are met. Lock KPIs into weekly ops reviews.
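The Weeks 7-8 A/B check and the Weeks 11-12 expand/hold decision can be made mechanical. A toy sketch with invented numbers and an assumed "beat control by 10%" target:

```python
# Sketch: compare initial denial rates for the AI-assisted pilot arm
# against the current process before expanding volume.

def denial_rate(denied: int, submitted: int) -> float:
    return denied / submitted if submitted else 0.0

control = denial_rate(denied=120, submitted=1000)  # current process
pilot = denial_rate(denied=85, submitted=1000)     # AI-assisted arm

print(f"control: {control:.1%}, pilot: {pilot:.1%}")

# Assumed expansion target: pilot denial rate at least 10% below control.
proceed = pilot <= control * 0.9
print("expand rollout" if proceed else "hold and triage misses")
```

Writing the target down as a comparison, not a vibe, is what keeps the Week 11 decision honest.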
KPIs to track weekly
- Days in DNFB (discharged, not final billed) and % of accounts stuck >3 days
- Clean claim rate and initial denial rate
- Time to authorization approval and auth-related denial share
- AR days by payer and service line
- Cost-to-collect and rework rate
- Average touches per claim and staff time per unit
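Two of these KPIs are simple ratios over claim records. A minimal sketch, using invented field names and sample data rather than any real billing schema:

```python
# Sketch: clean claim rate and average touches per claim from claim records.

claims = [
    {"id": "C1", "clean": True,  "touches": 1},
    {"id": "C2", "clean": False, "touches": 4},  # denied, then reworked
    {"id": "C3", "clean": True,  "touches": 1},
    {"id": "C4", "clean": True,  "touches": 2},
]

clean_claim_rate = sum(c["clean"] for c in claims) / len(claims)
avg_touches = sum(c["touches"] for c in claims) / len(claims)

print(f"clean claim rate: {clean_claim_rate:.0%}")  # 75%
print(f"avg touches per claim: {avg_touches:.1f}")  # 2.0
```

The same pattern extends to the other KPIs once each claim record carries payer, service line, and timestamp fields to slice on.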
Operating tips from early adopters
- Start narrow, win fast: Pick one payer and one specialty. Volume plus consistency beats chasing edge cases.
- Codify "source of truth": Keep payer rules and medical necessity criteria in one place the AI reads from.
- Design for exceptions: The value is in the 70-90% of routine work. Route the rest cleanly to experts.
- Close the loop: Every correction should update prompts, rules, or training data - weekly, not quarterly.
- Make it visible: Put KPIs on a shared dashboard and review misses with the vendor in a cadence meeting.
What this signals
Large systems are shifting from manual mid-cycle work to AI-assisted workflows. The leaders will be the ones who pair clear governance with relentless iteration - and who measure value weekly, not yearly.
If your mid-cycle is a choke point, this is the window to test, learn, and scale.