Generative AI Is Becoming Essential for Coding Complex Conditions
A typical patient record can include 50,000 words spread across 59 clinical documents. Coding one inpatient case starts to feel like reading a novel and pulling out every relevant detail by hand. Teams have tried to scale with headcount, but accuracy and throughput still fall short.
The impact shows up in the numbers. Claim denials have climbed from 9.8% in 2019 to 12.7% today. Cost to collect has moved from 2.7% to 3.7%. For large systems, that gap represents millions of dollars and serious strain on staff.
What leading teams are seeing
In a discussion at the AHIMA Annual Conference, leaders from AKASA, Cleveland Clinic, and Duke University Health System shared how they're applying generative AI to coding. The focus: complex inpatient cases where subtle documentation drives severity, CC/MCC capture, and appropriate reimbursement.
- LLMs "unlock the clinical narrative," surfacing insights from unstructured notes, consults, and imaging reports that traditional tools miss.
- Models adapt quickly to local documentation patterns and improve with feedback, tightening accuracy over time.
- Hospitals are using AI to assist coders, not replace them, especially on high-complexity cases where the stakes are higher.
Why the clinical narrative matters
Cleaner claims start with a full story. Teams are finding "nuanced details" that support medical necessity, clarify acuity, and reflect the care actually delivered. The goal is simple: fewer preventable denials, better first-pass yield, and documentation that stands up to audit.
- Surface overlooked comorbidities and complications tied to SOI/ROM
- Strengthen DRG assignment with evidence from the chart
- Reduce coder rework and back-and-forth with providers
- Create appeal-ready notes faster for denial management
How to put this into practice
- Start with complex inpatient cohorts. Target DRGs with frequent queries, high denial rates, or heavy rework.
- Use human-in-the-loop. Let AI draft code suggestions and rationale; coders validate, adjust, and provide feedback.
- Measure what matters. Track first-pass accuracy, DRG shifts, CC/MCC capture, denial rates, days in DNFB, and coder throughput.
- Tighten the feedback loop. Feed corrections back to the model so it learns your documentation and coding standards.
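To make "measure what matters" concrete, here is a minimal sketch of how the tracking metrics above could be computed over a batch of coded cases. The `CodedCase` fields and function names are hypothetical illustrations, not part of any specific platform; a real implementation would pull these values from your coding and billing systems.

```python
from dataclasses import dataclass

@dataclass
class CodedCase:
    """One coded inpatient case (hypothetical fields for illustration)."""
    passed_first_review: bool  # coded correctly on the first pass
    denied: bool               # claim denied by the payer
    days_in_dnfb: int          # days discharged-not-final-billed

def coding_kpis(cases):
    """Compute first-pass accuracy, denial rate, and average DNFB days."""
    n = len(cases)
    return {
        "first_pass_accuracy": sum(c.passed_first_review for c in cases) / n,
        "denial_rate": sum(c.denied for c in cases) / n,
        "avg_days_in_dnfb": sum(c.days_in_dnfb for c in cases) / n,
    }

# Example batch: three cases, one denial, one first-pass miss
cases = [
    CodedCase(passed_first_review=True, denied=False, days_in_dnfb=2),
    CodedCase(passed_first_review=True, denied=True, days_in_dnfb=5),
    CodedCase(passed_first_review=False, denied=False, days_in_dnfb=8),
]
print(coding_kpis(cases))
```

Tracking these numbers per DRG cohort, before and after introducing AI assistance, is what turns "the model helps" into a defensible measurement.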
Guardrails you'll need
- Privacy and security. Ensure PHI protection, BAAs, and audit trails with your vendor or platform.
- Clinical and coding governance. Define acceptance thresholds, escalation paths, and quality review cadence.
- Change management. Train coders and CDI teams on reviewing AI outputs and giving consistent feedback.
Beyond coding: denial management
Both Cleveland Clinic and Duke University Health System are now applying AI to denial management. Practical use cases include triage, root-cause pattern spotting, and first drafts of appeal letters that cite precise clinical evidence from the chart. This is where staffing relief and measurable revenue impact often show up first.
The bottom line
The volume and complexity of inpatient documentation have outgrown manual-only workflows. Generative AI is proving useful where it helps coders tell the full clinical story, validate choices with chart evidence, and keep claims clean on the first pass. That combination improves accuracy, reduces denials, and frees teams to focus on the hardest cases.
Explore further
- Cleveland Clinic
- Duke University Health System
- AKASA