Don't let AI "pull the doctor out of the visit" - especially for low-income patients
AI is moving fast into clinical workflows. In some Southern California clinics serving unhoused and low-income patients, medical assistants use AI systems to generate diagnoses and treatment plans that a physician reviews later. The stated goal: "pull the doctor out of the visit."
That direction is risky. It creates a two-tier system where people with money see clinicians, and people without are processed by software.
Why this trend took off
Hospitals are crowded. Clinicians are burned out. Health systems want throughput. Those pressures are even heavier in under-resourced settings, where patients face a higher burden of chronic disease and are more likely to be uninsured.
Against that backdrop, AI looks like relief. Surveys suggest many physicians now use AI for charting or clinical support. Startups have raised large rounds to build "ChatGPT for doctors." Lawmakers are even weighing whether AI can legally prescribe.
"Something is better than nothing," right? Not here.
The evidence shows AI tools can miss the mark, and not randomly. Studies have found imaging algorithms under-diagnose Black and Latinx patients, women, and those on Medicaid. Another recent study reported higher false positives for Black patients in AI-supported breast cancer screening.
That pattern isn't surprising. These systems work on probabilities learned from data. If the data reflect inequities, the outputs can amplify them. For patients already facing barriers, that's not a minor bug - it deepens harm.
Consent and transparency are non-negotiable
In some pilots, patients are told an AI is "listening," but not that it generates diagnostic recommendations. That's not informed consent. It erodes trust and echoes ugly chapters of medical history.
Patients deserve clear, plain-language disclosures about what the AI does, what it sees, known risks, and how to opt out - without penalty or delay in care.
Coverage decisions are already being shaped by AI
AI doesn't just touch clinical notes and visit summaries. It influences who gets care at all. Reports estimate tens of millions of low-income Americans have critical life decisions shaped by algorithms - from Medicaid eligibility to disability benefits.
Ongoing lawsuits allege that an insurer's AI tool wrongly denied medically necessary care for Medicare Advantage members. Courts have allowed parts of these cases to proceed. However they end, the takeaway is clear: when opaque models gatekeep access, people with the least power bear the brunt.
What healthcare leaders can do now
- Keep clinicians in the room. Use AI to assist, not replace, the clinician-patient encounter - especially for diagnosis and treatment planning.
- Set strict use boundaries. Start with low-risk functions (e.g., scribing, visit summaries). Don't pilot "AI-first" diagnostic workflows on vulnerable populations.
- Require explicit consent. Plain-language disclosures about AI's role, data use, and limitations, plus a real opt-out path that doesn't delay care.
- Demand subgroup validation. Before deployment, test locally and publish performance by race, ethnicity, sex, age, language, disability status, and insurance type (a minimal sketch of this kind of report follows this list). No results, no rollout.
- Stand up clinical AI governance. Cross-functional committee (clinicians, patients, equity experts, legal, security) to vet use cases, approve models, monitor incidents, and pause use when harm signals appear.
- Write stronger vendor contracts. Require transparency on training data provenance, documented model limitations, ongoing monitoring, bias mitigation plans, audit access, and indemnification for algorithmic harm.
- Protect privacy by default. No ambient "always on" listening without consent. Minimize data capture, restrict secondary use, and harden PHI safeguards.
- Build real escalation paths. Easy, immediate access to a physician; second opinions; and exception workflows when AI conflicts with clinical judgment or patient report.
- Engage the community. Create paid advisory boards with unhoused and low-income patients to co-design pilots, consent language, success metrics, and stop criteria.
- Measure what matters. Track outcomes and utilization by subgroup; monitor false positives/negatives, delays in care, and complaint patterns; watch for performance drift.
- Equip the workforce. Train clinicians on AI strengths and failure modes, bias pitfalls, and safe use. Align incentives so speed never outranks accuracy and equity.
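For teams acting on the subgroup-validation item above, here is a minimal sketch in Python of what "publish performance by subgroup" can look like in practice. It assumes a hypothetical local validation extract (validation.csv) with a labeled outcome column (y_true), a model prediction column (y_pred), and demographic columns; every file name, column name, and threshold here is illustrative, not part of any vendor's product.

```python
# Minimal sketch of subgroup validation for a binary clinical AI output.
# Assumes a hypothetical local validation file "validation.csv" with columns:
#   y_true  - labeled outcome (0/1)
#   y_pred  - model prediction (0/1)
#   plus demographic fields such as race_ethnicity, sex, insurance_type.
import pandas as pd
from sklearn.metrics import confusion_matrix

SUBGROUP_COLUMNS = ["race_ethnicity", "sex", "insurance_type"]  # extend as needed
MIN_N = 30  # flag subgroups too small to evaluate reliably

def subgroup_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Sensitivity, specificity, and error rates for each subgroup in group_col."""
    rows = []
    for group, g in df.groupby(group_col):
        if len(g) < MIN_N:
            rows.append({"subgroup": group, "n": len(g), "note": "insufficient sample"})
            continue
        # Force a 2x2 matrix even if a subgroup has only one class present.
        tn, fp, fn, tp = confusion_matrix(g["y_true"], g["y_pred"], labels=[0, 1]).ravel()
        rows.append({
            "subgroup": group,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else None,
            "specificity": tn / (tn + fp) if (tn + fp) else None,
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else None,
            "false_negative_rate": fn / (fn + tp) if (fn + tp) else None,
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    data = pd.read_csv("validation.csv")  # hypothetical local validation extract
    for col in SUBGROUP_COLUMNS:
        print(f"\n=== Performance by {col} ===")
        print(subgroup_report(data, col).to_string(index=False))
```

Run on a schedule against fresh local data, the same per-subgroup report also serves the "measure what matters" item above: a metric that degrades for any subgroup is a drift signal worth pausing on.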
The line we shouldn't cross
Don't test "doctorless visits" on people who already face the steepest barriers to care. AI can help with documentation, recall, and triage - but the human clinician remains the standard of care for diagnosis and treatment, full stop.
If we center patient voice, insist on consent, and hold one standard for everyone, AI can be helpful. If we let cost and throughput drive the agenda, we'll widen a divide we're supposed to close.