AI in healthcare: useful, as long as we keep the human in charge
A patient recently shared that their therapist asked for consent to use an AI notetaker during sessions. Now, each visit starts with a quick confirmation, and the clinician stays off the keyboard and fully present. That's the promise many teams hope for: better focus, cleaner notes, less admin overhead.
But the question that matters is simpler than the hype: does it help patients without increasing risk? For that, we need clarity, boundaries, and a feedback loop that catches errors fast.
AI scribes and documentation: value with verification
Fewer clicks and more eye contact are real wins. Still, a flawless transcript doesn't equal a correct chart. A wrong medication or dose in the note can cascade into bad orders, denials, and harm.
Set expectations early and make accuracy a team sport. Here's a practical baseline:
- Consent and control: Get explicit permission, remind at each visit, and offer an easy opt-out.
- Key fact checks: Before closing, verbally confirm meds, allergies, and diagnoses. Make it part of the script.
- Immediate reconciliation: Route AI notes to a human reviewer. Finalize only after verification.
- Secure handling: Ensure BAAs, encryption, and strict access. No shadow IT.
- Track error rates: Sample notes weekly and report accuracy to the team. Improve or pause.
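To make the "track error rates" step concrete, here's a minimal sketch of a weekly sampling check. Everything in it is illustrative: the note IDs, the reviewer flags, and the 5% pause threshold are assumptions for the example, not clinical standards your organization should adopt as-is.

```python
import random

def weekly_accuracy_sample(notes, reviewed, sample_size=20, pause_threshold=0.05):
    """Sample finalized AI notes and compute the share with reviewer-flagged errors.

    `notes` is a list of note IDs; `reviewed` maps a note ID to True when a
    human reviewer found an error in it. Names and threshold are illustrative.
    """
    sample = random.sample(notes, min(sample_size, len(notes)))
    errors = sum(1 for note_id in sample if reviewed.get(note_id, False))
    error_rate = errors / len(sample)
    return {
        "sampled": len(sample),
        "errors": errors,
        "error_rate": error_rate,
        # "pause" signals the team to stop and investigate before expanding use
        "action": "pause" if error_rate > pause_threshold else "continue",
    }
```

The point of the sketch is the habit, not the code: a named owner runs it weekly, the result goes to the whole team, and a rate above the agreed threshold pauses the rollout.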
If your organization is testing transcription or note automation, a quick overview of Speech-To-Text tools can help you choose fit-for-purpose options and set expectations for accuracy and latency.
Medication safety starts with a source of truth
AI can cross-reference med lists across EHR, pharmacy fills, and prior notes faster than a busy clinic. That speed helps only if someone owns reconciliation and closes the loop. Assign a role, define a checklist, and time-box the task so it actually gets done.
- One list, one owner: Name the person responsible for final med rec each encounter.
- Cross-check: Compare AI-captured meds with current EHR list, recent refills, and patient-reported changes.
- Flag mismatches: Use structured flags for dose/form differences and require resolution.
- Audit: Monitor discrepancies found vs. resolved; escalate if trends worsen.
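The cross-check and flagging steps above can be sketched in a few lines. The record shape (`name`/`dose` dicts) and the flag labels are assumptions for illustration, not any EHR's schema; a real integration would map from your system's medication data.

```python
def reconcile_meds(ai_meds, ehr_meds):
    """Flag mismatches between AI-captured meds and the current EHR list.

    Each med is a dict like {"name": ..., "dose": ...} (illustrative shape).
    Returns structured flags; every flag requires human resolution.
    """
    ai_by_name = {m["name"].lower(): m for m in ai_meds}
    ehr_by_name = {m["name"].lower(): m for m in ehr_meds}
    flags = []
    for name in ai_by_name.keys() | ehr_by_name.keys():
        ai_med, ehr_med = ai_by_name.get(name), ehr_by_name.get(name)
        if ai_med is None:
            flags.append({"name": name, "issue": "missing_from_ai_note"})
        elif ehr_med is None:
            flags.append({"name": name, "issue": "not_on_ehr_list"})
        elif ai_med["dose"] != ehr_med["dose"]:
            flags.append({"name": name, "issue": "dose_mismatch",
                          "ai": ai_med["dose"], "ehr": ehr_med["dose"]})
    return flags
```

Note what the code does not do: it never auto-resolves a mismatch. The output is a worklist for the named owner of med rec, which keeps the human accountable for the final list.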
If you're upskilling the team that manages charts and EHR data, this resource may help: AI Learning Path for Medical Records Clerks.
Can AI reduce time to diagnosis in rare disease and PH?
For pulmonary hypertension and other rare conditions, delayed diagnosis is common. Pattern-finding across echo reports, CT scans, vitals, and prior referrals could shorten that delay. The key is clinical oversight: treat AI suggestions like a consult, not a conclusion.
Pair any diagnostic aid with a clear escalation path, second reads for high-risk calls, and routine review of false positives/negatives. For broader context on diagnostic safety, see AHRQ's diagnostic safety resources.
CTEPH surgery: setting honest expectations about recovery
Newer tools are being explored to estimate recovery time after surgery for chronic thromboembolic pulmonary hypertension (CTEPH). Used well, these predictions help set expectations, plan rehab, and align staffing and follow-up. Patients benefit when the team shares the range, explains uncertainty, and uses the forecast to plan, not to limit care.
Build the workflow around the patient: prehab steps, caregiver prep, discharge criteria, and scheduled touchpoints mapped to the predicted course.
Draw the line with governance, not gut feel
- Transparency: Know what data the tool uses, where it stores outputs, and how updates are managed.
- Risk assessment: Tier tools by impact. Documentation aids carry lower risk than tools that touch diagnosis, meds, or triage.
- Human-in-the-loop: Require clinician sign-off on any recommendation that changes care.
- Bias monitoring: Compare performance across demographics and clinical subgroups; retrain or retire if needed.
- Audit trail: Log inputs, outputs, overrides, and outcomes to support QA and learning.
- Privacy and safety: Align with regulatory guidance; review vendor claims against your policies and the FDA's perspective on AI/ML-enabled tools.
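For the audit-trail item, the minimum useful record is small. Here's a sketch of one append-only entry capturing inputs, output, sign-off, and any override; the field names are assumptions for the example, and a production log would also need secure storage and access controls.

```python
import json
import datetime

def audit_record(tool, inputs, output, clinician, override=None):
    """Build one audit-trail entry for an AI recommendation.

    Captures what went in, what came out, who signed off, and whether the
    clinician overrode the suggestion. Keys are illustrative.
    """
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "inputs": inputs,
        "ai_output": output,
        "signed_off_by": clinician,
        "override": override,  # the clinician's change, if any; None if accepted
    })
```

Logging overrides alongside outputs is what turns the trail into a learning tool: frequent overrides on one workflow are an early signal to retrain, reconfigure, or retire the tool.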
Your minimum viable rollout plan
- Start narrow: Pick one workflow (e.g., scribing for follow-ups) and one service line.
- Define success: Time saved per visit, documentation accuracy, denial rates, patient satisfaction.
- Red-team the tool: Feed edge cases and known tricky notes; document failure modes.
- Train the team: Scripts for consent, fact checks, and closing loops on med rec and orders.
- Communicate with patients: Plain-language handout on what the tool does and doesn't do.
- Review and iterate: 30/60/90-day checkpoints; expand only if signal beats baseline.
The balance that keeps care human
Use AI where it gives clinicians time back, makes data cleaner, or helps spot patterns you'd otherwise miss. Keep a human on the hook for decisions that matter. That balance of value plus verification is what protects patients and makes the work better.
How are you using AI in your clinic or hospital? What's worked, what hasn't, and what would you try next?
Note: This article is for information only and is not a substitute for professional medical advice, diagnosis, or treatment. Always consult a qualified health professional for questions about a medical condition.