Finding the Signal Through the Noise in Health Care AI: Practical Takeaways From DCI Network's Conference
Health systems don't need more hype about AI; they need results they can defend to clinicians, patients, and regulators. A recent DCI Network conference in Boston focused on that exact shift: from promise to practice.
Yuri Quintana, Ph.D. (conference co-chair; Chief of the Division of Clinical Informatics at Beth Israel Deaconess Medical Center; Assistant Professor of Medicine at Harvard Medical School; and Senior Scientist at the Homewood Research Institute) shared insights that set a clear direction for ethical, effective AI in care delivery.
1) Patient-centric co-design isn't optional
Patients shouldn't just be "in the loop." They should be co-creators from first prototype to postmarket monitoring. This is how you avoid silent failure modes and build tools people trust.
- Recruit patient advisors early; compensate them; give them a vote on release criteria.
- Co-define success metrics that reflect patient priorities (burden reduction, access, clarity of explanations).
- Require plain-language model summaries and on-screen explanations for any patient-facing output.
- Stand up ongoing feedback loops: flagged outputs, easy in-product reporting, and scheduled listening sessions.
2) Start with the "mundane": that's where impact compounds
The biggest wins today are behind the scenes. Administrative pain points are measurable, lower risk, and free up clinician time for care.
- High-yield candidates: prior authorization prep, eligibility checks, denial triage, referral routing, scheduling optimization, ambient documentation, coding suggestions.
- Track simple KPIs: minutes saved per encounter, turnaround time, error rate, denial reversal rate, staff satisfaction, queue length.
- Close the loop: daily exception queues, human verification for edge cases, and clear rollback paths.
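The KPIs above are simple enough to track without specialized tooling. As a minimal sketch, a weekly pilot report could compare each metric against its pre-pilot baseline; the field names and example numbers here are illustrative assumptions, not standard definitions.

```python
from dataclasses import dataclass

@dataclass
class WorkflowKPIs:
    """One KPI snapshot for an administrative AI pilot (illustrative fields)."""
    minutes_saved_per_encounter: float
    turnaround_time_hours: float
    error_rate: float            # fraction of outputs needing human correction
    queue_length: int            # items awaiting processing at week's end

def weekly_report(baseline: WorkflowKPIs, current: WorkflowKPIs) -> dict:
    """Compare a pilot week against the pre-pilot baseline (positive = improvement
    for minutes saved and turnaround; negative = improvement for errors and queue)."""
    return {
        "minutes_saved_delta": current.minutes_saved_per_encounter
                               - baseline.minutes_saved_per_encounter,
        "turnaround_improvement_hours": baseline.turnaround_time_hours
                                        - current.turnaround_time_hours,
        "error_rate_delta": current.error_rate - baseline.error_rate,
        "queue_delta": current.queue_length - baseline.queue_length,
    }

# Example: a prior-auth prep pilot after four weeks (numbers are invented)
baseline = WorkflowKPIs(0.0, 48.0, 0.05, 120)
current = WorkflowKPIs(6.5, 24.0, 0.04, 80)
print(weekly_report(baseline, current))
```

Reporting deltas rather than raw numbers keeps the weekly conversation focused on whether the pilot is actually moving the baseline.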
3) Transparency and ethics across the entire life cycle
Trust is earned through visibility. Stakeholders want to see how models are trained, what data they use, how they're monitored, and how issues are handled.
- Publish model cards: intended use, exclusions, training data sources, performance by subgroup, known limitations.
- Document data lineage and consent. Encrypt at rest and in transit; minimize PHI exposure; log access.
- Implement bias and safety checks pre-deployment and continuously in production.
- Maintain audit trails, alert thresholds, human-in-the-loop escalation, and deactivation criteria.
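A model card only builds trust if it is complete, so it helps to treat it as structured data with required fields rather than free-form text. Below is a minimal sketch; the field names mirror the bullet above and the example values are hypothetical, not drawn from any real deployment.

```python
# Required model-card fields, following the bullet above (names are assumptions)
REQUIRED_FIELDS = {
    "intended_use", "exclusions", "training_data_sources",
    "subgroup_performance", "known_limitations",
}

def validate_model_card(card: dict) -> list:
    """Return the required fields that are missing or empty, sorted for stable output."""
    return sorted(f for f in REQUIRED_FIELDS if not card.get(f))

# Hypothetical example card for an administrative drafting tool
example_card = {
    "intended_use": "Drafting prior-authorization summaries for staff review",
    "exclusions": "Not for autonomous clinical decisions",
    "training_data_sources": ["De-identified claims data (hypothetical)"],
    "subgroup_performance": {"age_65_plus": 0.91, "age_under_65": 0.93},
    "known_limitations": "Performance unverified on rare procedure codes",
}
print(validate_model_card(example_card))  # an empty list means no gaps
```

Gating release on an empty gap list turns "publish model cards" from a policy statement into an enforceable check.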
4) Technology moves fast; wisdom must keep pace
The group's message was clear: safety, equity, and efficacy aren't slogans; they're non-negotiable guardrails. Governance has to be active, not performative.
- Create a cross-functional AI governance board (clinical, legal, security, operations, patient advocates).
- Require clinical validation with representative populations before scale-up.
- Monitor drift, rare harms, and subgroup performance; review incidents and publish fixes.
- Communicate clearly: what the tool can/can't do, how decisions are made, and how patients can opt out.
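Subgroup monitoring can start very simply: flag any group that falls below an absolute floor or drifts too far from the best-performing group. The sketch below assumes a single score per subgroup (e.g., sensitivity); the threshold values are illustrative assumptions, not clinical or regulatory standards.

```python
def subgroup_alerts(metrics: dict, floor: float = 0.85, max_gap: float = 0.05) -> list:
    """Flag subgroups below an absolute performance floor, or with too large a
    gap versus the best-performing group.

    `metrics` maps subgroup name -> performance score in [0, 1].
    Threshold defaults are illustrative only.
    """
    alerts = []
    best = max(metrics.values())
    for group, score in metrics.items():
        if score < floor:
            alerts.append(f"{group}: below floor ({score:.2f} < {floor:.2f})")
        elif best - score > max_gap:
            alerts.append(f"{group}: gap vs best group ({best - score:.2f} > {max_gap:.2f})")
    return alerts

# Hypothetical quarterly review: one subgroup trails the floor
print(subgroup_alerts({"group_a": 0.95, "group_b": 0.80}))
```

Any alert should route to the human-in-the-loop escalation path described above, with the incident and fix documented.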
What you can implement in the next 90 days
- Pick two "boring but valuable" workflows. Define baseline metrics, implement a pilot, and report weekly.
- Stand up a lightweight model governance checklist: intended use, risk class, supervision level, rollback plan, ownership.
- Form a patient advisory mini-panel and run usability tests on any AI touching patient communications or clinical notes.
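The "lightweight governance checklist" above can likewise be enforced in code rather than left as a document. This is a minimal sketch under assumed field names; the risk tiers are invented for illustration, not a regulatory taxonomy.

```python
# Checklist fields mirror the bullet above; names are assumptions
GOVERNANCE_FIELDS = ["intended_use", "risk_class", "supervision_level",
                     "rollback_plan", "owner"]
VALID_RISK_CLASSES = {"low", "medium", "high"}  # illustrative tiers only

def checklist_gaps(entry: dict) -> list:
    """Return problems that should block deployment under this hypothetical checklist."""
    gaps = [f"missing: {f}" for f in GOVERNANCE_FIELDS if not entry.get(f)]
    risk = entry.get("risk_class")
    if risk and risk not in VALID_RISK_CLASSES:
        gaps.append(f"unknown risk_class: {risk}")
    return gaps

# Hypothetical entry for an ambient-documentation pilot
entry = {
    "intended_use": "Ambient note drafting, clinician reviews before signing",
    "risk_class": "medium",
    "supervision_level": "human review of every output",
    "rollback_plan": "Disable integration; revert to manual documentation",
    "owner": "Director of Clinical Informatics (role, not a named individual)",
}
print(checklist_gaps(entry))  # an empty list means the entry is complete
```

An empty gap list is a reasonable minimum bar before a pilot touches production workflows.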
Why this matters now
AI will be judged on outcomes and accountability. Teams that start with co-design, target administrative friction, and build transparent oversight will see faster adoption and fewer surprises.
This isn't about chasing the flashiest model. It's about building dependable systems that make care safer, more equitable, and easier to deliver.
Resources and further reading
- Event Recap: Finding the Signal Through the Noise in Health Care AI at DCI Network's AI Conference (JMIR, 2025)
- WHO: Ethics and governance of artificial intelligence for health
Upskilling your team
If you're building internal capability around safe, effective AI use, explore role-based learning paths for clinicians, informaticians, and operations leaders.