Auxiliary, Not Autopilot: NECA's Principles for Generative AI in Healthcare

NECA issues practical principles for using generative AI in care, keeping humans in charge. The guidance stresses safety, accountability, clear labeling, and real-world oversight.

Published on: Jan 08, 2026

NECA Issues Practical Principles for Using Generative AI in Healthcare

The National Evidence-based Healthcare Collaborating Agency (NECA) released "Principles for Appropriate Use of Generative AI in Healthcare," making one point crystal clear: AI belongs in care as an auxiliary tool, not a decision-maker.

The goal is simple: focus on using AI well rather than just building it well. With large language and multimodal models now common in clinics, NECA's guidance centers on safety, accountability, and real-world usability.

Why this matters for healthcare teams

Adoption is growing across clinical workflows, but so are the risks: patient safety, privacy, overconfidence in machine outputs, and unclear liability. NECA notes that policy and regulation alone can't keep pace with the speed and variety of real-world use.

The agency framed these principles as a social compact shared by everyone in the ecosystem, not a technical checklist. That keeps the focus on behavior, communication, and patient protection.

Core idea: AI is assistive, humans lead

Medical AI should support clinical judgment and documentation, not replace them. Human oversight (human-in-the-loop) is required at every critical point: input, interpretation, and action.
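
Read in engineering terms, "human-in-the-loop at every critical point" can be implemented as a hard gate: AI output stays a draft until a named clinician signs off. The Python sketch below is a minimal illustration under that assumption; DraftNote, sign_off, and commit_to_record are hypothetical names, not part of NECA's guidance.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DraftNote:
    text: str                       # AI-generated draft text
    model_id: str                   # which model produced it
    reviewed_by: str | None = None  # clinician who signed off
    reviewed_at: datetime | None = None

    @property
    def approved(self) -> bool:
        return self.reviewed_by is not None

def sign_off(note: DraftNote, clinician: str,
             edited_text: str | None = None) -> DraftNote:
    """Record the human review step; the clinician may revise the draft first."""
    if edited_text is not None:
        note.text = edited_text
    note.reviewed_by = clinician
    note.reviewed_at = datetime.now(timezone.utc)
    return note

def commit_to_record(note: DraftNote) -> None:
    """Refuse to file anything a human has not approved."""
    if not note.approved:
        raise PermissionError("AI draft requires clinician sign-off before filing")
    # write to the clinical record here (out of scope for this sketch)
```

The design point is that the gate is enforced at filing time, so no workflow path can route an unreviewed draft into the record.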

What NECA's principles ask of each stakeholder

For developers and service providers

  • Prioritize patient safety and transparency across the product lifecycle.
  • Improve fairness and explainability; make outputs interpretable for clinicians and patients.
  • Embed human oversight by design; keep clinicians in control of decisions.
  • Correct errors quickly, disclose incidents, and document fixes.
  • Label AI-generated outputs clearly within clinical systems (see the sketch after this list).
  • Increase accessibility: plain-language modes and automatic slot-filling to support information-vulnerable groups.
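
For the labeling item above, one concrete approach is to attach provenance metadata to every generated artifact and render it wherever the content is displayed. This is a minimal sketch, assuming a system that stores notes with structured metadata; the AiLabel fields and banner format are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AiLabel:
    model_id: str       # identifier of the generating model (hypothetical)
    generated: bool     # True if the text is fully AI-generated
    human_edited: bool  # True once a clinician has revised it

def render_banner(label: AiLabel) -> str:
    """Build the visible tag displayed next to the content in the UI."""
    status = "AI-generated" if label.generated else "AI-assisted"
    if label.human_edited:
        status += ", clinician-reviewed"
    return f"[{status} | model: {label.model_id}]"

print(render_banner(AiLabel("summary-model-v2", generated=True, human_edited=True)))
# [AI-generated, clinician-reviewed | model: summary-model-v2]
```

Storing the label as data rather than as text pasted into the note lets the same provenance drive the UI banner, audit queries, and incident reports.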

For healthcare professionals

  • Use AI as a supplementary reference, and own the final decision.
  • Rely on evidence-based validation before integrating tools into care.
  • Explain AI's role to patients, obtain informed consent where appropriate, and document it.
  • Build error-prevention routines and learn from near-misses and incidents.
  • Keep improving digital competencies and AI literacy through ongoing training.

For citizens (patients and caregivers)

  • Treat AI as an aid for self-protection and decisions, not a replacement for clinical care.
  • Use tools safely, protect personal information, and verify sensitive advice.
  • Maintain a critical mindset: ask where outputs come from and how they were generated.

Everyday safety guidance

  • Do not rely on AI for emergencies or high-risk situations; seek immediate medical care.
  • Stop using any tool that gives unusual, biased, or uncomfortable responses and report it.

How to put this into practice in your organization

  • Set up an AI governance group with clinical, data, legal, and patient representation.
  • Require model documentation: intended use, data sources, known limitations, monitoring plans, and escalation paths (a sketch follows this list).
  • Add clear labels in the EHR for AI-assisted notes, orders, and summaries.
  • Standardize consent language for AI-supported services and publish it internally.
  • Track AI-related incidents and near-misses; review trends monthly and feed changes back to vendors.
  • Invest in staff training on prompt use, bias recognition, and verification routines.
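
The model documentation item above maps naturally onto a structured record that the governance group can enforce before deployment. A minimal sketch in Python, assuming a simple completeness check; the ModelDoc fields mirror that list item, and every value in the example is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ModelDoc:
    name: str
    intended_use: str             # the clinical task the tool is approved for
    data_sources: list[str]       # training/evaluation data provenance
    known_limitations: list[str]  # failure modes clinicians should expect
    monitoring_plan: str          # what is tracked in production, how often
    escalation_path: str          # who to contact when the model misbehaves

    def is_complete(self) -> bool:
        """Governance check: block deployment while any field is empty."""
        return all([self.name, self.intended_use, self.data_sources,
                    self.known_limitations, self.monitoring_plan,
                    self.escalation_path])

doc = ModelDoc(
    name="discharge-summary-assistant",  # hypothetical tool
    intended_use="Draft discharge summaries for clinician review",
    data_sources=["De-identified notes, 2019-2024 (assumed)"],
    known_limitations=["May omit medication changes", "English-only"],
    monitoring_plan="Monthly audit of 50 sampled drafts",
    escalation_path="ai-governance@hospital.example",
)
assert doc.is_complete()
```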

Context: how NECA built the principles

NECA convened its 2025 Roundtable Conference on Medical AI, bringing together clinicians, researchers, industry, legal experts, and the public across two sessions. The output is a shared framework meant to guide practice now and inform future policy.

"Medical AI presents a significant opportunity to enhance public health, but it also carries the risk of undermining trust in healthcare if misused," said NECA Director Lee Jae-tae. "These principles are meaningful as a public benchmark that can be practically referenced in medical settings, going beyond mere regulation."
