AI Literacy Training at Children's Hospital Colorado for Validated, Ethical Use

AI literacy is patient safety: train staff to validate outputs, protect privacy, and require human sign-off. Educators turn policy into training, checklists, pilots, and badges.

Published on: Sep 26, 2025

AI Literacy as Patient Safety: What Educators Should Build Next

Kerri Webster, RN, vice president and chief analytics officer at Children's Hospital Colorado, says the hospital prioritizes training staff in AI literacy so outputs are validated and the technology is used ethically. That stance is practical and urgent. If AI touches clinical work, education becomes risk management.

Why this matters for educators

AI skills are no longer optional for clinicians, faculty, and support staff. Education teams sit at the center: translating policy into practice, giving people a common language, and installing guardrails through training, assessment, and certification.

Core pillars of an AI literacy program

  • Foundations: How modern models work, limits, bias sources, privacy basics (HIPAA), and security hygiene.
  • Validation workflow: Baselines, test sets, output comparison, error categories, and escalation paths.
  • Ethics and governance: Consent, fairness, explainability, documentation, and audit trails.
  • Human-in-the-loop: Clear decision rights and clinical sign-off before anything reaches a patient chart.
  • Data quality: Lineage, freshness, representativeness, and drift checks.
  • Prompt and query craft: Reproducible prompts, versioning, and red-teaming for failure discovery.
  • Role-based depth: Different tracks for clinicians, educators, analysts, and administrators.
  • Assessment: Scenario-based evaluation and applied checklists tied to real workflows.
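The "reproducible prompts, versioning" pillar above can be illustrated with a minimal content-addressed prompt registry. This is a sketch, not an institutional tool: the function names and the 12-character hash ID are assumptions for illustration.

```python
import hashlib

# Illustrative registry: every prompt used in a workflow is stored
# verbatim and referenced by a content hash, so a logged output can
# always be traced back to the exact wording that produced it.
_registry: dict[str, str] = {}

def register_prompt(text: str) -> str:
    """Store a prompt verbatim and return a short content-hash ID."""
    prompt_id = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    _registry[prompt_id] = text
    return prompt_id

def get_prompt(prompt_id: str) -> str:
    """Retrieve the exact prompt text for a logged ID."""
    return _registry[prompt_id]

pid = register_prompt("Summarize the attached clinic note in plain language.")
print(get_prompt(pid) == "Summarize the attached clinic note in plain language.")  # True
```

Because the ID is derived from the text itself, any edit to a prompt produces a new ID, which is the property that makes red-team findings and validation results attributable to one specific prompt version.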

Practical steps for education teams

  • Set outcomes: what each role should do safely with AI (and what they must never do).
  • Map content to policy: align lessons with institutional AI use rules and clinical governance.
  • Build cases: convert top use cases into simulations with realistic data and edge cases.
  • Codify checks: teach a standard review flow before any AI output is accepted.
  • Measure: pre/post assessments, error rates, escalation volume, time saved, and learner confidence.
  • Certify: issue internal micro-credentials that expire and require refreshers as tools change.
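The expiring micro-credential rule in the last step can be sketched as a simple check. The 12-month window and the tool-version trigger are assumptions for illustration, not a stated hospital policy.

```python
from datetime import date, timedelta

# Hypothetical refresher policy: a credential lapses after 12 months,
# or immediately if the tool version it was earned on has changed.
VALIDITY = timedelta(days=365)

def needs_refresher(issued_on: date, earned_on_version: str,
                    current_version: str, today: date) -> bool:
    """Return True if the holder must retrain before using the tool."""
    expired = (today - issued_on) > VALIDITY
    tool_changed = earned_on_version != current_version
    return expired or tool_changed

print(needs_refresher(date(2025, 1, 15), "tool-1.0", "tool-1.0", date(2025, 6, 1)))  # False
print(needs_refresher(date(2024, 1, 15), "tool-1.0", "tool-1.0", date(2025, 6, 1)))  # True: expired
print(needs_refresher(date(2025, 1, 15), "tool-1.0", "tool-2.0", date(2025, 6, 1)))  # True: tool changed
```

Tying expiry to tool version, not just the calendar, captures the point that competence certified on one model release does not automatically transfer to the next.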

Rapid validation checklist for staff

  • Define the decision: automation, recommendation, or draft-only?
  • Trace inputs: source, permissions, de-identification, and bias risks.
  • Compare: AI output vs. baseline method; note deltas and failure modes.
  • Stress test: adversarial prompts, outliers, and look-alike cases.
  • Safety screen: privacy, fairness, and clinical appropriateness.
  • Approval: required human review and sign-off documented.
  • Log: prompt, version, output, decision, and reviewer.
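The final logging step in the checklist can be sketched as one audit record per reviewed output. The field names and decision values here are illustrative assumptions, not an actual clinical schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputLog:
    """One audit-trail entry per reviewed AI output (illustrative fields)."""
    prompt: str            # exact prompt text, for reproducibility
    model_version: str     # tool/model version in use at review time
    output: str            # raw AI output, before any human edits
    decision: str          # "accepted", "edited", or "rejected"
    reviewer: str          # named human who performed the sign-off
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def is_signed_off(self) -> bool:
        # No output counts as approved without a named human reviewer.
        return bool(self.reviewer) and self.decision in {"accepted", "edited"}

entry = AIOutputLog(
    prompt="Draft a plain-language discharge summary from the note below.",
    model_version="tool-1.2.0",
    output="...",
    decision="accepted",
    reviewer="J. Doe, RN",
)
print(entry.is_signed_off())  # True
```

Making the reviewer a required field, rather than an optional annotation, encodes the human-in-the-loop rule directly into the log structure: an unreviewed output simply cannot be recorded as approved.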

Governance that supports teaching and practice

Pair training with a standing AI review group, model and tool inventories, and standard operating procedures. Use recognized frameworks to structure risk and oversight, such as the NIST AI Risk Management Framework (AI RMF) and the WHO guidance on the ethics and governance of AI for health.

90-day rollout blueprint

  • Days 0-30: Inventory AI use, draft policy, pick 2-3 safe pilot use cases, build starter modules.
  • Days 31-60: Train first cohorts, run tabletop drills, finalize validation checklists, launch logging.
  • Days 61-90: Expand to additional roles, issue micro-credentials, publish dashboards, refine from feedback.

Adapting for K-12 and higher education

Emphasize digital citizenship, acceptable use, and source verification. Use project-based work: data bias labs, citation practice with AI assist, and policy debate. For higher ed and clinical programs, add case reviews, de-identification practice, and interdisciplinary evaluation teams.

Bottom line

The message is clear: AI literacy is patient safety and workforce development. Follow a repeatable validation process, teach ethics as a habit, and certify what good looks like. That is how educators turn AI from risk into reliable practice.