Singapore hospitals introduce AI-free shifts to prevent doctor deskilling
Singapore hospitals add AI-free periods to curb deskilling and keep clinicians sharp. Evidence and policies stress AI-on/off modes, limits, and rigorous performance tracking.

AI-free periods in Singapore healthcare: keeping clinicians sharp while scaling AI
Singapore's public healthcare clusters are taking a clear stance on AI deskilling: build guardrails now, not later. National University Health System (NUHS) has rolled out "AI-free" periods over the past year. The National Healthcare Group (NHG) is exploring similar measures.
Deskilling is the drop in core skills due to over-reliance on tools. In clinical practice, that can mean slower detection, poorer judgment, and reduced confidence when AI is unavailable.
What the evidence shows
A study published in August 2025 in The Lancet Gastroenterology & Hepatology found experienced endoscopists who used AI assistance became less adept without it. Pre-rollout, doctors detected growths in about 28 percent of colonoscopies; after rollout, detection fell to around 22 percent. The study involved four endoscopy centres in Poland using an AI polyp detection tool.
NUHS' Artificial Intelligence Office lead, Adjunct Professor Ngiam Kee Yuan, cautioned against overgeneralising from a single trial with limitations, including unclear handling of false positives. Still, he supports AI-free periods for skills and judgment-based use cases to keep clinicians' performance current.
For broader research on endoscopy and AI, The Lancet Gastroenterology & Hepatology is a useful reference.
How Singapore's clusters are responding
NUHS has deployed AI in targeted domains while maintaining skill currency. Examples include AI-enabled polyp identification during endoscopy (early 2024), SerenityBot for breast cancer treatment recommendations (April 2025), and Champ, a chatbot supporting chronic disease self-tracking (since 2023). AI-free periods are used to ensure clinicians retain hands-on capability and judgment.
NHG is piloting mature, lower-risk solutions first. Trials include imaging, clinical note-taking, and models predicting fall risk and length of stay, plus back-office applications. NHG is also assessing usage limits and "AI-on/AI-off" modes: for example, AI assistance for routine consultation summaries, but manual note-taking for complex cases.
SingHealth emphasises thoughtful, measured adoption with rigorous evaluation through domain experts before implementation. AI is used to reduce routine administrative load and support clinical decisions, with attention to performance tracking and potential deskilling.
National healthtech agency Synapxe deploys AI with clinicians in the loop. Human decision-making remains the final authority on care pathways and interventions.
Education: prevent deskilling, "never-skilling," and "mis-skilling"
Medical schools are embedding AI literacy and ethics across training. Yong Loo Lin School of Medicine now requires a biomedical informatics minor for first-year students, covering AI and machine learning in clinical contexts.
LKCMedicine integrated digital health courses across its five-year curriculum, bringing ethicists, lawyers, and patient advocates into discussions. Training in procedures like colonoscopy starts with fundamentals before using AI.
Duke-NUS flags three risks: deskilling, "never-skilling" (skills never acquired), and "mis-skilling" (AI errors or biases reinforced). The school builds research skills and critical thinking and runs multidisciplinary workgroups to craft safe, effective uses of AI in education and care.
Practical steps for your hospital or clinic
- Define AI-free windows: schedule regular, protected periods where clinicians perform without AI in skill-intensive areas (e.g., endoscopy, ECG interpretation, triage).
- Use AI-on/AI-off by case complexity: routine, low-risk tasks can be AI-assisted; complex or ambiguous cases should default to manual workflows.
- Set usage limits: cap frequency or duration of AI use per clinician or service; trigger alerts when thresholds are exceeded.
- Benchmark performance with and without AI: track detection rates (e.g., polyp/adenoma detection), sensitivity/specificity, time-to-diagnosis, readmissions, and complication rates.
- Run no-AI drills: simulate downtime and require manual practice for procedures, documentation, and decision-making.
- Keep human-in-the-loop: mandate clinician oversight, second checks for high-stakes decisions, and clear escalation paths.
- Audit for bias and drift: review false positives/negatives, subgroup performance, and model stability over time.
- Invest in training: integrate AI literacy, data interpretation, and error recognition into CME, residencies, and fellowships.
- Communicate with patients: disclose AI use, clarify clinician responsibility, and document shared decisions.
- Start with mature, lower-risk tools: stabilise workflows before adding higher-stakes decision support.
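The usage-limit step above can be sketched in a few lines. This is a hypothetical illustration, not any hospital's actual system: the class name, the per-day cap, and the clinician IDs are all invented, and a real deployment would persist counts and hook into the EHR audit log.

```python
from collections import defaultdict


class AIUsageLimiter:
    """Track AI-assisted cases per clinician and flag when a cap is exceeded.

    Hypothetical sketch: counts are kept in memory and reset per session;
    a production version would persist them and reset daily.
    """

    def __init__(self, daily_cap: int):
        self.daily_cap = daily_cap
        self.counts = defaultdict(int)  # clinician_id -> AI-assisted cases

    def record_ai_case(self, clinician_id: str) -> bool:
        """Record one AI-assisted case; return True if an alert should fire."""
        self.counts[clinician_id] += 1
        return self.counts[clinician_id] > self.daily_cap


limiter = AIUsageLimiter(daily_cap=2)
assert limiter.record_ai_case("dr_tan") is False  # 1st case, under cap
assert limiter.record_ai_case("dr_tan") is False  # 2nd case, at cap
assert limiter.record_ai_case("dr_tan") is True   # 3rd case, alert fires
```

The alert return value would feed whatever escalation path the service already uses, rather than blocking the clinician mid-procedure.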
What to measure
- Clinical accuracy: detection rates, diagnostic yield, adverse events, mortality and morbidity.
- Operational metrics: turnaround times, documentation quality, length of stay, throughput.
- Safety signals: escalation rates, overrides, incident reports, and near-misses.
- Clinician factors: confidence, cognitive load, and satisfaction; track during AI-on and AI-off periods.
- Equity and bias: performance across age, sex, ethnicity, comorbidities, and device types.
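Benchmarking AI-on against AI-off periods ultimately comes down to comparing rates between two cohorts. A standard two-proportion z-test is one way to check whether an observed drop in detection is statistically meaningful; the counts below are illustrative round numbers echoing the 28-vs-22 percent figures above, not the trial's raw data.

```python
import math


def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """z-statistic comparing detection rate in cohort A vs cohort B."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se


# Illustrative: 28% detection in AI-off colonoscopies vs 22% without AI
# after rollout, 1,000 procedures per cohort.
z = two_proportion_z(280, 1000, 220, 1000)
print(round(z, 2))  # |z| > 1.96 suggests the drop is unlikely to be chance
```

In practice you would track these rates continuously per clinician and per service, and treat a significant AI-off decline as a trigger for more protected manual practice.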
Implementation pitfalls to avoid
- Over-automation: removing manual practice entirely from training or complex cases.
- Policy drift: inconsistent AI-on/AI-off rules across departments.
- Ignoring feedback: not closing the loop on clinician-reported errors or false alarms.
- No contingency plans: poor readiness for AI outages or data pipeline failures.
- Unvalidated updates: silent model changes without re-evaluation and sign-off.
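The last pitfall, silent model changes, can be guarded against by fingerprinting the deployed artifact and refusing anything without a recorded sign-off. A minimal sketch, with invented reviewer names and an in-memory registry standing in for a real model registry:

```python
import hashlib

APPROVED = {}  # fingerprint -> sign-off record


def model_fingerprint(model_bytes: bytes) -> str:
    """SHA-256 of the model artifact; any silent change alters the hash."""
    return hashlib.sha256(model_bytes).hexdigest()


def approve(model_bytes: bytes, reviewer: str) -> None:
    """Record sign-off for this exact artifact."""
    APPROVED[model_fingerprint(model_bytes)] = {"signed_off_by": reviewer}


def may_deploy(model_bytes: bytes) -> bool:
    """Deployment is allowed only for fingerprints with recorded sign-off."""
    return model_fingerprint(model_bytes) in APPROVED


v1 = b"model-weights-v1"
approve(v1, "clinical-ai-board")
assert may_deploy(v1)                      # approved artifact passes
assert not may_deploy(b"model-weights-v2") # unreviewed update is blocked
```

The same check belongs in the serving pipeline itself, so a vendor update cannot reach clinicians before re-evaluation.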
Resources
For governance and ethics guidance, see the WHO's recommendations: Ethics and governance of AI for health. For context on evidence in endoscopy, refer to The Lancet Gastroenterology & Hepatology.
If your teams need structured upskilling in AI literacy and workflows, explore curated options by role: Complete AI Training - Courses by Job.
Bottom line
AI can improve throughput and consistency, but clinical skills must remain intact. AI-free periods, usage limits, and rigorous tracking keep clinicians competent and ready, especially when judgment matters most.