Dr Bot: Can ChatGPT be trusted with your patients' health?
Three years after launch, OpenAI says 40 million people ask ChatGPT health questions every day. Now there's a new health feature in Australia that can connect with medical records and wellness apps to personalize responses.
That's a big promise. The real question for clinicians is simple: where is this useful, where is it risky, and what do you need in place before anyone connects patient data to a chatbot?
What this new wave of "health AI" actually offers
The pitch is personalization. If a system can read medication lists, lab results, and device data, it can produce responses that feel context-aware. Think summaries, reminders, explanations written in plain language, and lighter admin.
Helpful? Yes, if it works as a copilot rather than a decider. These tools are communication and workflow aids. They are not a substitute for clinical judgment.
The trust gap: accuracy, bias, and drift
Large models still hallucinate. They sound confident when they're wrong, and small prompt changes can flip answers. Without source citations and clear reasoning, verification takes time you don't have.
Bias is another fault line. If training data under-represents certain populations, output quality will vary. Safety performance can also shift with model updates, which means validation is not a one-and-done task.
Privacy and security: the non-negotiables
Connecting medical records raises high-stakes questions. Where does data go, who can access it, how long is it stored, and can it be used for model training or product improvement? You need firm answers in writing.
De-identification isn't a free pass: re-identification risk grows as more data sources are linked. Enforce least-privilege access, audit everything, and confirm data residency and encryption standards end to end.
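As one small, concrete layer, you can scrub obvious identifiers before any text leaves your boundary. The sketch below is illustrative only: the `redact_phi` helper and its patterns are assumptions, and pattern matching on its own is not a de-identification strategy, a compliant agreement, or a substitute for a vetted de-identification pipeline.

```python
import re

# Illustrative patterns only. Real de-identification needs a vetted pipeline;
# names, addresses, and free-text identifiers will slip past simple regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[\s-]?\d(?:[\s-]?\d){8}\b"),  # AU-style numbers (assumption)
    "MEDICARE": re.compile(r"\b\d{4}[\s-]?\d{5}[\s-]?\d\b"),         # 10-digit Medicare-like IDs (assumption)
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace obvious identifiers with placeholders before text leaves your systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    note = "Pt DOB 12/03/1985, Medicare 1234 56789 1, call 0412 345 678 or jane@example.com"
    print(redact_phi(note))
    # Pt DOB [DATE], Medicare [MEDICARE], call [PHONE] or [EMAIL]
```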
Is it regulated?
Experts in Australia have already raised concerns about the lack of clear regulation around new chatbot health features. If a system influences diagnosis or treatment, it may fall under the Therapeutic Goods Administration's Software as a Medical Device (SaMD) rules and require formal oversight.
Your baseline: comply with the Australian Privacy Principles (APPs) and align with established ethics guidance for AI in health.
Practical guardrails for health services
- Define approved use cases: education, discharge summaries, coding support, lifestyle advice. Prohibit autonomous diagnosis or prescribing.
- Keep a human in the loop for any patient-facing or clinical output. Require clinician sign-off.
- Lock down data flows: no PHI to external models without a compliant agreement, data processing addendum, and documented retention limits.
- Disable model training on your data by default. Confirm in contracts.
- Stand up a clinical safety case: risk assessment, failure modes, mitigations, and rollback plan.
- Validate against gold-standard test sets across diverse populations. Track precision, recall, and harmful error rates (see the sketch after this list).
- Enable audit logs for prompts, outputs, and user actions. Review regularly.
- Provide staff training on prompt hygiene, verification, and privacy.
- Obtain informed patient consent where data is used. Offer a clear opt-out.
- Monitor for drift and revalidate after model updates.
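To make the validation bullet above concrete, here is a minimal sketch. It assumes a labelled gold-standard test set where each case records the expected answer, the model's answer, a reviewer's judgement of whether any error was harmful, and a population subgroup; the field names are illustrative assumptions, not a standard schema.

```python
from collections import defaultdict

# Assumed record shape (illustrative):
# {"group": "rural", "expected": True, "predicted": True, "harmful_error": False}
# expected/predicted flag a binary outcome such as "escalation required".
def evaluate(cases):
    tallies = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "harmful": 0, "n": 0})
    for case in cases:
        t = tallies[case["group"]]
        t["n"] += 1
        if case["predicted"] and case["expected"]:
            t["tp"] += 1
        elif case["predicted"] and not case["expected"]:
            t["fp"] += 1
        elif case["expected"]:
            t["fn"] += 1
        if case["harmful_error"]:
            t["harmful"] += 1

    report = {}
    for group, t in tallies.items():
        report[group] = {
            "precision": t["tp"] / (t["tp"] + t["fp"]) if (t["tp"] + t["fp"]) else None,
            "recall": t["tp"] / (t["tp"] + t["fn"]) if (t["tp"] + t["fn"]) else None,
            "harmful_error_rate": t["harmful"] / t["n"],
            "cases": t["n"],
        }
    return report
```

Re-running the same evaluation after every model update turns "monitor for drift" from a slogan into numbers you can compare release to release.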
Implementation checklist
- Data: Source-of-truth systems, access controls, data minimization, retention policy.
- Security: Encryption at rest/in transit, secrets management, pen testing, incident response.
- Clinical: Indications/contraindications, escalation paths, documented limits of use.
- Legal/Compliance: APPs alignment, SaMD assessment, consent language, vendor liability.
- Operations: Service levels, monitoring, support, change management, user training.
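One way to operationalise the audit-log and monitoring items above is an append-only, structured record for every prompt, output, and reviewer action. This is a sketch under assumptions, not any vendor's logging API; the function and field names are illustrative, and hashing keeps raw content (and any PHI it contains) out of the log stream while still letting you prove what was sent and returned.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def audit_event(user_id: str, action: str, prompt: str, output: str, model_version: str) -> None:
    """Record who did what, with which model version, without copying raw text into logs."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,                # e.g. "draft_generated", "clinician_signoff"
        "model_version": model_version,  # needed to attribute behaviour changes to model updates
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    logger.info(json.dumps(record))

# Example: a clinician signs off on a drafted discharge summary.
audit_event("dr.lee", "clinician_signoff", "Summarise this discharge note...", "Draft summary...", "model-2025-06")
```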
How clinicians can use ChatGPT safely today
- Use it to translate clinical language for patient education. Verify facts before sending.
- Ask for uncertainties and alternatives. Good prompts: "List points of disagreement in guidelines" or "Cite sources." (See the prompt sketch after this list.)
- Never paste identifiable data into a non-compliant tool. Use institution-approved systems only.
- Cross-check against primary sources and local guidelines. If it can't cite, don't trust it.
- Treat it like a junior assistant: helpful for drafts and ideas, never the final authority.
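For the prompting advice above, here is a small illustrative template. The wording is an assumption rather than a validated prompt, and it deliberately contains no patient identifiers.

```python
# Illustrative template for drafting patient-education material in an approved tool.
PROMPT_TEMPLATE = """You are helping a clinician draft patient education material.
Topic: {topic}
Audience: adults, plain language, short sentences.

Requirements:
- Explain the topic without giving individualised medical advice.
- List any points where major guidelines disagree.
- State clearly what you are uncertain about.
- Cite the sources you rely on so they can be checked.
"""

print(PROMPT_TEMPLATE.format(topic="starting a statin after a high cholesterol result"))
```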
Bottom line
ChatGPT can help with patient communication, paperwork, and as a support for clinical thinking, but only inside strong guardrails. Connect it to medical records only after governance, validation, and consent are in place.
Trust comes from process, not promises. Build the process first.
Upskill your team
If you're setting standards and training for safe clinical use of AI, here's a curated starting point for role-based learning: AI courses by job.