ChatGPT Health launches in Australia: useful, risky, and in need of real guardrails
OpenAI has begun a limited rollout of ChatGPT Health in Australia. A waitlist is open, and early users can connect medical records and wellness apps to get answers that feel personal.
Experts see promise and risk in the same breath. The tool can clarify test results and support self-care, but it also blurs the line between general information and medical advice, especially when the responses sound confident.
The case that set off alarms
A 60-year-old man with no mental health history arrived at an emergency department convinced his neighbour was poisoning him. He was experiencing worsening hallucinations and tried to leave the hospital.
Doctors later learned he had been consuming sodium bromide, an industrial chemical he had bought online, after an AI chatbot suggested it could replace table salt. Sodium bromide can build up in the body and cause bromism, with symptoms like hallucinations, stupor, and impaired coordination. This is exactly the kind of miss that has researchers calling for tighter oversight.
Why experts are concerned
- Not regulated as a medical device: There are no mandatory safety controls, risk reporting, post-market surveillance, or published test data.
- Opaque evaluation: OpenAI references HealthBench and physician input, but key methods and results are not in independent, peer-reviewed studies.
- Safety gaps: Some outputs omit side effects, contraindications, allergy warnings, and risk flags for supplements, diets, or practices.
- Blurry boundaries: Many users can't tell where general information ends and medical advice begins, especially when responses feel personalised.
What OpenAI says
OpenAI reports working with more than 200 physicians across 60 countries to improve the models. ChatGPT Health runs in a separate space with default privacy protections, encrypted data, and sharing limited to cases where the user consents or specific policy exceptions apply.
Why people will still use it
Rising out-of-pocket costs and long wait times push people to alternatives. ChatGPT Health could help with routine questions about chronic conditions and provide answers in multiple languages, which is valuable for people without strong English skills.
But there is a power imbalance. Large platforms are moving faster than governments, writing their own rules on privacy and transparency. The benefits often go to people with time, education, and access; the risks hit those without them. That's the equity problem policymakers must address now.
What governments and health leaders should set before wider rollout
- Regulatory position: Clarify if and when tools like this meet Australia's Software as a Medical Device criteria under the TGA framework.
- Independent testing: Require pre-deployment evaluations on Australian use cases; publish protocols, error rates, and safety findings.
- Harm reporting: Stand up a national incident channel for AI-in-health issues, with timelines for triage and public summaries.
- Safe defaults: Block high-risk categories (medication dosing, chemical substitutions, pregnancy/infant care, acute triage) and force escalation to clinicians.
- Clear labels: Prominent statements on limits, uncertainty, and who is responsible; show sources; show when data is missing.
- Data protections: Local storage standards, strict consent flows, retention limits, and regular third-party audits.
- Equity plan: Community-language support with clinical review, low-literacy modes, and targeted outreach for vulnerable groups.
- Ongoing oversight: Post-market surveillance, periodic safety audits, and public dashboards on incidents and fixes.
Practical guardrails for hospitals, clinics, and developers
- Use it as a drafting tool only. Require clinician review before any patient-facing advice.
- Block entire advice classes: chemical/food substitutions, supplement stacks, dosing, and high-risk home remedies (see the sketch after this list).
- Force safety context: side effects, contraindications, allergy checks, interactions, and "what to avoid."
- Show uncertainty: require citations and an "evidence confidence" note; prefer TGA, NHMRC, or peer-reviewed sources.
- Escalation paths: offer "call a nurse/GP now" options and local health service directories for red-flag symptoms.
- Incident handling: log harms and near-misses; review within 72 hours; share learnings and fixes.
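To make these guardrails concrete, here is a minimal sketch of how a developer team might gate a draft answer before it ever reaches a patient. It is illustrative only: the category names, the classify_request() helper, and the escalation message are assumptions for this example, not part of any real ChatGPT Health API.

```python
# Illustrative guardrail sketch. Category names, classify_request(), and the
# escalation wording are assumptions, not a real ChatGPT Health interface.

BLOCKED_CATEGORIES = {
    "medication_dosing",
    "chemical_or_food_substitution",
    "supplement_stack",
    "acute_triage",
}

REQUIRED_SAFETY_FIELDS = ("side_effects", "contraindications", "allergy_warnings")


def classify_request(user_query: str) -> str:
    """Placeholder classifier; a real system would use a clinically reviewed taxonomy."""
    lowered = user_query.lower()
    if "dose" in lowered or "dosage" in lowered:
        return "medication_dosing"
    if "instead of salt" in lowered or "substitute for" in lowered:
        return "chemical_or_food_substitution"
    return "general_information"


def handle_request(user_query: str, draft_answer: dict) -> dict:
    """Apply the block list, safety-context checks, and escalation paths
    before anything patient-facing is released. draft_answer is assumed to
    carry the model output plus structured safety fields."""
    category = classify_request(user_query)

    # Block entire advice classes and force escalation to a clinician.
    if category in BLOCKED_CATEGORIES:
        return {
            "status": "escalate",
            "message": (
                "This topic needs a clinician. Contact your GP, or call the "
                "Poisons Information Centre on 13 11 26 (Australia) if you may "
                "have taken something harmful."
            ),
        }

    # Force safety context: hold incomplete drafts for clinician review
    # rather than returning them without side effects or warnings.
    missing = [f for f in REQUIRED_SAFETY_FIELDS if not draft_answer.get(f)]
    if missing:
        return {"status": "clinician_review", "missing_fields": missing}

    return {"status": "ok", "answer": draft_answer}
```

The point of the sketch is the ordering: classify first, refuse or escalate high-risk categories outright, and treat missing safety context as a reason to route to a clinician rather than to publish.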
Simple guidance for consumers
- Do not ingest or apply anything based on AI alone. No chemicals, no supplements, no dose changes without a clinician.
- Use AI to prepare questions for appointments, not to replace professional care.
- Ask for sources. Favour official guidance (TGA, NHMRC) or peer-reviewed research.
- If advice sounds unusual, stop. Call your GP or the Poisons Information Centre (13 11 26 in Australia).
- Report any harm to your state health department and the platform so patterns are caught early.
The bottom line
ChatGPT Health could help people understand their health, but "helpful" without safety rails is a liability. Before a wider rollout, Australia needs clear rules, independent testing, transparent reporting, and real consumer education.
For ethics, safety, and governance guidance, see the WHO's recommendations on AI for health: Ethics and governance of AI for health.
If your team needs structured AI upskilling by role (healthcare, government, and beyond), explore role-based AI courses that focus on safety, policy, and practical workflows.