Anthropic and Google join OpenAI in healthcare AI as safety concerns loom

Anthropic and Google join OpenAI in healthcare with Claude for Healthcare and MedGemma 1.5. The focus now is real workflows, safety, and small pilots that actually prove value.

Published on: Jan 20, 2026

Anthropic and Google follow OpenAI into healthcare AI

Two more major AI firms have entered healthcare. Anthropic introduced Claude for Healthcare, and Google released MedGemma 1.5. Both moves arrive weeks after OpenAI launched ChatGPT Health in the US.

The signal is clear: AI is moving from general assistants to clinical and consumer use cases. The question for healthcare leaders isn't "if," but "how" to deploy safely.

What's new

Anthropic: Claude for Healthcare. A toolkit for providers, payers, and consumers that connects to lab results and health records. It can summarise medical history, explain tests in plain language, surface patterns across fitness and health metrics, and draft questions for appointments.

Google: MedGemma 1.5. An update to its open medical model that adds interpretation of three-dimensional CT and MRI scans, plus whole-slide histopathology images. This pushes imaging support beyond text-only workflows.

OpenAI: ChatGPT Health (US-only). Can analyse medical records and health app data to provide personalised guidance. OpenAI states it supports, not replaces, clinical care; it is not for diagnosis or treatment, and is meant to help with everyday questions and pattern tracking over time.

Availability and regulatory posture

ChatGPT Health is limited to the US for now. OpenAI says it's working through local regulations and additional compliance before a UK launch, including advance consultations with regulators in the UK and EU.

OpenAI also acquired Torch for more than $100 million. Torch specialises in connecting health data sources to answer common health questions: an infrastructure play that hints at deeper EHR and consumer data integration.

Safety and trust: still the constraint

Google removed some AI health summaries after an investigation found they exposed users to misleading information, including summaries that omitted side effects and allergy warnings. It's a reminder that clinical accuracy, context, and safety controls are non-negotiable.

In November, the UK's Medicines and Healthcare products Regulatory Agency advised that AI chatbots should not replace advice from healthcare professionals. That guidance aligns with the cautious deployment most clinical leaders expect.

Euan McComiskie, health informatics lead at the Chartered Society of Physiotherapy, said: "These platforms are also not yet governed by any regulatory, strategic nor policy authority as is the case with our existing healthcare provider organisations. Until those issues are resolved, it is unlikely that generative AI platforms will entirely replace the human-led healthcare interactions. An AI-supported, human-led healthcare organisation can use multiple tools and platforms to operate efficiently, deliver high-quality healthcare whilst also enhancing the trusting and caring relationships that registered healthcare professionals have with the people we work with."

What this means for healthcare teams

  • Use cases are getting specific. From imaging reads to patient-friendly explanations, the tools now map to real workflows, not generic chat.
  • Data connection is the moat. Value comes from safely linking EHRs, labs, and wearables, then summarising and surfacing patterns clinicians can act on.
  • Guardrails will decide adoption. Human oversight, provenance, safety warnings, and audit trails must be built in from day one.
  • Pilots beat hype. Start small, measure impact (time saved, comprehension gains, error rates), and expand only with proven outcomes.

Practical steps to pilot responsibly

  • Define the job-to-be-done: Patient education post-visit, pre-op prep, imaging triage, or admin summarisation.
  • Choose the right model: Claude for patient-facing explanations and summaries; MedGemma 1.5 for imaging-heavy workflows; evaluate ChatGPT Health integrations where permitted.
  • Integrate with care pathways: Keep a clinician in the loop. Require sign-off for any output that touches diagnosis or treatment.
  • Set safety protocols: Mandatory disclaimers, side-effect prompts, allergy checks, and source citations where applicable.
  • Privacy first: Minimise data, log access, and ensure alignment with local regulations and organisational policies.
  • Measure outcomes: Track comprehension scores, appointment readiness, throughput, and quality markers. Stop what doesn't meet thresholds.
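To make the safety-protocol step concrete, here is a minimal sketch of an output gate a pilot team might put between a model and the patient. Everything in it is illustrative: the function names, keyword lists, and disclaimer text are assumptions for this example, not any vendor's API, and a real deployment would use clinically validated rules.

```python
# Illustrative sketch of a pilot safety gate: append a mandatory
# disclaimer and flag medication talk that lacks safety context.
# All names and term lists here are hypothetical examples.

DISCLAIMER = (
    "This summary is for information only and does not replace "
    "advice from your healthcare professional."
)

MEDICATION_TERMS = {"dose", "dosage", "tablet", "mg", "prescription"}
SAFETY_TERMS = {"side effect", "side effects", "allergy", "allergic"}

def gate_output(text: str) -> dict:
    """Apply pilot safety checks before any text reaches a patient."""
    lower = text.lower()
    mentions_medication = any(t in lower for t in MEDICATION_TERMS)
    has_safety_info = any(t in lower for t in SAFETY_TERMS)
    return {
        # Every output carries the mandatory disclaimer.
        "text": f"{text}\n\n{DISCLAIMER}",
        # Medication mentions without side-effect or allergy context
        # are held for clinician sign-off rather than auto-released.
        "needs_clinician_review": mentions_medication and not has_safety_info,
    }
```

A rules layer like this is deliberately dumb: it cannot judge clinical accuracy, only enforce that disclaimers are always present and that risky categories of output are routed to a human, which is the guardrail pattern the steps above describe.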

The bottom line

AI in healthcare is moving from promise to practice. The winners will pair high-utility models with tight governance, clinician oversight, and clear ROI. Start with contained, high-signal use cases and build trust step by step.

Upskill your team

If you're evaluating Claude-based workflows or need practical training for frontline teams, see our AI Certification for Claude for hands-on, healthcare-relevant skills.

