OpenAI buys Torch to build out ChatGPT Health
OpenAI has acquired Torch, a young healthcare startup that unifies lab results, medications and visit recordings across data sources, to accelerate ChatGPT Health. The Information reported the deal at around $100 million.
The Torch team (Ilya Abyzov, Eugene Huang, James Hamlin and Ryan Oman) will join OpenAI. As Torch put it, "We started Torch to build a medical memory for AI, unifying scattered records into a context engine that helps you see the full picture, connect the dots, and make sure nothing important gets lost in the noise again."
What Torch brings
Torch aggregates a patient's medical information from hospitals, labs, wearables and consumer testing companies into one place. In plain terms: a data layer that turns fragmented inputs into a single, queryable context for AI.
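Torch hasn't published its architecture, but the idea of a unifying data layer can be sketched in a few lines. The snippet below (a hypothetical illustration; the record shape, field names and `from_fhir_observation` helper are assumptions, not Torch's actual design) normalizes a FHIR-style lab Observation into one common record type that also preserves its source:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HealthRecord:
    # The common shape every source is normalized into
    kind: str    # e.g. "lab", "medication", "wearable"
    name: str
    value: str
    date: date
    source: str  # provenance: which system the record came from

def from_fhir_observation(obs: dict, source: str) -> HealthRecord:
    """Flatten a FHIR-style Observation into the common record shape."""
    q = obs["valueQuantity"]
    return HealthRecord(
        kind="lab",
        name=obs["code"]["text"],
        value=f'{q["value"]} {q["unit"]}',
        date=date.fromisoformat(obs["effectiveDateTime"][:10]),
        source=source,
    )

# A fragment resembling a FHIR Observation pulled from a lab portal
lab_json = {
    "code": {"text": "Hemoglobin A1c"},
    "valueQuantity": {"value": 5.9, "unit": "%"},
    "effectiveDateTime": "2025-11-03T09:15:00Z",
}

record = from_fhir_observation(lab_json, source="lab-portal")
print(record.name, record.value, record.source)
```

Once wearable readings, medication lists and visit notes are all mapped into one shape like this, a model can query a single timeline instead of reasoning over incompatible payloads.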
What ChatGPT Health is
Last week, OpenAI launched ChatGPT Health, connecting its chatbot to users' medical records and wellness apps to deliver more personalized answers to medical questions. According to OpenAI, ChatGPT has more than 800 million regular users; roughly one in four ask health-related questions each week, and more than 40 million do so daily.
Why it matters for healthcare teams
- Data integration becomes the moat: Torch's "medical memory" suggests deeper EHR, lab and wearable connectivity. Expect heavier use of FHIR APIs and tighter EHR partnerships.
- Consent and compliance move to the front: If patient data flows through ChatGPT Health, teams need clear consent, BAAs where required, and audit trails. The HHS overview of the HIPAA Privacy Rule is the baseline reference.
- Provenance and accuracy will be scrutinized: If the model reasons across multiple sources, clinicians will want data lineage, timestamps, and source links for verification.
- Clinical safety guardrails are mandatory: Define when the system can summarize, suggest, or draft - and when it must defer to a clinician. Human-in-the-loop remains the standard.
- Workflow first, features second: Prioritize use cases with clear ROI: ambient documentation, patient messaging triage, benefits verification and prior auth prep, and post-visit instructions.
- Security posture matters: Clarify where PHI is stored, encryption at rest/in transit, data retention, red-teaming for prompt injection, and incident response.
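The provenance point above is concrete enough to sketch. A minimal illustration (the `SourcedClaim` and `Answer` types are hypothetical, not a published OpenAI or Torch interface) shows what "traceable outputs" could look like: every statement in a generated summary carries a source identifier and timestamp that a clinician can audit:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SourcedClaim:
    text: str
    source_id: str          # e.g. an EHR document reference
    recorded_at: datetime   # when the underlying record was captured

@dataclass
class Answer:
    summary: str
    claims: list[SourcedClaim] = field(default_factory=list)

    def lineage(self) -> list[str]:
        # One audit line per claim: what was said, from where, and when
        return [f"{c.text} [{c.source_id} @ {c.recorded_at.date()}]"
                for c in self.claims]

ans = Answer(
    summary="A1c trending down over the last two visits.",
    claims=[
        SourcedClaim("A1c 6.4% on 2025-05-02", "doc-123",
                     datetime(2025, 5, 2, tzinfo=timezone.utc)),
        SourcedClaim("A1c 5.9% on 2025-11-03", "doc-456",
                     datetime(2025, 11, 3, tzinfo=timezone.utc)),
    ],
)
print(ans.lineage())
```

A structure like this is what makes the verification workflow possible: a reviewer can reject the summary if any claim lacks a resolvable source.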
Questions health leaders should ask now
- What data does ChatGPT Health read, and what (if anything) can it write back to the EHR?
- How is consent captured, refreshed and revoked? Are there patient-facing controls?
- What are the model's boundaries for clinical advice, and how are disclaimers presented?
- How are errors handled - from hallucinations to mis-labeled records? What's the escalation path?
- Where is PHI processed and stored? Are BAAs in place? What are the default retention windows?
- Can we trace outputs to sources with timestamps and confidence signals?
- How are model updates validated for safety and bias before rollout?
What to expect next
OpenAI now has a dedicated team and a data unification layer - a strong signal that provider, payer and digital health pilots will expand. Timelines and integration details aren't public yet, but expect early use cases around information retrieval, summarization and patient guidance with strict oversight.
If you're planning for 2026, start with a tight governance plan, a small set of high-value workflows, and a clear measurement framework.
This is a developing story and will be updated.