AI's New Healthcare Push: What Providers, Payers, and Startups Need to Do Now
This month, Anthropic and OpenAI moved deeper into healthcare with new tools for both consumers and clinical teams. The timing makes sense. Patients are already using large language models for health questions at massive scale, and that behavior is shaping expectations inside clinics, call centers, and patient portals.
The upside is access. The risk is trust. Healthcare leaders now have to decide whether these tools become an asset inside the care continuum, or a parallel system that undermines it.
What Anthropic and OpenAI Just Launched
- OpenAI introduced two offerings:
- ChatGPT Health: a consumer health experience that combines a user's personal health information with ChatGPT to help manage health and wellness.
- OpenAI for Healthcare: tools for providers to cut administrative work and support care planning.
- OpenAI also acquired medical records startup Torch in a deal reported at about $100 million.
- Anthropic unveiled a suite of Claude tools and new agent capabilities for prior authorization, billing, and clinical trial workflows, plus the ability for paid users to connect and query their own medical records for summaries, explanations, and visit prep.
Why This Was Inevitable
Consumer behavior forced the issue. Healthcare AI expert Saurabh Gombar notes that large language models now field a staggering volume of health questions: he estimates about 5% of LLM traffic is health-related and that roughly 40 million unique health questions are asked daily. If chatbots are where patients start, tech companies are already in healthcare whether they say it out loud or not.
That shifts the clinic dynamic. Many patients now arrive convinced they need a certain test or treatment based on chatbot guidance. Providers didn't choose this change, but they have to respond to it.
Startup Fallout: Where Moats Hold, and Where They Don't
Kamal Singh at WestBridge Capital expects consumer wellness and nutrition apps to feel the most pressure. Generic, chat-based advice without deep specialization is easy for big LLMs to absorb given their reach and habit loops. Think Noom, Fay, and Zoe: solid products, but harder to defend if the core value is broad advice.
He sees stronger footing in specialized clinical areas like chronic disease management, where advantage comes from longitudinal data, disease-specific protocols, and tighter integration with clinicians. Care coordination and care management also stand out, especially when AI augments human teams instead of replacing them.
AI-driven primary care sits in the middle: sophisticated enough to resist total commoditization, but exposed to big-platform gravity. Counsel Health is one early example blending AI with physicians for fast, personalized advice. Survival here will depend on outcomes, smart reimbursement models, and hybrid care that proves safer and cheaper, not just "more AI."
LLMs as Healthcare's Front Door
Gombar frames it bluntly: chatbots are becoming the first opinion; clinicians are becoming the second. They're easier to access, available 24/7, and free at the point of use. Add clinician shortages and coverage churn, and the shift accelerates.
For context on coverage loss during the Medicaid unwinding, see Kaiser Family Foundation's reporting.
The Risk Profile: Accuracy, Accountability, and Context
Wrong answers in healthcare cause harm. Traditional providers operate under malpractice rules, audit trails, and clear liability. Chatbots often lean on disclaimers. In reality, many patients take their output as medical advice.
Gombar argues for stronger responsibility signals from AI vendors: transparent error rates, clear markers when evidence is weak or uncertain, and language that conveys uncertainty rather than confident speculation.
Privacy is table stakes. Anthropic says it does not train on user health data, requires explicit consent for each integration, and lets users disconnect at any time. OpenAI points to short data retention, stronger encryption for health conversations, isolation of ChatGPT Health chats, and no use of those chats for model training. For enterprise tools, data stays within the organization's secure workspace.
Provider Playbook: Integrate Without Fragmenting Care
Kevin Erdal at Nordic highlights a real operational risk: shadow workflows. Clinicians may start relying on AI-generated summaries or patient-submitted outputs without standards for validation or documentation. That erodes continuity and adds hidden liability.
The right move isn't to block consumer AI; it's to absorb it responsibly. Set up pathways so patient-facing AI augments care instead of creating a parallel record.
- Standards and governance
- Define when clinician review is required for AI-generated inputs (e.g., med changes, triage advice, diagnostic suggestions).
- Capture provenance in the EHR: what came from an AI, which model, and when.
- Create a review workflow for patient-generated AI content in portals and messages.
- Clinical safety
- Mandate uncertainty cues: AI outputs should flag confidence level and evidence strength.
- Establish escalation rules for ambiguous or high-risk recommendations.
- Audit for hallucinations and track safety events tied to AI-assisted decisions.
- Data and privacy
- Use isolated workspaces and strict access controls for AI-assisted documentation.
- Prohibit model training on PHI unless governed, consented, and contractually protected.
- Review vendor BAAs and model/data lineage; verify deletion and encryption claims.
- Practical pilots
- Start with low-risk, high-friction tasks: prior auth packets, benefits checks, visit prep summaries, discharge instruction drafting.
- Measure outcomes: clinician time saved, turnaround times, denial rates, readmissions, patient comprehension scores.
- Train clinicians on reviewing AI outputs: what to accept, what to question, what to reject.
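The provenance bullet above can be sketched as a minimal record structure. This is an illustrative sketch, not a FHIR or vendor schema; the field names, the `AIProvenance` class, and the `stamp` helper are assumptions for demonstration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AIProvenance:
    """Minimal provenance stamp for an AI-generated input to the chart."""
    source: str                        # e.g. "patient-submitted chatbot summary"
    model: str                         # model name/version reported by the vendor
    captured_at: str                   # ISO-8601 UTC timestamp
    reviewed_by: Optional[str] = None  # clinician who validated it, if any

def stamp(source: str, model: str) -> AIProvenance:
    """Create a provenance record the moment AI content enters a workflow."""
    return AIProvenance(
        source=source,
        model=model,
        captured_at=datetime.now(timezone.utc).isoformat(),
    )

note = stamp("patient-submitted visit-prep summary", "example-model-v1")
print(note.reviewed_by is None)  # True: not yet clinician-reviewed
```

Keeping the record immutable (`frozen=True`) mirrors the audit-trail goal: a new record is written when a clinician signs off, rather than overwriting the original capture.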
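The uncertainty-cue and escalation bullets could be enforced with a simple gate in the review workflow. The thresholds, category names, and evidence labels below are illustrative assumptions, not a validated clinical policy.

```python
# Hypothetical escalation gate: route AI recommendations to clinician
# review when the topic is high-risk, confidence is low, or evidence is weak.
HIGH_RISK_CATEGORIES = {"medication_change", "triage", "diagnosis"}

def needs_clinician_review(category: str, confidence: float,
                           evidence_strength: str) -> bool:
    """Return True when an AI recommendation should go to a clinician."""
    if category in HIGH_RISK_CATEGORIES:
        return True                         # high-risk advice is always reviewed
    if confidence < 0.8:
        return True                         # model reports low confidence
    return evidence_strength == "weak"      # supporting evidence is weak

print(needs_clinician_review("wellness_tip", 0.95, "strong"))  # False
print(needs_clinician_review("triage", 0.99, "strong"))        # True
```

Note the ordering: risk category overrides confidence, so a highly confident but high-risk suggestion still escalates, which matches the "always review" intent of the list above.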
Startup Playbook: Compete Where Distribution Isn't Enough
- Own hard data: longitudinal, disease-specific, and rights-cleared. Build with explicit consent and clear value for the patient.
- Hybrid care beats chat-only: combine AI with clinicians, protocols, and outcomes guarantees.
- Be EHR-native: integrate with scheduling, orders, and documentation. Deliver fewer clicks, fewer denials, faster turnaround.
- Show outcomes: agree on metrics with customers: cost per episode, time to therapy, adherence, CAHPS, STAR/HEDIS impacts.
- Privacy posture as a feature: publish retention, encryption, access logs, and model training boundaries.
- Reimbursement strategy: align with CPT, value-based contracts, or employer benefits to avoid being a pure cash-pay app.
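The privacy-posture bullet could even be published as a machine-readable disclosure that buyers check programmatically. This is a sketch under assumptions: the field names and the baseline thresholds are hypothetical, not an industry schema.

```python
import json

# Hypothetical machine-readable privacy disclosure; field names are
# illustrative, not a standard schema.
privacy_posture = {
    "retention_days": 30,           # how long conversation data is kept
    "encryption_at_rest": "AES-256",
    "access_logging": True,
    "phi_used_for_training": False, # the boundary most buyers ask about first
}

def passes_baseline(posture: dict) -> bool:
    """Check a vendor posture against a sample buyer baseline (assumed)."""
    return (posture["retention_days"] <= 90
            and posture["access_logging"]
            and not posture["phi_used_for_training"])

print(json.dumps(privacy_posture, indent=2))
print(passes_baseline(privacy_posture))  # True
```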
What This Adds Up To
LLMs are already the first touch for health questions. The open question is control and accountability: who sets the standard of care when an AI gives the first answer and a clinician gives the second?
The organizations that win will integrate AI into existing workflows, document responsibility, and protect continuity between the chat and the chart. Access to information goes up. Trust has to keep pace.
Upskill Your Teams on LLMs
If your clinicians and ops leaders need a fast, practical foundation on ChatGPT and Claude (prompting, safety, and workflow integration), structured training resources can help.