GE HealthCare's AI chief on medtech's iPhone moment, safety by design, and what regulators want

GE HealthCare's AI lead says build safety in from day one and prove it with real-world evidence. Be transparent, keep clinicians in the loop, and plan for monitoring and updates.

Published on: Oct 23, 2025

GE HealthCare's chief AI officer on building safe, useful medtech AI

Parminder Bhatia, Chief AI Officer at GE HealthCare, shared clear, no-nonsense guidance for teams bringing AI into medical devices. If you're building, validating, or deploying AI in care settings, his advice lines up with what regulators want and what clinicians actually trust.

What regulators expect right now

Regulators start with patient safety. Bhatia's message is simple: bake safety in from day one. You can't bolt it on later.

  • Responsible AI principles: Safety, validity, transparency, explainability, and fairness. Treat these like requirements, not slogans.
  • Layered safeguards: Plan tests, checks, and failsafes across data, model, workflow, and UI. Include edge cases and failure modes.
  • Evidence over hype: Prove clinical relevance and performance in the intended use, with the intended users, in realistic conditions.

Where oversight is heading

Expect more emphasis on transparency, explainability, lifecycle monitoring, and continuous validation in real-world use, especially with generative and agentic AI. Regulators are already collaborating with industry on how to evaluate these systems.

  • Model transparency: Document intended use, data sources, known limitations, and performance across subpopulations.
  • Change control: Establish update policies, versioning, and rollback plans before you ship.
  • Post-market monitoring: Drift detection, trigger criteria for re-validation, and feedback loops with clinicians.
  • Human factors: Make failure states clear, surface uncertainty, and keep clinicians in the loop.
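The post-market monitoring bullet above can be made concrete with a small sketch: compare a recent field-performance window against the validated baseline and fire a re-validation trigger when the gap exceeds a preset threshold. The function name, metric, and the 0.05 trigger below are illustrative assumptions, not anything from GE HealthCare.

```python
# Minimal post-market drift check (illustrative): compare a recent
# performance window against the validated baseline and flag when the
# drop exceeds a predefined re-validation trigger.
from statistics import mean

def drift_alert(baseline_scores, recent_scores, max_drop=0.05):
    """Return True when mean recent performance falls more than
    `max_drop` below the validated baseline mean."""
    return (mean(baseline_scores) - mean(recent_scores)) > max_drop

# Example: per-batch sensitivity from validation vs. the latest field window
validated = [0.94, 0.95, 0.93, 0.96]
field = [0.88, 0.87, 0.89, 0.86]
if drift_alert(validated, field):
    print("trigger re-validation")  # escalate per the change-control plan
```

In practice the trigger criteria, window sizes, and metrics would be fixed in the change-control plan before shipping, so an alert maps to a predefined response rather than an ad-hoc decision.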

How to build trust with clinicians

Bhatia points to two pillars: build safe, effective solutions and educate care teams. People trust what they understand and can control.

  • Make the intended use and guardrails obvious in the workflow.
  • Offer plain-language explanations and on-screen rationale when possible.
  • Fail safely: degrade gracefully, show uncertainty, and default to clinician judgment.
  • Provide training, quick-start guides, and easy escalation paths.
  • Keep audit logs and outcomes data to support QA and continuous improvement.

Operating principles for product teams

  • Start with the problem, not the tech. Work backward from measurable clinical or operational pain: burnout, throughput, access to expertise, avoidable procedures.
  • Use the right tool for the job. Foundation models and agentic AI can change how teams use data, but the use case dictates the method.
  • Design for access. GE HealthCare's Vscan Air with Caption AI enables a nurse in a rural clinic to capture diagnostic-quality ultrasound, reducing dependence on scarce specialists.
  • Reduce unnecessary interventions. The Vscan Air CL's bladder volume algorithm helps cut avoidable catheterizations by providing fast, reliable measurements with clear visualization.

Lessons that translate to outcomes

Innovation takes time, persistence, and iteration. The goal isn't the flashiest algorithm; it's a clinically meaningful solution you can validate, deploy, and support at scale.

  • Co-develop with clinical partners from day one. Tight feedback loops beat lab-only progress.
  • Invest in prospective and multi-site validation. Real settings surface real issues.
  • Ship in increments. Start with assistive features, then advance as evidence grows.
  • Build cross-functional teams: clinical, regulatory, data, product, quality, and security.
  • Share evidence. Publications, registries, and post-market data build confidence.

A practical build checklist

  • Problem framing: Define the clinical decision, user, setting, and success metrics.
  • Data strategy: Curate representative data with clear provenance, consent, and governance.
  • Fairness checks: Measure subgroup performance and set remediation triggers.
  • Model development: Prefer simpler, explainable methods when they meet the bar. Document trade-offs.
  • Human factors: Prototype in the actual workflow. Reduce clicks and cognitive load.
  • Verification & validation: Bench tests, simulation, then real-world pilots with predefined endpoints.
  • Regulatory plan: Map claims, risk class, applicable standards, and change control approach.
  • Deployment: Versioning, rollback, edge cases, and privacy/security controls.
  • Monitoring: Performance dashboards, drift alerts, incident response, and periodic re-validation.
  • Education: Role-specific training for clinicians, biomed, IT, and support teams.
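The fairness-check item in the checklist above can be sketched as a per-subgroup comparison: compute a metric for each subgroup, compare it against the pooled value, and flag any group that trails by more than a remediation margin. The group labels, sensitivity metric, and 5-point margin here are illustrative assumptions.

```python
# Illustrative subgroup performance check: flag any subgroup whose
# sensitivity falls a set margin below the pooled sensitivity.
def sensitivity(records):
    """True-positive rate over (y_true, y_pred) pairs, where 1 = positive."""
    positives = [(t, p) for t, p in records if t == 1]
    return sum(p for _, p in positives) / len(positives)

def subgroup_gaps(records_by_group, max_gap=0.05):
    """Return subgroups whose sensitivity trails the pooled value by
    more than `max_gap` -- candidates for remediation."""
    pooled = sensitivity([r for recs in records_by_group.values() for r in recs])
    return {group: sensitivity(recs)
            for group, recs in records_by_group.items()
            if pooled - sensitivity(recs) > max_gap}

# Hypothetical two-site example: site_b misses two of three positives
data = {
    "site_a": [(1, 1), (1, 1), (1, 1), (0, 0)],
    "site_b": [(1, 1), (1, 0), (1, 0), (0, 0)],
}
print(subgroup_gaps(data))  # site_b is flagged for review
```

The same pattern extends to other subpopulations (age bands, device variants, acquisition settings), with the margin and the remediation response agreed on before deployment.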

If you're building medtech AI, do this next

  • Pick one use case where you can show measurable value in 90-180 days.
  • Define your safety case and monitoring plan before model training starts.
  • Line up two clinical partners for iterative testing and honest feedback.
  • Write the plain-language "model facts" sheet you'll share with users. If it sounds fuzzy, the product probably is.

The bottom line

Bhatia's bottom line: be intentional, be transparent, and be patient. Build for clinical reality, prove it with evidence, and keep improving once it's in the field.

