Before AI Treats Patients in Ghana, Prove It Works: Dr. Mercy Asiedu's Call for Equity and Transparency

Use AI to help patients and clinicians, but assess first and scale later; never deploy what you can't verify. Validate locally, keep humans in the loop, and monitor for bias and drift.

Published on: Nov 28, 2025

Responsible AI in Healthcare: Benefits, Risks, and What to Do Now

AI is moving fast, but healthcare can't afford shortcuts. At the 13th RP Baffour Memorial Lecture Series, U.S.-based Google Research scientist Dr. Mercy Nyamewaa Asiedu urged a measured approach: assess first, scale later, especially with generative AI and emerging superintelligent systems.

Her message was clear: use AI to help clinicians and patients, but don't deploy what you can't verify. The risk isn't abstract; it's clinical outcomes and patient safety.

The risks clinicians should care about

  • Inaccuracy and hallucinations: Generative models can produce confident but wrong outputs. In a clinical context, that's dangerous. "If it's been used in health areas, it must be carefully assessed because inaccurate results could endanger patients."
  • Bias and local performance: If training data doesn't reflect your population, results can skew care. "Bias and local performance need to be studied before AI is deployed."
  • Black-box behavior: Opaque systems limit accountability. "Many of these models still operate as a black box," which complicates oversight, incident review, and clinician trust.
  • Premature adoption: AGI and superintelligence are still hypotheses under active research. Explore them, but don't rush them into care pathways.

What to put in place before adoption

  • Algorithm Impact Assessment (AIA): As Council Chair Akyamfour Asafo Boakye Agyemang-Bonsu advised, push for pre-deployment reviews that confirm model training data aligns with local data and clinical context.
  • Local validation: Test against your patient mix, care standards, and devices. Track performance by subgroup (age, sex, ethnicity, comorbidities) and setting (primary care, ED, inpatient).
  • Human oversight: Keep a clinician-in-the-loop for high-risk decisions. No autonomous actions without a clear escalation path.
  • Transparency and traceability: Require model cards, data lineage, versioning, and decision logs. If you can't audit it, you can't own the risk.
  • Monitoring and incident handling: Define metrics, thresholds, drift checks, and a clinical safety case. Set up a clear incident reporting and rollback plan.
  • Consent and communication: Be explicit with patients when AI is used and how decisions are made.
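To make the local-validation point concrete, here is a minimal sketch of subgroup-level performance checking. All records, subgroup names, and numbers are hypothetical; a real assessment would use your own patient mix, devices, and care settings, and would cover far more metrics than sensitivity alone.

```python
from collections import defaultdict

# Hypothetical records: (subgroup, true_label, model_prediction),
# where 1 = condition present and 0 = condition absent.
records = [
    ("age_18_40", 1, 1), ("age_18_40", 1, 0), ("age_18_40", 0, 0),
    ("age_65_plus", 1, 1), ("age_65_plus", 1, 1), ("age_65_plus", 0, 1),
]

def sensitivity_by_subgroup(records):
    """Fraction of actual positives the model catches, per subgroup."""
    tp = defaultdict(int)    # true positives per subgroup
    pos = defaultdict(int)   # actual positives per subgroup
    for group, truth, pred in records:
        if truth == 1:
            pos[group] += 1
            if pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

print(sensitivity_by_subgroup(records))
# A large gap between subgroups is exactly the local-performance
# signal that should block deployment until it is understood.
```

Even this toy example shows why subgroup tracking matters: an aggregate number can look acceptable while one group is badly underserved.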

For regulatory context and stronger guardrails, see the FDA's resources on AI/ML-enabled medical devices and the WHO's guidance on ethics and governance of AI for health.

Collaboration matters

The Council Chair emphasized interdisciplinary work to build local models that fit local needs. Bring together clinicians, data scientists, informaticians, ethicists, and patient representatives early. That's how you catch blind spots before they reach the bedside.

Practical steps for the next 90 days

  • Inventory AI use (and intent): List pilots, vendor tools, and shadow projects. Classify by clinical risk.
  • Adopt a lightweight AIA checklist: Data sources, local fit, bias testing plan, human oversight, monitoring, and rollback.
  • Run a focused local validation: One workflow, one metric that matters, one subgroup analysis. Publish the results internally.
  • Stand up model monitoring: Decide on alert thresholds for drift and error spikes. Assign response owners.
  • Train your teams: Give clinicians simple guidelines on where AI helps, where it fails, and how to escalate concerns.
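The monitoring step above can be sketched in a few lines. This is an illustrative drift check, not a production design: the baseline mean, the window of recent scores, and the alert threshold are all invented here, and in practice the limits should come from your clinical safety case, with a named response owner on the other end of the alert.

```python
import statistics

# Hypothetical values: a baseline from local validation and a
# drift threshold chosen for illustration only.
BASELINE_MEAN = 0.42
ALERT_THRESHOLD = 0.10   # alert if the mean score drifts more than this

def check_drift(recent_scores, baseline_mean=BASELINE_MEAN,
                threshold=ALERT_THRESHOLD):
    """Return (drifted, shift) for a window of recent model scores."""
    shift = abs(statistics.mean(recent_scores) - baseline_mean)
    return shift > threshold, shift

drifted, shift = check_drift([0.61, 0.58, 0.65, 0.55])
if drifted:
    # In practice: notify the assigned response owner and weigh rollback.
    print(f"Drift alert: mean score shifted by {shift:.3f}")
```

Real deployments would also track error spikes, input distribution changes, and per-subgroup performance over time, but even a simple threshold like this forces the rollback conversation the article calls for.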

Dr. Asiedu encouraged researchers and practitioners to explore AGI and superintelligence thoughtfully, and to bring findings back to their communities. Curiosity is welcome. Clinical rigor is non-negotiable.


AI can reduce errors and extend capacity, but only with guardrails. Assess carefully, validate locally, monitor continuously, and keep clinicians in control.

