FDA’s Artificial Intelligence Push Sparks Debate Over Safety, Trust, and Transparency in Health Care

The FDA plans to use AI to speed drug and device approvals while reducing animal testing. Experts urge transparency, accountability, and human oversight to ensure safety and fairness.

Published on: Jun 14, 2025

FDA's Plans for Artificial Intelligence in Health Care

The U.S. Food and Drug Administration (FDA) is exploring the use of artificial intelligence (AI) to speed up decision-making in health-related fields. Dr. Marty Makary, the FDA commissioner, and Dr. Vinay Prasad, director of the Center for Biologics Evaluation and Research, recently outlined the initiative in the Journal of the American Medical Association.

Goals of AI Implementation

The FDA has not finalized how AI will be applied but suggests potential uses including:

  • Accelerating drug and device approvals
  • Reducing animal testing
  • Monitoring and addressing concerning ingredients in food

However, concerns have emerged from various experts about the risks involved with integrating AI into regulatory processes.

Concerns About AI in Health Care

Experts warn that speeding up approvals may compromise safety and efficacy standards. Jessica Malaty Rivera, an infectious disease epidemiologist, points out the contradiction between accelerating processes and ensuring thorough safety checks.

Transparency is another major issue. Elisabeth Marnik, a scientist and science communicator, notes the lack of clear information about how AI will actually be used within FDA reviews. Without that clarity, public trust in the system risks eroding.

Legal and Ethical Implications

Legal experts emphasize that AI use must comply with existing laws ensuring accountability. Stacey B. Lee, a law and ethics professor, highlights the danger of “black-box” AI models that obscure decision-making processes. This opacity can undermine due process and patient safety.

Accountability also remains unclear. If AI contributes to a faulty approval or a missed risk, responsibility could fall on software developers, FDA staff, or both, and the current regulatory framework has not yet caught up with these questions.

Concerns also extend to the potential misuse of AI in food ingredient evaluation. Some fear that misinformation and wellness marketing gimmicks could influence AI outputs, leading to unjustified scrutiny of safe ingredients.

Bias and Equity in AI

Bias in AI training data presents a significant risk. If trained on data that underrepresents certain groups, AI could perpetuate disparities in approvals or warnings. This could deepen existing inequities in health care.

Additionally, political factors such as banned terminology in research may limit AI’s ability to fairly assess applications, raising questions about how equity and unbiased review will be ensured.

Potential Benefits of AI

Despite these concerns, AI has promising applications when properly managed. For example, AI can analyze molecular structures in food safety assessments faster than human teams.

Experts agree AI can streamline data analysis and review large volumes of information, but caution against removing human oversight entirely. Proper validation and controls are critical before AI is fully integrated into federal health processes.

Ensuring Fairness and Accuracy

Guardrails must be established to guarantee AI functions reliably and fairly. According to Marnik, the FDA should treat AI implementation as a scientific process, validating that AI performs at least as well as human reviewers.
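The article does not describe how such validation would be done, but the idea of treating AI implementation as a scientific process can be sketched as a simple benchmark: score both the AI tool and human reviewers against known-correct outcomes on past cases, and only accept the tool if it is at least as accurate as the human baseline. The function, case data, and decision labels below are hypothetical illustrations, not anything the FDA has published.

```python
# Hypothetical validation harness: compare an AI screening tool's
# decisions against human reviewer decisions on the same labeled cases.
from typing import List

def accuracy(decisions: List[str], truth: List[str]) -> float:
    """Fraction of decisions that match the reference outcome."""
    assert len(decisions) == len(truth)
    return sum(d == t for d, t in zip(decisions, truth)) / len(truth)

# Toy benchmark: known-correct outcomes for eight past review cases
# (invented data for illustration only).
ground_truth = ["approve", "reject", "approve", "approve",
                "reject", "approve", "reject", "approve"]
human_calls  = ["approve", "reject", "approve", "reject",
                "reject", "approve", "reject", "approve"]
ai_calls     = ["approve", "reject", "approve", "approve",
                "reject", "reject", "reject", "approve"]

human_acc = accuracy(human_calls, ground_truth)  # 7/8 correct
ai_acc = accuracy(ai_calls, ground_truth)        # 7/8 correct

# Non-inferiority check: the tool must match or beat the human
# baseline before being trusted inside a review workflow.
print(f"human={human_acc:.3f} ai={ai_acc:.3f} "
      f"non_inferior={ai_acc >= human_acc}")
```

A real validation would of course need far larger case sets, pre-registered acceptance criteria, and subgroup analyses to catch the equity problems raised above, but the principle is the same: measure before deploying.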

Lee stresses the importance of transparency, independent audits, human oversight, and clear legal responsibility to maintain public trust. With public confidence in health institutions already fragile, the FDA’s AI pilot requires careful management to avoid undermining trust further.

Some experts remain skeptical of the current leadership’s commitment to evidence-based decision-making, noting the risk that AI tools could be shaped by agendas not grounded in rigorous science.

Overall, AI holds potential to support FDA functions, but only if implemented with transparency, accountability, and a clear focus on patient safety and equity.
