FDA’s AI Tools for Medical Device Reviews Face Technical Hurdles and Staff Concerns

The FDA’s new AI tool for medical device reviews, still in beta, struggles with document uploads and lacks real-time data access. Staff worry the rushed rollout risks safety and accuracy.

Published on: Jun 04, 2025

FDA’s AI Tool for Medical Device Review Faces Early Challenges

The Food and Drug Administration has introduced an AI tool aimed at accelerating the review and approval process for medical devices like pacemakers and insulin pumps. However, insiders reveal that the tool, still in beta, struggles with basic functions such as uploading documents and processing user queries. It currently lacks integration with the FDA’s internal systems and cannot access real-time online data, including recent studies or paywalled content.

What Is CDRH-GPT and Its Intended Role?

Known internally as CDRH-GPT, this AI is designed to assist staff at the FDA’s Center for Devices and Radiological Health (CDRH). This center oversees the safety of implanted devices and essential diagnostic tools like X-rays and CT scanners. The division has been impacted by recent Department of Health and Human Services layoffs, which reduced backend support crucial for timely approval decisions.

Reviewers at CDRH handle extensive data from animal studies and clinical trials, a process that can take months or even over a year. Ideally, AI could help streamline these reviews and reduce delays.

Concerns Over Rushed AI Implementation

Experts caution that the FDA’s rapid push to adopt AI may be ahead of what the technology can reliably deliver. Since April, FDA Commissioner Dr. Marty Makary has prioritized AI integration across the agency, and last month he set a June 30 rollout deadline.

Despite claims from leadership that the process is ahead of schedule, those familiar with CDRH-GPT say the tool requires significant improvements. Staff are worried that the aggressive timeline doesn’t align with the tool’s current capabilities.

Arthur Caplan, head of medical ethics at NYU Langone Medical Center, stresses the importance of human oversight in device reviews. He notes that AI is not yet advanced enough to thoroughly question or interact with applicants and that lives depend on accurate assessments.

Elsa: Another AI Tool with Mixed Performance

Alongside CDRH-GPT, the FDA recently launched “Elsa,” an AI assistant now available to all agency employees for basic tasks like summarizing adverse event reports. Commissioner Makary highlighted a case where Elsa completed in six minutes what previously took two to three days.

However, internal sources suggest Elsa still struggles with core functions and sometimes generates inaccurate or incomplete summaries when tested against FDA-approved product data.

Staff Reactions and Ethical Concerns

While AI tools have the potential to assist reviewers and scientists, many FDA staff view the rapid rollout with skepticism. Some fear AI could eventually replace human workers, especially given the agency’s layoffs, hiring freezes, and limited resources.

There are also ethical questions around conflicts of interest. Richard Painter, a law professor and former government ethics lawyer, questions whether safeguards exist to prevent FDA officials who use AI from holding financial ties to the companies that might benefit from these technologies. Ensuring independence is critical to maintaining the agency’s credibility.

Looking Ahead

The FDA’s AI initiatives are early-stage and require ongoing development and testing before they can fully support complex regulatory tasks. While the goal is to improve efficiency and reduce workloads, the technology still needs refinement to meet the high standards required for medical device safety and effectiveness.

Healthcare professionals interested in AI applications and training may find value in exploring dedicated AI courses tailored to healthcare and regulatory fields at Complete AI Training.