Akron hospitals adopt AI tools as ethicists warn of bias, overreliance risks

Akron-area hospitals are using AI to read radiology scans and identify strokes faster, but ethicists warn the technology is poorly regulated and may carry hidden risks. Experts cite bias in training data and physician overreliance as key concerns.

Categorized in: AI News, Healthcare
Published on: Mar 16, 2026

Hospitals Deploy AI, But Ethicists Warn Against Hasty Implementation

Akron-area hospitals are adopting artificial intelligence algorithms to interpret radiology reports, summarize medical findings, and identify stroke patients faster in emergency rooms. Yet as AI becomes more common in clinical practice, healthcare leaders and legal experts say the technology remains inadequately regulated and carries significant risks.

Christopher Congeni, a partner at the Amundsen Davis law firm's Cleveland office, said hospitals and physician groups must address bias, transparency, and the fundamental question of whether AI serves as a tool or a replacement for clinical judgment.

"Health care is very, very regulated, and that presents challenges because we're still trying to figure out how to regulate AI," Congeni said.

Training Data Can Amplify Existing Biases

Naomi Scheinerman, an assistant professor of bioethics at Ohio State University, said the data used to train AI models often reflects existing gaps in medical knowledge across different populations.

"We don't have the perfect image of knowledge in society of conditions and how they affect different populations and groups," she said. "We have disproportionate representation in the data of dominant, majoritarian groups."

Congeni said an AI algorithm could inadvertently amplify these biases, leading to discriminatory outcomes in patient care. Steve Worrell, CEO of Riverain Technologies in Miamisburg, whose algorithm is used by University Hospitals and the Cleveland Clinic, said his company prioritizes diversity when acquiring training data.

"It's really important when you train these systems that you have adequate representation of different patient populations," Worrell said. "Generally speaking, with these algorithms, the more data you have, the better."

Devora Shapiro, an associate professor of medical ethics at Ohio University, said inadequate vetting could harm patients. "As an ethicist, one might be concerned, are we harming patients potentially without a clear understanding of that risk-benefit profile?" she said.

Physicians May Over-Rely on AI Recommendations

Shapiro raised concerns that physicians could become dependent on AI systems and lose critical thinking skills over time. "There is a question of whether the use of artificial intelligence in practice over the long-term makes individuals, both in medicine, potentially, and in other areas, other professions, a little bit less quick with their critical thinking skills, with their precision and their attention," she said.

Dr. Po-Hao Chen, vice chair for artificial intelligence in the Diagnostics Institute at the Cleveland Clinic, said AI never makes a diagnosis independently. A physician oversees the process and makes the final decision.

Careful Rollout Reduces Risk

Shapiro said hospitals should implement AI deliberately and methodically. "I am concerned that we are not being as careful as we ought to be in the integration and implementation of artificial intelligence tools in hospitals and in medical practice," she said.

Dr. Leonardo Kayat Bittencourt, vice chair of innovation at University Hospitals, described a structured approach. The hospital uses what the industry calls "shadow mode": running the AI tool in the background for select users over weeks or months to monitor performance.

Experts then reconvene regularly to assess results before official implementation. Once satisfied, the hospital rolls out the tool to clinical production alongside staff education.
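The "shadow mode" evaluation described above can be illustrated with a minimal sketch (all names and data here are hypothetical, not drawn from any hospital's actual system): the AI's output is logged alongside the clinician's final call but never shown to the user, and agreement is tallied for review before any go-live decision.

```python
# Hypothetical sketch of "shadow mode" evaluation: the AI runs in the
# background, its output is logged but never shown to clinicians, and
# agreement with the clinician's final call is tallied before rollout.
from dataclasses import dataclass, field

@dataclass
class ShadowModeLog:
    records: list = field(default_factory=list)

    def log_case(self, case_id: str, ai_finding: str, clinician_finding: str) -> None:
        """Record both outputs; the AI result stays hidden from the clinician."""
        self.records.append({
            "case": case_id,
            "ai": ai_finding,
            "clinician": clinician_finding,
            "agree": ai_finding == clinician_finding,
        })

    def agreement_rate(self) -> float:
        """Fraction of cases where the AI and the clinician agreed."""
        if not self.records:
            return 0.0
        return sum(r["agree"] for r in self.records) / len(self.records)

# Hypothetical monitoring period: two cases, one agreement.
log = ShadowModeLog()
log.log_case("CT-001", "stroke suspected", "stroke suspected")
log.log_case("CT-002", "no acute finding", "stroke suspected")
print(f"Agreement: {log.agreement_rate():.0%}")  # reviewed before go-live
```

In practice the review step would involve far richer metrics (sensitivity, false-positive rate by patient subgroup), but the core design choice is the same: log everything, show nothing, and decide on rollout only after the monitoring period ends.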

"We have it very, very deeply in our mission and in our activities to continuously educate people," Bittencourt said.

Regulation Remains Incomplete

Congeni said inconsistent regulatory approaches across healthcare organizations pose a problem. He said clear compliance plans that define where AI begins and ends in clinical workflows are essential.

"The concern that people potentially have these days is that the use of AI has infiltrated more of [medical] practice than we ought to have," Shapiro said.

Scheinerman said the technology's potential remains promising if properly developed and deployed. "If we could get this technology to be super well-trained and effective, it could help speed up, be more accurate and faster and save lives," she said.


