Hospitals Deploy AI Diagnostic Tools as Adoption Doubles Among Doctors
About 80% of doctors used artificial intelligence on the job last year, double the rate from three years ago, according to a survey by the American Medical Association. The global market for AI in healthcare reached $39 billion and is expected to grow significantly over the next decade.
At South Shore Hospital in Weymouth, Massachusetts, critical care physician Sam Ash uses OpenEvidence, a free AI platform, to help diagnose patients. When a 40-year-old woman arrived unconscious with multiple medications nearby, Ash used the tool to determine whether her Depakote level required dialysis - a calculation he said he couldn't recall precisely from memory.
The AI provided an immediate answer: she didn't need dialysis. It also prompted Ash to check her heart and breathing, and to ask whether she had taken other medications.
"It sort of prompts you to remember - don't get fixated on this idea of the Depakote - remember to check to make sure that she didn't take something else," Ash said. He emphasized that the tool functions as a reference resource with links to published medical studies, not as a replacement for clinical judgment.
Privacy Concerns and Adoption Barriers
Doctors cite confidentiality as a major concern. The AMA survey found that keeping patients' information secure ranked among the top worries about AI adoption.
Ash acknowledged the risk. South Shore's policy prohibits doctors from entering protected patient details into OpenEvidence, though he said mistakes could happen. He noted that OpenEvidence complies with federal medical privacy laws, and that without compliant AI tools, doctors resort to less secure platforms like Google or ChatGPT.
"People are going to use Google or people are going to use ChatGPT, and that we really don't want," Ash said.
Joe-Ann Fergus, executive director of the Massachusetts Nurses Association, raised concerns about bias in AI training data. She said patients from underrepresented groups - including Black, Latino, Indigenous, LGBTQ, and women patients - risk being misdiagnosed if AI systems were trained on non-representative populations.
Nurses also worry about accountability if AI recommendations prove faulty, and whether over-reliance on automated tools could erode clinical skills.
Where AI Performs Best: Medical Imaging
AI shows its strongest results in radiology. At South Shore Hospital, doctors use AI to enhance CT scan images while reducing radiation exposure by 30-50%.
Dr. Ori Preis, chair of the hospital's diagnostic imaging department, compared two abdominal scans - one processed with AI, one without. The AI version was significantly clearer.
"The AI - through machine learning - has learned how to identify noise versus actual image and clean up the image, so we can obtain the same quality images at lower radiation doses," Preis said.
Dr. David Bates, executive director of the Center for Patient Safety Research and Practice at Brigham and Women's Hospital, said accuracy often improves with AI in radiology because the process becomes more efficient.
Limitations and Hallucinations
AI diagnostic tools have documented weaknesses. Researchers at Mass General Brigham found that some popular AI chatbots - including those operated by OpenAI and Meta - prioritized being helpful over being accurate when answering medical questions. The tools sometimes could not identify illogical queries, resulting in false responses.
Ash said AI often fails on complex cases where few online references exist. "Even though it's 2026, sometimes we still put up our hands and say, 'I'm just not sure,'" he said.
A study at Brigham and Women's Hospital found that harmful diagnostic errors occurred in one out of every 14 hospitalized patients. Patient safety experts see AI as a tool to reduce such errors, though governance remains incomplete.
Hospital Systems Race to Scale AI
Massachusetts' largest hospital systems - Mass General Brigham, Beth Israel Lahey Health, and UMass Memorial Health - have established boards and offices to oversee AI implementation. Federal and state oversight remains limited.
UMass Memorial Health in Worcester is investing $100 million in AI and building its own data platform to train AI tools. President and CEO Dr. Eric Dickson expects the system to expand from about 60 AI tools to 300 within 18 months.
Dickson plans to offer AI training to 21,000 UMass workers. He frames AI as "augmented" intelligence that should operate with humans in the lead.
"AI isn't a choice anymore," Dickson said. "It's coming. We can't shut it off and so our job as humans is to figure out how to leverage it for the benefit of our patients."
Dr. Tejal Gandhi, chief safety and transformation officer with healthcare management company Press Ganey, said governance is currently "very up in the air" and that clearer guidance is needed on regulation and implementation standards.