AEquity Tool Targets Bias in Healthcare AI to Improve Fairness and Accuracy
Mount Sinai’s AEquity tool detects and reduces bias in healthcare datasets, improving the fairness and accuracy of AI-driven decisions. It supports model audits and helps deliver better patient outcomes.

AEquity Enhances Training for Healthcare Machine Learning Algorithms
Artificial intelligence is increasingly integrated into healthcare, supporting tasks from disease diagnosis to financial and capacity planning. The success of these AI tools depends heavily on the quality and representativeness of the data used to train them. Recognizing this, researchers at the Icahn School of Medicine at Mount Sinai developed AEquity, a tool that detects and reduces bias in the datasets used to train machine learning models, improving both the accuracy and fairness of AI-driven healthcare decisions.
Why Addressing Bias Matters
Bias in healthcare datasets can take the form of uneven representation of demographic groups or clinical presentations that are skewed across populations. When AI models train on such data, they risk amplifying those distortions, leading to misdiagnoses or other unintended consequences.
Published in the Journal of Medical Internet Research, the study shows that AEquity identifies both known and previously unrecognized biases in various health data types, including images, patient records, and public health surveys. The tool assesses not only input data but also model outputs like risk scores and predicted diagnoses.
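To make the idea concrete, here is a minimal sketch of an output-level audit in the spirit described above: it compares a model's predicted risk scores across demographic groups and flags any group whose discrimination (AUROC) lags the best-performing group. This is an illustrative example, not AEquity's actual interface; the minimum group size and the 0.05 gap threshold are assumptions made for the sketch.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def audit_by_group(y_true, risk_scores, groups, max_gap=0.05):
    """Compare AUROC of predicted risk scores across demographic groups
    and flag any group trailing the best group by more than max_gap."""
    y_true, risk_scores, groups = map(np.asarray, (y_true, risk_scores, groups))
    aurocs = {}
    for g in np.unique(groups):
        mask = groups == g
        # Skip groups too small, or with a single outcome class, to score reliably.
        if mask.sum() < 30 or len(np.unique(y_true[mask])) < 2:
            continue
        aurocs[g] = roc_auc_score(y_true[mask], risk_scores[mask])
    if not aurocs:
        return {}, []
    best = max(aurocs.values())
    flagged = [g for g, auc in aurocs.items() if best - auc > max_gap]
    return aurocs, flagged

# Hypothetical usage: audit a binary-outcome model's scores by patient group.
# aurocs, flagged = audit_by_group(labels, model.predict_proba(X)[:, 1], race)
```

In practice, an audit like this would run alongside dataset-level checks, since a gap in subgroup performance often traces back to under-representation or label bias in the training data.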
Mount Sinai researchers emphasize AEquity’s adaptability across different machine learning models and dataset sizes, making it a practical resource for healthcare organizations and AI developers. As Dr. Faris Gulamali explains, the goal is to help developers and health systems detect bias early and take steps to mitigate it, ensuring AI tools serve all patient groups effectively.
Practical Implications for Healthcare AI
- AEquity can be integrated during algorithm development to assess dataset fairness before deployment.
- It supports audits that help regulators and researchers verify model equity and accuracy.
- By addressing bias at the dataset level, healthcare providers can improve patient outcomes and build trust in AI technologies (a simple form of such a check is sketched below).
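As a concrete illustration of a dataset-level check, the sketch below flags demographic groups that are under-represented in a training cohort relative to a reference population. Again, this is not AEquity itself; the reference proportions and the 20% relative tolerance are assumptions chosen for the example.

```python
from collections import Counter

def representation_gaps(group_labels, reference_props, tolerance=0.2):
    """Flag groups whose share of the dataset falls more than `tolerance`
    (relative) below their share of the reference population."""
    n = len(group_labels)
    counts = Counter(group_labels)
    gaps = {}
    for group, expected in reference_props.items():
        observed = counts.get(group, 0) / n
        if observed < expected * (1 - tolerance):
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Example: a 1,000-patient cohort audited against census-style proportions.
cohort = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
print(representation_gaps(cohort, {"A": 0.6, "B": 0.3, "C": 0.1}))
# -> {'C': {'observed': 0.05, 'expected': 0.1}}
```

A failed check like this is a prompt to collect more data for the flagged group, reweight or resample the cohort, or at minimum document the limitation before the model is deployed.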
The research, funded by the National Center for Advancing Translational Sciences and the National Institutes of Health, involved a multidisciplinary team including clinical and AI experts.
Mount Sinai’s Ongoing Commitment to AI in Health
Mount Sinai has a history of advancing AI in healthcare. In 2024, it launched the Center for AI and Human Health, building on efforts such as AI-enabled pediatric care initiatives from earlier in the year. Projects currently underway include AI solutions for sleep disorder detection, closing care gaps, and reducing AI hallucinations.
Insights from Leadership
Dr. Girish N. Nadkarni, Chief AI Officer at Mount Sinai Health System, highlights that while tools like AEquity are crucial, they address only part of a broader need for improved data collection and interpretation practices. “The foundation matters, and it starts with the data,” he says.
Dr. David L. Reich, Chief Clinical Officer at Mount Sinai, adds that correcting bias at the dataset level addresses issues before they affect patient care. This approach helps build community trust and ensures that AI-driven innovations benefit all patients, not just those best represented in the data.