Global Patterns, Opportunities, and Inequities in Artificial Intelligence and Deep Learning Research in Medicine
Since 2017, AI medical research publications have surged, led by the USA, China, and Europe. Yet disparities persist, highlighting the need for global collaboration and equity.

Research on Artificial Intelligence, Machine and Deep Learning in Medicine: Global Characteristics, Readiness, and Equity
Abstract
Background
Artificial intelligence (AI) is set to fundamentally affect medical research and healthcare. While its benefits are clear, concerns about risks and ethical issues have sparked intense debate. There is an urgent need for informed action grounded in scientific evidence.
Methods
This study examined global research trends in AI in medicine, analyzing publication patterns over time and across countries. It also assessed national readiness for AI by considering socioeconomic factors to pinpoint incentives and barriers in worldwide research efforts.
Results
Since 2017, there has been a sharp rise in AI-related medical publications. Leading contributors include the USA, China, the UK, Germany, and South Korea. Publication output correlates strongly with a country's economic strength and innovation capacity. However, citation data reveal a gap disadvantaging many Global South countries compared to North America and Europe. Interestingly, several emerging economies—such as Jordan, Pakistan, Egypt, Bangladesh, and Ethiopia—show promise for increased AI medical research in the future.
Conclusion
The findings highlight the need for enhanced global collaboration to promote equitable AI development in medicine. To avoid exacerbating regional and racial disparities, AI systems developed in both economically strong and weaker countries must adhere to fair and inclusive standards.
Background
The Center for AI Safety recently issued a statement, signed by leading AI experts including the CEOs of OpenAI and Google DeepMind, warning that mitigating AI risks should be a global priority on par with pandemics and nuclear threats. Regulators are acting as well: in 2024 the EU enacted the Artificial Intelligence Act, which regulates AI products according to their risk level and bans those deemed unacceptable, such as manipulative AI systems.
AI’s rapid advancement has transformed many areas of medicine and healthcare. It leverages big healthcare data, such as electronic medical records, imaging, and clinical notes, to automate processes, assist clinicians, and deepen insights into complex diseases. Applications range from robotic surgery and diagnostics to genetically informed personalized therapies and pandemic response support.
AI is particularly adept at identifying complex patterns and anomalies in imaging modalities such as X-ray, CT, and MRI, supporting earlier and more accurate diagnosis of cancer and of neurological and cardiac conditions. AI also accelerates drug development and plays a growing role in risk management and clinical decision-making.
Despite these benefits, AI introduces significant risks. Ethical concerns include transparency, accountability, potential bias, data privacy, and safety. AI systems often function as “black boxes,” making it hard for clinicians to verify decisions. There are fears of privacy breaches, social injustice, racial biases, dehumanization, and lack of informed consent.
AI’s effectiveness depends on large datasets, which raises issues around data security, hacking, and confidentiality, particularly for sensitive medical information. Underrepresentation or misrepresentation of minorities and gender groups in training data can lead to biased outcomes, whether intentional or not, undermining fairness.
Integrating AI into legacy healthcare systems is challenging due to inconsistent or low-quality data that can result in incorrect, potentially harmful decisions. Additionally, lethal autonomous weapons and job displacement pose broader societal risks linked to AI misuse.
Addressing these challenges requires global policies, ethical guidelines, and governance frameworks that promote equity, curb bias, and prevent misinformation. Scientific evidence must guide these efforts, involving all stakeholders, from researchers and clinicians to policymakers and ethicists. In particular, weaker economies must be included in global AI governance so that medical AI databases become balanced and representative.
Large multimodal AI models (LMMs) used across healthcare can propagate bias and misinformation, especially relating to race, ethnicity, gender, and age. While universities and academic centers often have ethical oversight, private companies may not, creating gaps in regulation. “Ethics dumping” occurs when high-income country researchers exploit less regulated environments in low- and middle-income countries, sometimes using data for commercial gain without adequate protections.
Research Ethics Committees (RECs) or Institutional Review Boards (IRBs) are vital for overseeing data sharing and ethical AI use but currently face challenges adapting to AI’s complexity, especially in the U.S. There is a pressing need to update governance instruments and clearly allocate responsibility among governments, international bodies, healthcare workers, and tech providers.
Despite these hurdles, AI offers an opportunity to improve medicine significantly. To support stakeholders, this study analyzed global AI medical research publications, uncovering patterns, gaps, and readiness levels. It also considered economic and innovation factors influencing AI development and discussed implications for global equity.
Methods
Methodological Platform
The study used the NewQIS (New Quality and Quantity Indices in Science) platform, a bibliometric tool combining publication data analysis with geographic visualization techniques. NewQIS maps data onto world maps that resize countries according to publication metrics, revealing global research patterns. The Web of Science Core Collection served as the primary data source for publication and citation information.
Search Strategy and Data Collection
Publications were identified using the definitions from the Singapore Computer Society: AI includes machine learning (ML), and ML includes deep learning (DL). The search targeted terms "artificial intelligence," "machine learning," and "deep learning" in article titles combined with medical research categories. To capture articles using the abbreviation “AI,” a second search combined "AI" in titles with "Artificial Intelligence" in abstracts or keywords, filtered to medical fields to avoid irrelevant results.
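For illustration, this two-step strategy can be written out as Web of Science advanced-search query strings. TI, AB, AK, and WC are standard field tags (title, abstract, author keywords, and category), but the category list and any further limits below are assumptions about the general shape of the query, not the study's exact filters.

```python
# Illustrative sketch of the two-step search described above, expressed as
# Web of Science advanced-search query strings. The medical category subset
# here is hypothetical; the authors' exact filters are not reproduced.

medical_categories = (
    '("Radiology, Nuclear Medicine & Medical Imaging" OR "Oncology" '
    'OR "Medicine, General & Internal")'
)

# Step 1: AI/ML/DL terms in the title, restricted to medical categories.
query_1 = (
    'TI=("artificial intelligence" OR "machine learning" OR "deep learning") '
    f'AND WC={medical_categories}'
)

# Step 2: the abbreviation "AI" in the title, confirmed by the spelled-out
# term in the abstract or author keywords, again limited to medical fields.
query_2 = (
    'TI=("AI") AND (AB=("artificial intelligence") OR AK=("artificial intelligence")) '
    f'AND WC={medical_categories}'
)

print(query_1)
print(query_2)
```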
Data Processing
Metadata from identified articles were cleaned and standardized. Country and institution names were unified for accurate geographic analysis. Socioeconomic data such as population size, GDP, and innovation indices were integrated from UNESCO and other sources. The Governmental AI Readiness Index (GAIRI) provided insights into countries’ capacities to deploy AI effectively.
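A minimal sketch of what the cleaning and merge step might look like, assuming publication records with free-text country fields and a separate indicator table. The alias map, column names, and values below are illustrative placeholders, not data from the study.

```python
import pandas as pd

# Map common spelling variants onto one canonical country name (illustrative).
COUNTRY_ALIASES = {
    "USA": "United States",
    "Peoples R China": "China",
    "England": "United Kingdom",
}

# One row per article-country assignment, as it might appear in raw metadata.
pubs = pd.DataFrame({
    "article_id": [1, 2, 3, 4],
    "country": ["USA", "Peoples R China", "United States", "England"],
})

# Socioeconomic indicators per harmonised country; values are placeholders.
indicators = pd.DataFrame({
    "country": ["United States", "China", "United Kingdom"],
    "population_millions": [335.0, 1412.0, 68.0],
    "gdp_billion_usd": [27000.0, 18000.0, 3300.0],
})

# Unify country spellings so counts are not split across name variants.
pubs["country"] = pubs["country"].str.strip().replace(COUNTRY_ALIASES)

# Aggregate article counts per country and attach socioeconomic context.
counts = pubs.groupby("country").size().rename("articles").reset_index()
merged = counts.merge(indicators, on="country", how="left")
print(merged)
```

This sketch uses whole counting (each article-country pair counts once per country); fractional counting across co-authoring countries is another common bibliometric choice.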
Analyses
The study normalized publication output by country population and GDP to enable fair comparisons. Correlation analyses assessed links between economic strength, innovation, and AI in medicine publications. Network analyses examined international and institutional collaborations. Title words and keywords were clustered to identify research focus areas across medical disciplines.
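Continuing the sketch above, the normalization and correlation step could look like the following; the column names remain the same illustrative assumptions.

```python
from scipy.stats import spearmanr

# Normalise output by population and GDP to allow fairer cross-country comparison.
merged["articles_per_million"] = merged["articles"] / merged["population_millions"]
merged["articles_per_gdp"] = merged["articles"] / merged["gdp_billion_usd"]

# Rank correlation between economic strength and raw publication output.
rho, p = spearmanr(merged["gdp_billion_usd"], merged["articles"], nan_policy="omit")
print(f"GDP vs. article count: Spearman rho = {rho:.2f} (p = {p:.3g})")
```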
Methodological Limitations
As with all bibliometric studies, limitations exist. The Web of Science does not index all journals, so some research may be missed. Search terms balance inclusiveness with specificity, so some relevant works might be excluded and some irrelevant ones included. Citation counts can be biased by errors or strategic citing. Metadata inaccuracies may affect individual data points but are unlikely to change the overall trends.
Results
The search retrieved 29,192 original research articles on AI in medicine.
Research Focus
Analysis of article titles identified nearly 20,000 distinct words; the most frequent were linked to specific medical topics and methods. Medical imaging and oncology/radiotherapy dominated, followed by public health, diagnostic prediction, and cardiology. Web of Science categories confirmed radiology and medical imaging as the most common fields, alongside neurology, internal medicine, oncology, and healthcare services. Cluster analysis revealed five main research groups centered on AI, machine learning, deep learning, and their medical applications.
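The article does not describe the exact clustering tooling, so the sketch below uses a generic stand-in, TF-IDF vectorization of titles followed by k-means, purely to illustrate how title-word clusters can be derived; the example titles and cluster count are invented.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented example titles; in the study, all retrieved article titles would be used.
titles = [
    "Deep learning for tumor segmentation in MRI",
    "Machine learning prediction of cardiac risk",
    "Artificial intelligence in public health surveillance",
    "Convolutional networks for chest X-ray diagnosis",
    "Machine learning models for oncology outcome prediction",
]

# Represent each title by weighted word frequencies, then group similar titles.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(titles)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for title, label in zip(titles, km.labels_):
    print(label, title)
```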
Publication Trends
The first indexed AI medical article appeared in 1969. Publication numbers remained low until 2014, then began a noticeable increase. From 2017 onward, annual output surged from 435 articles to 9,218 in 2022, a more than twentyfold rise in five years. Citations rose sharply from 2015 and peaked in 2020 before declining somewhat by 2022, a dip that likely reflects, at least in part, the citation lag affecting the most recent publications.
This rapid growth reflects heightened interest and investment in AI-driven medical research worldwide. However, citation patterns indicate disparities in influence and recognition, favoring developed countries. Emerging economies demonstrate positive trends, suggesting potential for growth with appropriate support and inclusion in global AI initiatives.
Understanding these global dynamics is essential for shaping policies and collaborations that foster equitable and effective AI integration into medicine.