Gender Biases in Google's AI Compromise Healthcare, LSE Study Finds
Artificial intelligence is increasingly integrated into critical sectors such as healthcare and social care. However, concern is growing that the capabilities, biases, and implications of these systems are not yet fully understood. A recent study from the London School of Economics (LSE) highlights significant gender bias in Google's Gemma AI model, particularly in social care assessments used by English councils.
Unequal Treatment in Social Care Assessments
The LSE research examined how large language models (LLMs) interpret case notes that are identical except for the gender of the subject. The findings reveal that Gemma consistently describes men's health and social care needs in more serious and urgent language than women's, even when the conditions are the same.
For example, when processing information about an 84-year-old living alone with mobility issues, Gemma described the male case as having “a complex medical history, no care package and poor mobility.” In contrast, the female case was framed as “independent and able to maintain her personal care,” despite identical needs.
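To make the experimental setup concrete, here is a minimal sketch of the kind of paired-prompt probe the researchers describe, assuming a locally hosted Gemma model accessed through the Hugging Face transformers library. The model ID, prompt wording, and case details below are illustrative assumptions, not the study's actual materials.

```python
# Illustrative sketch only: probe an LLM with two case notes that differ
# solely in the subject's gender, then compare the generated summaries.
# The model ID, prompt template, and case details are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-2b-it")

CASE_TEMPLATE = (
    "Summarise the care needs described in this case note:\n"
    "{name} is an 84-year-old living alone with poor mobility, a complex "
    "medical history and no care package in place."
)

prompts = {
    "male": CASE_TEMPLATE.format(name="Mr Smith"),
    "female": CASE_TEMPLATE.format(name="Mrs Smith"),
}

for gender, prompt in prompts.items():
    # Deterministic decoding so the only varying input is the gendered name.
    result = generator(prompt, max_new_tokens=120, do_sample=False)
    print(f"--- {gender} ---")
    print(result[0]["generated_text"])
```

Comparing the two outputs side by side, and repeating the run across many case notes, is the kind of controlled comparison that lets researchers attribute language differences to gender alone.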
Consequences for Care Allocation
This bias could directly affect how care is allocated. Social workers increasingly rely on AI tools to manage heavy caseloads, and because care decisions are based on perceived need, biased outputs could leave women receiving less support for identical needs.
The study found terms like “disabled,” “unable,” and “complex” were used more frequently in descriptions of men’s cases. Similarly, a male subject might be labeled “unable to access the community,” while the female equivalent is seen as “able to manage her daily activities,” despite identical circumstances.
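As a rough illustration of how such differences in wording might be quantified, the sketch below counts a few severity-laden terms in summaries generated for male and female versions of a case. The term list and example summaries are placeholders for demonstration, not the study's data.

```python
# Illustrative sketch: count severity-laden terms in paired model outputs.
# The term list and example summaries are placeholders, not study data.
import re
from collections import Counter

SEVERITY_TERMS = ["disabled", "unable", "complex"]

summaries = {
    "male": ["He has a complex medical history and is unable to access the community."],
    "female": ["She is independent and able to maintain her personal care."],
}

def term_counts(texts):
    """Count occurrences of each severity term across a list of texts."""
    counts = Counter({term: 0 for term in SEVERITY_TERMS})
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        for term in SEVERITY_TERMS:
            counts[term] += words.count(term)
    return counts

for gender, texts in summaries.items():
    print(gender, dict(term_counts(texts)))
```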
Model Differences and Calls for Regulation
Not all AI models showed this bias. Meta’s Llama 3, for instance, did not display significant gender-based language differences when processing the same notes. This suggests that gender bias is not unavoidable in AI but depends on design and training.
The LSE researchers stress the need for ongoing testing and transparency in AI systems, especially those deployed in sensitive fields like healthcare and social services. They urge regulators to mandate bias measurement in AI tools used for long-term care to ensure fairness and prevent harm.
Tami Hoffman, Director of Public Policy at the Guardian, summarized it well: “Responsible AI can produce fantastic outcomes, but embedding old prejudices in our digital future is not a 'productivity gain.'”
Industry Response and Broader Concerns
Google acknowledged the study and said its teams would review the findings. The company noted that the research was based on the first-generation Gemma model, not the current version, and emphasized that Gemma is not intended for medical use. This highlights the need for clear guidelines on where and how AI should be applied, especially in healthcare settings that serve vulnerable populations.
This issue is part of a wider pattern. A separate US study found that 44% of 133 AI systems displayed clear gender bias, with 25% showing both gender and racial biases. Such biases risk reinforcing existing inequalities in healthcare.
Jen Fenner, Co-Founder and Managing Director at DefProc Engineering, points out the risks: “Gender health gaps already harm outcomes, from reduced access to services to misdiagnoses and the all-too-common experience of not being heard. Without transparency and rigorous bias testing, AI risks reinforcing the inequalities it should be helping to eliminate.”
What Healthcare Professionals Should Know
- AI tools in social care and healthcare may carry hidden gender biases affecting assessment outcomes.
- Bias can lead to unequal care allocation, disadvantaging women even when needs are identical.
- Not all AI models have the same bias levels; ongoing evaluation and transparency are essential.
- Healthcare organizations should demand clear evidence of bias testing before deploying AI tools.
For healthcare professionals looking to understand AI’s role and limitations better, exploring focused AI training can be valuable. Resources such as Complete AI Training’s healthcare-specific courses provide practical insights into AI applications and ethical considerations.
Ensuring fairness in AI-assisted healthcare means advocating for rigorous testing and regulation—and staying informed about the technology shaping patient care decisions.