Berkeley Law researcher finds AI headshot apps remove hijab in tests across 25 platforms

A Berkeley Law researcher tested 25+ AI headshot apps and found every one removed her hijab. The findings expose a gap in anti-discrimination law: who's liable when an algorithm erases religious identity?

Published on: Mar 31, 2026

Berkeley Law Researcher Finds AI Headshot Apps Systematically Remove Hijabs

Mahwish Moazzam tested more than 25 AI headshot generators and found that every single one removed her hijab from the generated images. The discovery raises urgent questions about how anti-discrimination law applies when algorithms, not people, make decisions that erase visible religious identity.

Moazzam, a J.S.D. candidate at Berkeley Law, uploaded selfies to widely used AI image tools expecting professional portraits. Instead, the software stripped away her hijab without explanation or user control.

"Some of the apps even prompted users to decide whether they wanted to keep accessories such as glasses," Moazzam said. "None asked whether the hijab should remain."

The pattern remained consistent over roughly a year of testing. Only two apps produced mixed results, showing distorted or incomplete head coverings. The rest removed the hijab entirely.

Why This Matters to Legal Systems

For lawyers and policymakers, the issue cuts deeper than aesthetics. The hijab is a visible expression of religious identity. When algorithms systematically remove it, the question becomes: does this constitute religious discrimination?

Traditional anti-discrimination law was built for identifiable human decision-makers. Courts can examine intent, motivation, and patterns of behavior. Algorithms create a different problem.

"Traditional anti-discrimination law was designed for identifiable human decision-makers," Moazzam said. "Now we must ask how those laws apply when the decision-maker is an algorithm, where intent cannot easily be established and outcomes are difficult to explain."

The research highlights three specific legal challenges. First, AI systems can distort identity in subtle ways that existing law may not recognize. Second, they reproduce discrimination at scale through biased training data. Third, they create accountability gaps: when a hijab disappears, who bears legal responsibility?

A developer may work in California. The app distributes globally through app stores. The person harmed may live elsewhere. This cross-border structure makes legal accountability complex.

A Question for Emerging Legal Frameworks

Moazzam's work sits at the intersection of AI for Legal Professionals and human rights law. Her broader research examines how legal systems allocate responsibility when new technologies create new forms of harm.

The headshot app discovery emerged from casual observation. She tried one app, noticed the hijab was gone, tried another, then another. After testing dozens of applications, the pattern became clear.

"It started very casually," she said. "I kept seeing advertisements on social media for AI headshot applications. Out of curiosity, I tried one. The images looked very professional, but my hijab had disappeared."

What makes this research replicable is that the apps are publicly available. Other researchers and journalists can test the same tools to see how AI systems handle visible markers of identity across different demographics and religious expressions.

The Broader Research Context

Moazzam came to Berkeley Law in 2019 to pursue an LL.M. in international law after teaching constitutional law and human rights in Pakistan. She stayed to pursue doctoral research examining how legal systems translate human rights commitments into actual protection.

Her work on AI focuses on a fundamental accountability problem: when multiple actors are involved (dataset creators, developers, platform distributors, end users), who is legally responsible for algorithmic harm?

Berkeley Law faculty supervising the research say it demonstrates how rapidly AI can raise new questions about identity and dignity. Professor Kathryn Abrams, Moazzam's J.S.D. supervisor, noted that when Moazzam encounters a surprising fact, whether a legal case or an unexpected AI output, she follows the evidence until she can form substantive questions.

The research also raises a straightforward policy question: as AI image tools spread across social media and professional platforms, are legal systems prepared to recognize and address these harms?

"Every day we see new examples of AI harm," Moazzam said. "The real question is whether our legal systems are ready to recognize those harms and respond to them."
