AI Isn't Neutral: How Generative Images Supercharge Racist Harassment - And What Builders Can Do
In 2022, minutes after a public post about the fall of Roe v. Wade, trolls flooded a Black nonbinary researcher's account. They attacked her identity, her expertise, and the legitimacy of years of work on digital misogynoir - the blend of anti-Black racism and sexism that targets Black nonbinary, agender, and gender-variant people online.
The clap-backs worked in the moment. But the hate kept resurfacing. Then a new multiplier arrived: generative AI.
The shift from words to images
Text-based harassment spreads fast. AI-generated images spread faster and hit harder. Anyone can create a "believable" picture in seconds, then push it across networks before truth can catch up.
Consider civil rights attorney Nekima Levy Armstrong. After she was arrested at a church protest, an AI-altered image that portrayed her as frightened and crying, with noticeably darkened skin, circulated widely and, according to reports, was tied to official communications. Regardless of who pressed send, the doctored photo reached millions and framed a Black woman as weak and hysterical - a stereotype with a long history and real consequences.
Why this harm lands harder
Racist tropes don't live in a vacuum. They shape reactions at work, in medicine, and in academia. Dark-skinned Black people, in particular, face harsher snap judgments rooted in assumptions of "toughness" and "thick skin," or in "surprise" at their language skills - biases that raise maternal mortality risk and undermine careers.
AI adds scale and speed to those old narratives. Models learn from us: our datasets, our prompts, and our blind spots. If we don't address the source, the system just keeps copying the harm.
What IT and dev teams can do now
- Data and evaluation
- Curate training data with consent and representation; filter hateful content and stereotype-rich sources.
- Run disaggregated tests by skin tone, gender identity, and context; publish model cards and bias findings (a minimal evaluation sketch follows this list).
- Set up bias bounties and external red teaming focused on anti-Black and gendered harms.
- Safety by design
- Gate image-edit features that darken skin, add tears, or sexualize bodies; block prompts targeting protected groups with demeaning edits (see the prompt-gating sketch after this list).
- Layer input/output filters for harassment, dehumanization, and doxxing; log and review high-risk generations.
- Authenticity and provenance
- Adopt C2PA content credentials and visible labels; verify and preserve provenance on upload and re-share (see the provenance sketch after this list).
- Detect known watermarks and cryptographic signatures; add friction to sharing unverified images.
- Product friction and reporting
- Rate-limit the virality of unverified image chains; cap forwards; prompt users to review before resharing (see the forward-cap sketch after this list).
- Make reporting one-tap; route misogynoir cases to specialized moderators with clear SLAs.
- Ops and incident response
- Create runbooks for deepfake incidents: takedown, correction notice, outreach to targets, and public timeline updates (a checklist sketch follows this list).
- Publish transparency reports on harassment metrics, fixes shipped, and open issues.
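Here is a minimal sketch of disaggregated evaluation in Python. Everything in it is illustrative: the `examples` records, the `subgroup` field, and the `predict` callable stand in for whatever your eval harness actually provides.

```python
from collections import defaultdict

def disaggregated_error_rates(examples, predict):
    """Error rates per subgroup instead of one aggregate score.

    `examples`: iterable of dicts with "input", "label", and "subgroup"
    (e.g. skin tone crossed with gender identity). `predict`: any
    callable mapping an input to a label. Both are placeholders.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for ex in examples:
        group = ex["subgroup"]
        totals[group] += 1
        if predict(ex["input"]) != ex["label"]:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def gaps_to_report(rates, tolerance=0.05):
    """Subgroups whose error rate exceeds the best-performing group by
    more than an agreed tolerance; ship this table with the model card."""
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > tolerance}
```

The point is the shape of the report: per-group numbers and explicit gaps, not a single averaged score that hides them.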
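A gating layer can sit in front of image-edit endpoints. This sketch uses keyword patterns purely for illustration; the pattern list, the `targets_real_person` flag, and the block/review/allow policy are all assumptions, and a production system would pair trained classifiers with human review.

```python
import re

# Illustrative patterns only; keyword lists both over- and under-block,
# so treat them as one signal among several, not the whole filter.
DEMEANING_EDIT_PATTERNS = [
    re.compile(r"\b(darken|blacken)\b.{0,40}\bskin\b", re.IGNORECASE),
    re.compile(r"\bmake\s+(them|her|him)\s+(cry|crying|hysterical)\b",
               re.IGNORECASE),
]

def gate_edit_prompt(prompt: str, targets_real_person: bool) -> str:
    """Return "block", "review", or "allow" for an image-edit prompt."""
    if any(p.search(prompt) for p in DEMEANING_EDIT_PATTERNS):
        # Demeaning edits of an identifiable person are blocked outright;
        # other matches go to human review and get logged.
        return "block" if targets_real_person else "review"
    return "allow"
```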
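For provenance, the policy matters more than any one SDK. In this sketch, `read_credentials` is a stand-in for a real C2PA manifest reader and the `signature_valid` key is invented for illustration; only the Pillow EXIF call is a real API.

```python
from PIL import Image  # pip install pillow

def has_any_metadata(path: str) -> bool:
    """Cheap first pass: a fully stripped image is a weak warning sign,
    though missing metadata alone proves nothing."""
    with Image.open(path) as img:
        return bool(img.getexif())

def provenance_action(path: str, read_credentials) -> str:
    """`read_credentials` is a placeholder for a C2PA reader; it should
    return a manifest dict or None. The policy it supports: label
    verified media, add sharing friction to everything else."""
    manifest = read_credentials(path)  # hypothetical helper
    if manifest and manifest.get("signature_valid"):  # illustrative key
        return "label-verified"
    if not has_any_metadata(path):
        return "add-friction"  # no provenance, no metadata: slow the share
    return "unverified"
```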
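Friction can be as simple as a sliding-window forward cap per image chain. The thresholds below are arbitrary placeholders; tune them against your own abuse data.

```python
import time

class ForwardCap:
    """Once an unverified image chain exceeds the cap within the window,
    reshares get an interstitial instead of one-tap forwarding."""

    def __init__(self, max_forwards: int = 25, window_seconds: int = 3600):
        self.max_forwards = max_forwards
        self.window = window_seconds
        self._events: dict[str, list[float]] = {}

    def allow_forward(self, item_id: str, verified: bool) -> bool:
        if verified:  # provenance-verified media is not throttled
            return True
        now = time.time()
        recent = [t for t in self._events.get(item_id, [])
                  if now - t < self.window]
        recent.append(now)
        self._events[item_id] = recent
        return len(recent) <= self.max_forwards
```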
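A runbook is only useful if every step is tracked to completion. The step names below simply mirror the bullets above; rename them to match your own process.

```python
from dataclasses import dataclass, field

RUNBOOK_STEPS = ("takedown", "correction_notice",
                 "target_outreach", "public_timeline")

@dataclass
class DeepfakeIncident:
    """Tracks one incident against the runbook; nothing closes until
    every step is marked done."""
    incident_id: str
    done: dict = field(
        default_factory=lambda: {s: False for s in RUNBOOK_STEPS})

    def complete(self, step: str) -> None:
        if step not in self.done:
            raise ValueError(f"unknown runbook step: {step}")
        self.done[step] = True

    def open_steps(self) -> list:
        return [s for s, finished in self.done.items() if not finished]
```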
For deeper technical and policy training, see Generative AI and LLM.
Educate early and often
If young people grow up thinking AI images are "just pictures," we've already lost. Teach the history of racist tropes, how they resurface in data, and how to spot manipulative edits.
- Build AI/media literacy into curricula with real cases, including harms to Black nonbinary, agender, and gender-variant people.
- Use classroom labs to test bias in prompts, critique outputs, and design safer alternatives.
- Establish clear reporting pathways when students encounter AI-amplified harassment.
Resources to get started: AI for Education.
Digital hygiene for everyone
- Pause before you post: run a reverse image search, check for content credentials, and compare with trusted coverage (see the hash-comparison sketch after this list).
- Look for tells: inconsistent lighting, warped text, unnatural hands, off reflections, or metadata gaps.
- Add context, or don't share: if you can't verify, don't amplify. If you must share, label the uncertainty.
- Use platform controls: block, filter, report. Document harassment and escalate through official channels.
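One concrete way to "compare with trusted coverage" is a perceptual hash, which survives re-encoding and light edits far better than a byte-for-byte check. This sketch uses the Pillow and imagehash libraries; the distance threshold is a rough starting point, not a calibrated value.

```python
from PIL import Image  # pip install pillow imagehash
import imagehash

def likely_same_photo(suspect_path: str, reference_path: str,
                      threshold: int = 8) -> bool:
    """Small Hamming distance between perceptual hashes suggests the
    same underlying photo, possibly recompressed or lightly edited;
    a large distance means the images genuinely differ."""
    suspect = imagehash.phash(Image.open(suspect_path))
    reference = imagehash.phash(Image.open(reference_path))
    return (suspect - reference) <= threshold
```

A match with a trusted original that lacks the edit is strong evidence of manipulation; no match just means you still haven't verified anything.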
Accessibility and nuance
AI can help people with disabilities brainstorm, organize, and work through brain fog. That's real and valuable. It can also harm Black users when models replicate racist patterns. Treat both truths seriously in product and policy decisions.
Accountability beyond platforms
- Vet vendors and models for civil rights impacts; require clear model cards and audit rights in contracts.
- Partner with independent experts like the Distributed AI Research Institute and the Algorithmic Justice League.
- Hold institutions - including governments - to the same standards you set for your own product.
Call it what it is
The racist use of AI to target Black nonbinary, agender, and gender-variant people is real. Images move hearts and policies - fast. Educate, build guardrails, add friction, and enforce consequences.
Call a thing a thing. Then fix it with code, process, and the will to be accountable.