AI "poverty porn" is creeping into aid campaigns. Here's how to stop it
AI-generated images of extreme poverty, children, and sexual violence survivors are now common on stock sites - and they're showing up in NGO campaigns. Professionals in global health and communications are calling this the new wave of "poverty porn."
The drivers are familiar: tight budgets, consent concerns, and speed. The result is predictable: photorealistic scenes that exaggerate stereotypes and strip people of dignity.
What's happening
Researchers tracking global health visuals say many AI images copy the same old tropes - empty plates, cracked earth, distressed children - but pushed further for clicks. Searches for "poverty" on stock platforms return staged, racialized scenes tagged as refugee camps, child hunger, and "medical care in African villages," often priced around the cost of a team lunch.
Some NGOs have already tested or used AI in campaigns about child marriage and sexual violence. One high-profile UN video mixing real and synthetic "re-enactments" was later taken down over integrity concerns.
Why this matters for PR, communications, IT, and development teams
This isn't just an optics issue. Synthetic suffering can mislead donors, harm the people you serve, and erode trust with partners and the public. It also risks reinforcing harmful racial stereotypes and retraumatizing survivors - without any of the safeguards required for real stories.
There's another layer: these images can seep back into training data and make the next generation of models even more biased.
The consent trap
Teams often reach for AI to "avoid consent problems." That's a false fix. Photorealistic depictions of minors, abuse, or deprivation - even if synthetic - still carry ethical, legal, and reputational risk. If an image looks real, the harm lands as if it were real.
Real examples to learn from
- Campaigns have circulated AI-generated images of a girl with a black eye, a pregnant teenager, and staged wedding scenes implying child marriage. Pushback followed.
- A UN video using AI-generated "re-enactments" of sexual violence was removed after concerns about mixing near-real synthetic content with real footage.
- Stock platforms host photorealistic AI-generated content tagged by users; some platform leaders say demand drives supply, even as bias persists in what sells.
What to use instead
- Real stories with informed consent, community review, and options for anonymity.
- Abstract or illustrative visuals (icons, data visuals, motion graphics) for sensitive topics - not photorealistic faces.
- Context-rich photography that shows agency, resilience, and day-to-day life, not staged despair.
Policy checklist for NGOs and agencies
- Ban: No AI-generated photorealistic depictions of identifiable children, sexual violence, or specific communities in distress.
- Limit AI visuals to concepts: Use clearly synthetic, non-photorealistic styles for ideas (e.g., icons for "access," "funding," "care").
- Label everything: Mandatory "AI-generated" disclosure on posts, videos, and metadata. No mixing of real and synthetic without explicit labels (see the metadata sketch after this checklist).
- Bias and dignity review: Preflight checks for stereotypes, racialized cues, and sensationalism. Include local voices in approvals.
- Consent by design: For real images, use plain-language consent, purpose limits, renewal windows, and withdrawal options.
- Human sign-off: Sensitive content requires cross-functional approval (programs, safeguarding, legal, comms).
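One way to make the "label everything" rule enforceable is to write the disclosure into the asset's own metadata, not just the caption. The sketch below is a minimal example, assuming ExifTool is installed and that your pipeline adopts the IPTC Digital Source Type vocabulary for AI-generated media; the helper name and the description text are illustrative, not a standard.

```python
# Minimal sketch: stamp an AI-generation disclosure into an image's XMP metadata.
# Assumes ExifTool is installed and on PATH; adapt the tag choices to your own
# metadata policy. The IPTC "trainedAlgorithmicMedia" value marks content
# created by a generative model.
import subprocess

AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def label_ai_generated(path: str, note: str = "AI-generated image") -> None:
    """Write an explicit AI-generation disclosure into the file's metadata."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={AI_SOURCE_TYPE}",
            f"-XMP-dc:Description={note}",
            "-overwrite_original",
            path,
        ],
        check=True,
    )

if __name__ == "__main__":
    label_ai_generated("campaign_concept.png")
```

Downstream tooling (CMS plugins, social schedulers) can then check for the same tags before anything goes live, which keeps the disclosure machine-readable rather than depending on caption discipline.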
Technical controls for IT and development teams
- Provenance by default: Implement Content Credentials (C2PA) in creation and publishing pipelines, and verify incoming assets for provenance or watermarks; see the C2PA standard for details. A minimal intake sketch follows this list.
- Model guardrails: Blocklist prompts (e.g., "bruised child," "refugee camp sadness"), set geographic neutrality defaults, and enforce safety filters (a prompt-filter sketch also follows this list).
- Procurement controls: Whitelist stock providers and require model cards, licensing clarity, and metadata retention. No assets without provenance or usage rights.
- Detection is not a safety net: AI-detection tools are inconsistent. Prioritize provenance, policy, and human review.
- Data hygiene: Keep synthetic visuals out of training sets for internal comms models unless clearly labeled and bias-checked.
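As a sketch of the "provenance by default" and procurement controls above, the function below gates incoming assets on provenance, supplier, and rights metadata. The field names and supplier list are assumptions for illustration, and the content-credentials flag should come from a real C2PA verification step (for example c2patool or a C2PA SDK) in your own pipeline.

```python
# Hypothetical intake gate: refuse assets without verifiable provenance or
# usage rights. Field names are illustrative; `has_content_credentials` should
# hold the result of an actual C2PA manifest check performed upstream.
from dataclasses import dataclass
from typing import List

APPROVED_SUPPLIERS = {"vetted-stock-co", "in-house-photo-team"}  # illustrative names

@dataclass
class IncomingAsset:
    path: str
    supplier: str
    has_content_credentials: bool = False  # result of your C2PA verification
    has_usage_rights: bool = False         # licensing confirmed by procurement

def intake_review(asset: IncomingAsset) -> List[str]:
    """Return reasons to reject the asset; an empty list means it may proceed."""
    reasons = []
    if asset.supplier not in APPROVED_SUPPLIERS:
        reasons.append("supplier not on the approved list")
    if not asset.has_content_credentials:
        reasons.append("no verifiable content credentials (C2PA)")
    if not asset.has_usage_rights:
        reasons.append("missing licensing or usage rights")
    return reasons

if __name__ == "__main__":
    asset = IncomingAsset("downloads/refugee_story.jpg", "random-stock-site")
    print(intake_review(asset))
```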
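For the guardrail point, a minimal prompt filter might look like the sketch below. The blocklisted phrases are only examples drawn from the list above; a real deployment would build the list with programme and safeguarding staff and pair it with the model provider's own safety filters and human review.

```python
# Minimal sketch of a prompt guardrail: block generation requests that hit
# known stereotype or exploitation phrases before they reach an image model.
# The phrases here are illustrative only.
BLOCKLIST_PHRASES = [
    "bruised child",
    "refugee camp sadness",
    "starving child",
    "crying african child",
]

def check_prompt(prompt: str) -> list:
    """Return the blocklisted phrases found in a generation prompt."""
    lowered = prompt.lower()
    return [phrase for phrase in BLOCKLIST_PHRASES if phrase in lowered]

def guard_generation(prompt: str) -> None:
    """Raise before the request is ever sent to an image model."""
    hits = check_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked by content policy: {hits}")

if __name__ == "__main__":
    guard_generation("icon set for community health funding")  # passes silently
    try:
        guard_generation("photorealistic bruised child in a clinic")
    except ValueError as err:
        print(err)  # Prompt blocked by content policy: ['bruised child']
```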
Contract language for vendors and agencies
- Declare any AI use in concepting, production, or post. Prior approval required for synthetic assets.
- No photorealistic depictions of minors, sexual violence, or specific communities in distress. Full stop.
- Require provenance metadata, rights warranties, and indemnity for AI-related misrepresentation or IP claims.
- Right to audit assets and upstream sources.
Crisis playbook (if an AI image backfires)
- Immediate takedown and asset freeze. Preserve originals for investigation.
- Issue a clear correction and apology. Explain what happened and what changes now.
- Notify donors, partners, and affected communities first. Then address public channels.
- Update policy, retrain teams, and publish the fix.
The feedback loop risk
Biased visuals don't stay in one campaign; they spread online and can re-enter training sets, amplifying prejudice. Experts have warned this is "poverty porn 2.0," now produced at scale and disguised as authenticity.
For more on this critique in global health, see the commentary in The Lancet Global Health.
Make the ethical path the easy path
- Set a clear policy that removes ambiguity for creators.
- Build tooling that blocks high-risk outputs and preserves provenance.
- Reward teams for storytelling that respects dignity and accuracy.
Next step for teams
If your organization is updating its AI usage policy, train your PR, comms, and product teams together. Shared standards beat ad-hoc fixes.
Useful place to start: role-based programs on safe AI use and content governance at Complete AI Training.