AI-Generated Images Target Mayor Zohran Mamdani - A Clear Warning for Government
Mayor Zohran Mamdani on Wednesday addressed a wave of fake, AI-generated photos circulating online that falsely place him and his mother, filmmaker Mira Nair, alongside Jeffrey Epstein and others. The images appeared after the U.S. Department of Justice released new Epstein files and were reportedly created by an account calling itself an "AI-powered meme engine." Multiple outlets have noted the images may have used an AI tool developed by Google. "At a personal level, it is incredibly difficult to see images that you know to be fake… and yet can reach across the entirety of the world in an era of misinformation," Mamdani said.
This is not the first time AI-generated content has been used against Mamdani. During last year's mayoral campaign, Andrew Cuomo's team briefly ran an AI-generated ad depicting Mamdani in a series of inflammatory images; the campaign later said it was posted in error.
Gov. Kathy Hochul has since proposed a ban on generative AI in political campaigns in New York. Mamdani said he also discussed AI policy in city schools with the schools chancellor on Wednesday, as the education department prepares updated classroom guidance later this month.
Why this matters for public officials
- Deepfakes now move faster than official corrections, and they're persuasive at a glance.
- Political operations, agencies, and schools are all exposed: a forged photo can distort public perception, chill participation, and erode trust.
- Campaign rules, procurement standards, and classroom policies need to anticipate synthetic media, not chase it.
Immediate actions for city, state, and agency leaders
- Publish a synthetic media policy: require disclosure labels for any AI-generated content produced or paid for by your office; ban deceptive uses outright.
- Stand up an incident response playbook: who verifies the claim, who drafts the statement, which channels push corrections, and which legal pathways are triggered.
- Adopt content authenticity and provenance: cryptographically sign or watermark official media, and verify the provenance of inbound media where possible. See the C2PA (Coalition for Content Provenance and Authenticity) standard.
- Update procurement: mandate model transparency, content provenance support, and clear red-teaming/abuse reporting in all AI-related contracts.
- Staff training: teach comms, legal, and frontline teams how to spot manipulated media, verify source files, and escalate quickly. Don't rely solely on detectors.
- Election-season safeguards: coordinate with law enforcement, platforms, and campaigns on reporting channels for synthetic smear campaigns.
- Schools: pair AI literacy with clear classroom rules on tool usage, disclosure, and plagiarism; align with district-wide guidance.
- Legal posture: map applicable state and city statutes on impersonation, consumer protection, defamation, and election law; be ready to act.
- Transparency habit: release originals (photos, videos, transcripts) quickly to create a verifiable record that outpaces falsehoods.
Build durable guardrails
- Advance state and city laws that require clear labels for AI-generated political ads and establish penalties for deceptive synthetic media.
- Standardize authenticity signals across agencies (hashing, watermarks, provenance metadata) and publish verification guidance for the public.
- Adopt the NIST AI Risk Management Framework for AI governance across departments.
- Fund monitoring capacity during high-risk windows (elections, major policy launches) and set MOUs with platforms for expedited takedowns of deceptive forgeries.
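The hashing signal mentioned above can be stood up with nothing more than standard library tools. The sketch below (a minimal illustration, not an official schema; the directory layout and manifest format are assumptions) hashes every file in a media release into a publishable manifest, and lets anyone verify a downloaded copy against it:

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large videos never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(media_dir: Path) -> dict:
    """Hash every file in a release directory into a manifest an agency can publish."""
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "files": {
            p.name: sha256_file(p)
            for p in sorted(media_dir.iterdir())
            if p.is_file()
        },
    }


def verify(path: Path, manifest: dict) -> bool:
    """Check a downloaded file against the published manifest entry."""
    expected = manifest["files"].get(path.name)
    return expected is not None and sha256_file(path) == expected
```

Publishing the manifest alongside the originals gives journalists and the public a fast check: a forged or altered image will not match any published hash. A full deployment would also sign the manifest itself (e.g., with an agency key) so the manifest cannot be swapped out.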
If your team needs structured, practical AI upskilling, see our curated options by role at Complete AI Training - Courses by Job.
The bottom line
As Mamdani suggested, a lie can lap the field before the truth gets moving. Set clear rules, prepare your incident playbook, and give the public fast, verifiable sources so your facts travel faster than fakes.