AI-Generated Propaganda Is Flooding Global Politics. Here's What It Means for Trust
Governments, state actors and private individuals are flooding social media with AI-generated videos, images and text designed to manipulate public opinion during conflicts and elections. The White House mixed real footage with movie clips in March to document strikes on Iran. Iran responded with AI-generated videos of attacks on Tel Aviv and US bases. Donald Trump has posted AI videos of himself piloting fighter jets and defecating on protesters.
Researchers have coined the term "slopaganda" to describe this phenomenon: AI-generated content created specifically to serve propaganda purposes. Unlike traditional misinformation, slopaganda doesn't always aim to deceive. It works through emotional resonance and repeated exposure, shaping how people feel rather than what they believe.
How Slopaganda Penetrates Public Defenses
Slopaganda succeeds because it targets distracted audiences on social media, where people scroll quickly and don't verify sources. The content is designed to be attention-grabbing and emotionally arresting, and usually negative.
A second mechanism is more insidious: slopaganda dilutes the information environment with falsehoods and half-truths. Generative AI tools can produce content indifferent to accuracy, and slopaganda weaponizes this capability. The Iranian Lego videos depicting Trump alongside Satan figurines don't claim to be real. Instead, they create associations (linking the US with evil, for example) through symbolic and emotional content.
Some slopaganda does mislead directly. Deepfakes created during crises can spread rapidly when people want information but authoritative sources are scarce. Once false associations enter someone's mind, they're difficult to remove. Even small misleading effects across large populations can shift election results, protest movements or public sentiment about military conflicts.
The Erosion of Shared Truth
A third consequence threatens institutions themselves. As slopaganda becomes more prevalent, people will grow better at spotting it, but they will also misidentify authentic content as fake. Public trust in genuinely trustworthy sources will decline.
When identifying trustworthy sources becomes difficult or impossible, people default to believing whatever feels comfortable or infuriating. In polarized societies facing economic, political and environmental crises, the breakdown of shared information sources accelerates existing divisions.
Three Practical Responses
Individual action: Develop digital literacy skills. Look for technical signs of AI generation in text, images and video. Check sources rather than glancing at headlines. Block accounts that routinely spread slopaganda instead of evaluating each piece in isolation.
Industry standards: Implement watermarking to identify AI-generated content. Remove slopaganda from platforms where people access news and critical information.
Corporate accountability: Hold companies like OpenAI, Google and X responsible through taxation and regulatory oversight. Fund both regulatory efforts and digital literacy education.
Slopaganda won't disappear. But with foresight and sustained effort, organizations and individuals can adapt to it and limit its spread. For communications and government professionals, this means treating information verification as operational infrastructure, not an afterthought.