How Pro-Russia Disinformation Campaigns Are Using Free AI Tools to Flood the Internet with Fake Content

A pro-Russia campaign is using free AI tools to flood social media with fake content on divisive global issues, especially the war in Ukraine. The effort spreads through hundreds of channels and relies on manipulated videos, images, and text.

Published on: Jul 02, 2025

A Pro-Russia Disinformation Campaign Leveraging Free AI Tools to Amplify Content

A pro-Russia disinformation campaign has been using widely available AI tools to massively increase the volume of misleading content. The campaign, known under names like Operation Overload and Matryoshka, has been active since 2023. Multiple organizations, including Microsoft and the Institute for Strategic Dialogue, link it to the Russian government.

The operation focuses on spreading false narratives by impersonating legitimate media outlets. Its goal is to deepen divisions within democratic societies by targeting sensitive topics such as global elections, the Ukraine conflict, and immigration. While the campaign targets audiences worldwide, Ukraine remains its primary focus.

AI Tools Fuel a Content Explosion

Between September 2024 and May 2025, the campaign's output surged dramatically. Researchers identified 230 unique pieces of content, ranging from images and videos to QR codes and fake websites, produced between July 2023 and June 2024. In the roughly eight months from September 2024 to May 2025, by contrast, Operation Overload generated 587 unique pieces, most of them created with free AI tools.

This spike is attributed to “content amalgamation,” in which AI tools are used to spin a single narrative into many pieces of content across formats. According to experts from Reset Tech and Check First, this signals a shift toward more scalable, multilingual, and sophisticated propaganda methods.

Diverse Content and Accessible AI Tools

The campaign’s content has diversified in style, layering different formats to approach the same story from multiple angles. Notably, no custom AI tools appear to be involved; the operation relies on consumer-grade AI voice and image generators available to the public.

One notable tool is Flux AI, a text-to-image generator developed by Black Forest Labs. Researchers assessed a 99 percent likelihood that many of the fake images, such as those depicting migrant riots in European cities, were generated with Flux AI. By feeding the model discriminatory prompts, the campaign amplifies racist and anti-Muslim stereotypes.

Black Forest Labs says it implements safeguards to prevent misuse of its technology, including provenance metadata that helps identify AI-generated content. Researchers noted, however, that the manipulated images lacked such metadata, which complicates detection.
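For teams triaging suspect images, the complete absence of embedded metadata is one quick, if weak, signal. The following minimal Python sketch, assuming the Pillow imaging library and a hypothetical filename, checks whether a file carries any EXIF or XMP data at all; it illustrates the kind of first-pass check described above, not the researchers' actual tooling.

```python
# Minimal triage sketch using the Pillow library. The filename is
# hypothetical, and missing metadata is only a weak signal: screenshots
# and social-media re-uploads routinely strip it. Not proof of AI origin.
from PIL import Image

def has_any_metadata(path: str) -> bool:
    """Return True if the image carries EXIF tags or an XMP packet."""
    with Image.open(path) as img:
        exif = img.getexif()           # empty mapping if EXIF was stripped
        xmp = img.info.get("xmp")      # raw XMP bytes, when the format exposes them
    return len(exif) > 0 or xmp is not None

if __name__ == "__main__":
    path = "suspect_riot_photo.jpg"    # hypothetical file
    if not has_any_metadata(path):
        print(f"{path}: no provenance metadata found; inspect further")
```

A missing metadata block should prompt closer inspection rather than a verdict, since legitimate images lose metadata in ordinary re-sharing as well.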

AI-Driven Video Manipulation

Voice-cloning technology has been used to create fake videos of public figures. The number of videos produced jumped from 150 between June 2023 and July 2024 to 367 between September 2024 and May 2025. Most of the recent videos use AI-generated audio to convincingly misrepresent the individuals shown.

For example, a manipulated video showed a French academic appearing to urge German citizens to riot and support the far-right AfD party. The original footage had been repurposed with an AI-generated voice overlay to spread a false political message.
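Because these fakes recycle real footage, one common verification technique is to compare frames from a suspect clip against known originals using perceptual hashing. The sketch below, assuming the open-source imagehash library and hypothetical frame files, shows the general idea; it is not a tool used in the research, and real workflows compare many keyframes, not a single pair.

```python
# Hedged verification sketch: perceptual hashes survive re-encoding,
# cropping, and light edits, so a small Hamming distance suggests the
# suspect frame was lifted from the original. Filenames are hypothetical.
from PIL import Image
import imagehash

def frames_match(frame_a: str, frame_b: str, max_distance: int = 8) -> bool:
    """Compare two video frames by perceptual hash."""
    hash_a = imagehash.phash(Image.open(frame_a))
    hash_b = imagehash.phash(Image.open(frame_b))
    return (hash_a - hash_b) <= max_distance  # subtraction yields Hamming distance

print(frames_match("suspect_clip_frame.png", "original_interview_frame.png"))
```

A match only establishes that the visuals were recycled; confirming that the audio was fabricated requires separate analysis of the soundtrack.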

Wide Distribution Across Multiple Platforms

Operation Overload spreads its content through more than 600 Telegram channels and through bot accounts on social platforms such as X and Bluesky. The campaign recently expanded to TikTok, where 13 accounts posted videos that were viewed 3 million times before the accounts were removed.

TikTok says it actively removes accounts involved in such covert influence operations. Bluesky has likewise suspended many of the fake accounts, while X has taken minimal action despite repeated reports and evidence.

Unusual Tactic: Alerting Fact-Checkers

In an unusual move, the campaign emails hundreds of media and fact-checking organizations, pointing them to examples of its own fake content and urging them to investigate. The goal is amplification: even when the material is debunked and labeled as fake, coverage by legitimate outlets extends its reach.

Since September 2024, as many as 170,000 such emails have been sent to more than 240 recipients. Notably, while the fake content is AI-generated, the emails themselves do not appear to be written with AI.

AI Tools in Disinformation: A Growing Concern

Pro-Russia disinformation groups have long experimented with AI to boost their output. Similar campaigns have used large language models to create fake news sites that appear authentic. These efforts often gain traction through social media promotion, sometimes reaching top search results.

A recent estimate suggests Russian disinformation networks produce at least 3 million AI-generated articles annually. This flood of content can also poison AI chatbot responses: models trained on, or retrieving from, the open web can ingest the false claims and repeat them, further complicating the fight against misinformation.

Experts warn that as AI-generated content becomes harder to distinguish from authentic material, disinformation campaigns will continue to escalate. The tools and methods are already well established, making vigilance essential.

What This Means for PR and Communications Professionals

The rise of AI-powered disinformation campaigns presents new challenges for public relations and communications professionals. Fact-checking and content-verification workflows must adapt to the growing sophistication of AI-generated fakes.

Awareness of AI’s role in spreading false narratives is critical. Professionals should monitor emerging AI tools and understand how they can be misused. Staying informed can help organizations better protect their reputations and counter misinformation effectively.

For those interested in learning more about AI tools and their implications, resources like Complete AI Training offer courses on AI applications and ethics.