How deepfakes and cloned voices are distorting Europe's elections
Europe heads into a packed run of elections in 2025 and 2026 with a new risk: AI-driven manipulation at scale. The fight for votes is now waged on screens, through fabricated videos, cloned voices, and convincing imitations of trusted media.
This isn't hypothetical. It's here, and it's organized. Government teams should treat it as a constant threat surface, not a one-off incident.
Moldova: a playbook for cross-border manipulation
In late 2023, a viral Telegram video falsely showed Moldova's President Maia Sandu disowning her government. Investigators linked the Russian-voiced clip to the Kremlin-connected "Matryoshka" bot network, which reportedly produced it with the Luma AI video platform. It recycled old narratives tied to fugitive oligarch Ilan Shor, packaging them as "new."
France's cybersecurity agency Viginum later described how deepfake videos, including the Sandu imitation, spread through Telegram and TikTok via a pro-Russian network affiliated with Komsomolskaya Pravda. Websites like moldova-news.com were presented as independent news but were part of a coordinated effort, according to Viginum.
Troll factories now look like newsrooms
Researchers at Alliance4Europe report a surge in AI-enabled election influence. What used to be copy-paste spam now appears as fresh articles and comments rewritten by AI, adjusted per audience, and posted across many accounts.
France's Foreign Ministry said the Storm-1516 network has launched 77 Russian disinformation campaigns since 2023. Operations linked to the Russian Foundation for Battling Injustice clone reputable outlets, scrape their articles, rewrite or translate them, and republish them to build credibility. During the European Parliament elections, hundreds of such sites were observed.
Personal targets: academics and officials
Smear campaigns aren't limited to candidates. Professor Dominique Frizon de Lamotte was targeted with an AI-generated video faking his image and voice, attempting to link him to pro-Russian groups in Moldova. EUvsDisinfo and French media flagged it as an attempt to erode trust in experts.
Romania: interference and a historic rerun
Officials in Romania reported AI-linked interference during the 2024 presidential election. The Constitutional Court annulled the results, an unprecedented decision in Europe. In the May 2025 rerun, fabricated content and far-right narratives spread across TikTok and Telegram. Pro-European candidate Nicușor Dan won the repeat vote.
Hungary: the next major test
Ahead of 2026, pro-government groups in Hungary, including the National Resistance Movement, have spent over €1.5 million on unlabelled AI videos targeting opposition leader Péter Magyar. Some clips show fake scenes of Hungarian soldiers dying in Ukraine to trigger outrage. Magyar called the content "pathetic" and "election fraud." Even when viewers suspect a fake, the emotional hit can stick.
What the EU is doing
The European Union has introduced transparency rules for political advertising and, under the Digital Services Act, accountability obligations for platforms. There is also an EU Rapid Alert System and an AI Integrity Taskforce coordinating across borders and languages. Enforcement is improving, but the volume and speed of AI content demand faster responses at national and local levels.
Action plan for government teams
1) Build early warning and monitoring
- Track Telegram, TikTok, and fringe sites alongside mainstream platforms. Don't ignore small channels; they seed narratives.
- Set up alerts for names, offices, and likely issues (mobilization, corruption, migration, war casualties, vote fraud); see the alerting sketch after this list.
- Map known networks (e.g., site clones, bot clusters) and watch for mirrored content across languages.
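For teams starting from scratch, the alerting step can begin as a simple script: scan collected posts against watchlists of protected names, hot-button issues, and known clone domains. Below is a minimal sketch, assuming you already gather posts through platform APIs, exports, or a monitoring vendor; the watchlists, feed format, and field names are illustrative placeholders, not references to any real service.

```python
# Minimal keyword-alert sketch for narrative monitoring.
# Assumes posts are already collected elsewhere (APIs, exports, vendor);
# all watchlist entries and field names below are illustrative.
import re
from dataclasses import dataclass

WATCHLIST = {
    "people": ["Maia Sandu", "Péter Magyar"],                    # officials to protect
    "issues": ["mobilization", "vote fraud", "war casualties"],  # likely attack themes
    "domains": ["moldova-news.com"],                             # known clone sites
}

@dataclass
class Post:
    channel: str  # e.g. "telegram:example_channel"
    text: str
    url: str

# Precompile one case-insensitive pattern per watchlist category.
PATTERNS = {
    label: re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)
    for label, terms in WATCHLIST.items()
}

def scan(posts: list[Post]) -> list[dict]:
    """Return one alert per post per matched watchlist category."""
    alerts = []
    for post in posts:
        for label, pattern in PATTERNS.items():
            hit = pattern.search(post.text)
            if hit:
                alerts.append({
                    "category": label,
                    "term": hit.group(0),
                    "channel": post.channel,
                    "url": post.url,
                })
    return alerts

if __name__ == "__main__":
    sample = [Post("telegram:demo", "New clip: Maia Sandu resigns?!", "https://t.me/demo/1")]
    for alert in scan(sample):
        print(alert)
```

In practice you would persist alerts, deduplicate mirrored content across languages, and route hits into the incident-response workflow in step 6.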
2) Prebunk, then debunk
- Publish short prebunks on common tactics: voice cloning, out-of-context clips, fake media lookalikes, and doctored war footage.
- Use side-by-side comparisons and clear labels when debunking. Keep it fast, neutral, and verifiable.
- Push prebunks via trusted local channels (municipal pages, SMS lists, radio, community groups).
3) Protect officials' voices, images, and accounts
- Record clean "voice prints" and establish official voice and video baselines for rapid verification.
- Lock down accounts with passkeys or hardware keys. Enforce admin separation and logging.
- Watermark official media and publish originals for provenance checks.
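One lightweight way to support the provenance item above is to publish a hash manifest alongside official media, so journalists and fact-checkers can confirm whether a circulating file is identical to the original. Here is a minimal sketch, assuming a local directory of published files; the paths and manifest name are placeholders.

```python
# Provenance-manifest sketch: hash official media so third parties can
# verify a circulating file against the published original.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large videos are not loaded into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(media_dir: Path) -> dict:
    """Map each published file to its hash; publish this next to the media."""
    return {p.name: sha256_of(p) for p in sorted(media_dir.glob("*")) if p.is_file()}

def verify(candidate: Path, manifest: dict) -> bool:
    """True only if the candidate matches a published original byte-for-byte."""
    return sha256_of(candidate) in manifest.values()

if __name__ == "__main__":
    manifest = build_manifest(Path("official_media"))  # placeholder directory
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

Note the limits: a hash proves only byte-for-byte identity, so re-encoded or cropped copies will not match. Treat this as a complement to watermarking and content-credential standards such as C2PA, not a replacement.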
4) Tighten ad and content transparency
- Audit political ad libraries weekly. Flag unlabelled or "issue" ads that avoid political labels; a sample audit script follows this list.
- Standardize disclosures for AI-generated content used by government channels.
- Coordinate with platforms early for fast takedown lanes during election windows.
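The weekly audit can likewise start as a script over whatever export your platform's ad library provides. The sketch below flags ads whose text mentions election issues but that carry no political label; the CSV columns (ad_text, political_label, ad_id, advertiser) are assumptions to adapt, not a real export schema.

```python
# Ad-library audit sketch: flag issue ads that avoid political labels.
# Column names are assumptions; adapt them to your platform's export.
import csv

ISSUE_TERMS = ("migration", "mobilization", "vote", "corruption", "war")

def audit(csv_path: str) -> list[dict]:
    """Return rows that mention an election issue but are not labelled political."""
    flagged = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row.get("ad_text", "").lower()
            labelled = row.get("political_label", "").strip().lower() == "yes"
            if any(term in text for term in ISSUE_TERMS) and not labelled:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in audit("ads_export.csv"):  # placeholder export file
        print(row.get("ad_id"), row.get("advertiser"))
```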
5) Train frontline staff
- Give comms teams and local administrators short drills: identify a deepfake, verify, respond in under 60 minutes.
- Teach pattern spotting: cloned news sites, subtle logo changes, machine-translated text tells, and recycled footage.
- Include legal, cybersecurity, police, and election bodies in table-top exercises.
6) Incident response: 6 steps
- Verify: source, metadata, and inconsistencies (lighting, lip sync, accents, timestamps); a metadata triage sketch follows this list.
- Label: "Suspected synthetic media" with clear evidence; avoid amplifying the false claim itself.
- Notify: election authority, platform contacts, and cross-border partners if relevant.
- Contain: request takedowns of originals and mirrors; track re-uploads.
- Counter-message: short, calm update from a trusted spokesperson; direct to verified information.
- Review: log the tactic, channels, and speed; update the playbook within 24 hours.
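Part of the "Verify" step can be scripted. The sketch below shells out to ffprobe (bundled with FFmpeg) to pull container tags and stream details from a suspect clip. Remember that metadata is easily stripped or forged, so missing or odd values are leads for an analyst, not proof of synthesis.

```python
# Metadata triage sketch for the "Verify" step, using ffprobe (FFmpeg).
# Requires ffprobe on PATH; the input filename is a placeholder.
import json
import subprocess

def probe(path: str) -> dict:
    """Return ffprobe's JSON view of the container and streams."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def triage(path: str) -> None:
    """Print the fields analysts usually check first."""
    info = probe(path)
    tags = info.get("format", {}).get("tags", {})
    print("creation_time:", tags.get("creation_time", "<missing>"))
    print("encoder:", tags.get("encoder", "<missing>"))
    for stream in info.get("streams", []):
        print(stream.get("codec_type"), stream.get("codec_name"),
              stream.get("width"), stream.get("height"))

if __name__ == "__main__":
    triage("suspect_clip.mp4")
```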
7) Focus outreach on high-risk groups
- Older voters and regions with low digital literacy need simple cues: "Where to check," "What to ignore," "Who to call."
- Use offline touchpoints: local TV, radio, print inserts, town halls, and community leaders.
- Provide a clear hotline or WhatsApp number for citizens to report suspicious content.
8) Strengthen procurement and policy
- Require synthetic media detection and content provenance features in new comms and monitoring tools.
- Embed disclosure rules for AI-generated materials in agency guidelines.
- Create cross-agency MOUs for sharing signals and issuing joint advisories within hours, not days.
The takeaway
AI makes manipulation cheaper, faster, and harder to spot at first glance. But it's beatable with preparation: monitoring, prebunking, clear response playbooks, and credible local voices.
Set up the systems now. The cost of delay is public trust.
Skill up your team
If your department is building internal training on synthetic media and information risk, explore practical courses by role here: Complete AI Training - courses by job.