EU-Funded Researchers Build AI Tools to Detect AI-Generated Disinformation
Last winter, videos spread across European social media claiming that radical Islamists were "invading" Christmas markets. One clip appeared to show disruptions at the Brussels market. Another photo showed heavy security surrounding the venue. The narrative was clear: Christian traditions faced a threat.
The videos came from peaceful demonstrations. The photo was AI-generated. What looked convincing was misleading or entirely fabricated.
Nearly two-thirds of Europeans encountered disinformation or fake news within the previous week, according to a recent European Commission survey. As generative AI tools produce increasingly realistic images, videos and text, distinguishing fact from fiction has become harder.
Fighting Fire With Fire
In 2020, researchers from universities, media organizations and technology companies launched AI4Media, an EU-funded four-year initiative to create AI tools for journalists and fact-checkers to verify digital content quickly.
Generative AI has lowered the barrier to producing convincing fake content. Anyone with access to these tools can now create fabricated images, cloned voices or realistic news articles. Social media platforms amplify that content at speed.
"When a fake story is supported by realistic images, it becomes much easier to believe - and more tempting to share because the content generates higher views," said Yiannis Kompatsiaris, research director at the Centre for Research & Technology Hellas, who coordinated the initiative.
The AI4Media team built verification tools designed to fit directly into newsroom workflows. Media organizations including Deutsche Welle and VRT in Belgium tested them in real-world settings.
Akis Papadopoulos, a researcher at CERTH, described the technology as a "first line of defence" - not a replacement for human judgment, but a way to flag potentially manipulated content quickly.
The European Digital Media Observatory, an independent EU-funded hub monitoring disinformation campaigns across member states, reports that AI-generated disinformation has increased steadily in recent months. Coordinated campaigns can influence elections, distort public debate and undermine trust in institutions.
Tracking How Disinformation Spreads
Identifying manipulated content is only part of the problem. Understanding how disinformation spreads - who amplifies it, how narratives evolve and whether campaigns are coordinated - matters equally.
A parallel project called AI4Trust, led by Fondazione Bruno Kessler in Italy, partnered with universities and media organizations across Europe to analyze the wider dynamics of online disinformation. Partners included Euractiv, Sky Italia, and fact-checking services Maldita.es, Ellenika Hoaxes and Demagog.
While AI4Media focused on detecting manipulated media, AI4Trust built a hybrid human-machine system to monitor and analyze disinformation at scale. Its platform tracks multiple social media and news sites in near real time, using advanced AI algorithms to process multilingual and multimodal content - text, audio and images.
Because the volume of online material far exceeds human capacity, the system filters and flags posts carrying a high risk of being fake. Professional fact-checkers then review this material, and their verified assessments feed back into the system to improve performance.
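That triage loop - a model scores incoming posts, only high-risk ones reach human reviewers, and reviewers' verdicts become new training data - can be sketched in a few lines. The class and field names below are illustrative assumptions, not AI4Trust's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    risk_score: float  # model-assigned probability the post is fake (assumed)

@dataclass
class TriageQueue:
    """Hypothetical human-in-the-loop triage: the model flags high-risk
    posts, fact-checkers label them, and labels are kept for retraining."""
    threshold: float = 0.8
    labeled: list = field(default_factory=list)

    def flag(self, posts):
        # Only posts scoring above the risk threshold reach human reviewers.
        return [p for p in posts if p.risk_score >= self.threshold]

    def record_verdict(self, post, is_fake):
        # Verified assessments feed back into the system as training data.
        self.labeled.append((post.text, is_fake))

posts = [Post("claim A", 0.95), Post("claim B", 0.30), Post("claim C", 0.85)]
queue = TriageQueue()
flagged = queue.flag(posts)            # two posts exceed the threshold
queue.record_verdict(flagged[0], True)  # fact-checker confirms it is fake
```

The threshold is the key design choice in such a system: set too low, reviewers drown in false alarms; set too high, fabricated content slips through unreviewed.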
The two projects are complementary. One detects manipulated content; the other examines how it spreads. Together, they offer both the microscope and the wide-angle lens needed to counter AI-powered disinformation.
The Acceleration Problem
Using AI to detect AI might sound ironic. It is serious business.
"It is indeed funny, but it's like an arms race," Kompatsiaris said.
Generative AI models evolve at extraordinary speed. When AI4Media began, tools like ChatGPT were still in their infancy. Since then, the quality and realism of AI-generated content have advanced dramatically.
"We have entered a new era where the acceleration is hard for the human mind to keep up with," Papadopoulos said. "To keep up with AI, you need to be using AI."
As generative models grow more powerful, detection systems must constantly adapt. The team automated parts of the verification process and regularly retrained their systems. But staying ahead demands continued investment in both research and the media sector that depends on these technologies.
"The technology has progressed so fast that it's difficult even for us as researchers to keep up," Papadopoulos explained. "We had to continuously update our models to detect newly generated images."
Technology Alone Won't Solve This
Detection tools matter, but they're insufficient. Kompatsiaris said: "We need tools, but we also need policies and rules."
The EU's Digital Services Act requires very large online platforms to assess and mitigate systemic risks, including disinformation spread, and increase transparency about how their systems operate. The Artificial Intelligence Act introduces transparency obligations for certain generative AI systems, including requirements to label AI-generated content.
A draft Code of Practice on transparency for AI-generated content aims to encourage clearer disclosure and watermarking standards.
The European Media Freedom Act sets out safeguards to ensure that professional media content is recognized and protected on major platforms. Large platforms must notify recognized media outlets before removing journalistic content and explain their reasoning, giving organizations time to respond.
Public awareness remains vital. "There is no single solution," Kompatsiaris said. "We need a combination of AI tools, transparency, regulation and awareness if we want to be more effective against disinformation."