How Brands Can Fight AI-Driven Disinformation and Deepfakes

Fake accounts are built to evade detection, making it difficult to pinpoint their exact source. Brands must use proactive tools to catch misinformation early and protect their reputation.

Published on: Jun 24, 2025

Q&A: AI, Disinformation, and Brand Perception with Cyabra

David Bar-Aharon, Global Director, Private Sector at Cyabra, recently answered key questions following his session on "AI and Reputation: Brand Perception, Disinformation and Deepfakes." The discussion focused on practical concerns PR professionals face regarding misinformation, brand safety, and AI-driven threats. Below are insights from that exchange, originally part of the PRNEWS Pro workshop "AI for PR."

Can You Pinpoint the Origin of Fake Accounts?

Fake accounts are deliberately crafted to avoid detection. While Cyabra can provide approximate locations based on publicly available data, identifying the exact source or funding behind these accounts is extremely difficult, because the teams behind misinformation campaigns use sophisticated methods to hide their presence.

Cyabra’s approach involves analyzing patterns tied to different countries and campaigns, offering educated guesses on where disinformation originates. However, pinpointing an exact address or individual is often impossible due to the level of obfuscation these actors employ.

What Tools Help with Proactive Threat Detection and Real-Time Monitoring?

Traditional social listening tools are a good starting point for monitoring brand mentions and public sentiment. Many PR teams or agencies already use these to track conversations.

Tools like Cyabra, however, focus specifically on detecting threats before misinformation gains traction. Unlike solutions that react after a narrative becomes widespread, Cyabra aims to identify even a single fake account pushing harmful content early on. This proactive approach lets brands and agencies respond before viral damage occurs.

What Size Company Should Be Concerned About Disinformation Threats?

While large international brands are common targets, smaller companies should not assume they are immune. Disinformation campaigns can affect any organization, regardless of size or industry.

Examples include local banks, manufacturing firms, and even small hospitals. One Cyabra client, a hospital with around 100 employees, faced fabricated negative narratives targeting its CEO. The risk exists at every level, so vigilance is necessary across the board.

Will Social Media Platforms Verify Accounts to Curb Bot-Based Misinformation?

True verification across social media platforms remains an aspirational goal. For now, platforms often tolerate fake accounts because those accounts still contribute to ad views and revenue, regardless of whether the viewer is real or automated.

Cyabra and similar companies are focused on providing private-sector solutions to detect and filter disinformation. The hope is that social platforms will eventually adopt stronger verification and moderation measures, but for now, brands must rely on external tools and strategies.

Is There Less Social Media Moderation Today?

Yes, moderation appears to be decreasing, which increases the burden on brands and communicators to identify misleading content themselves. Educating teams to critically evaluate social media actors and assess agendas behind statements is crucial.

By carefully examining profiles and content, PR professionals can better distinguish between genuine voices and coordinated misinformation campaigns. This effort requires ongoing vigilance and support from companies specializing in this space.

Are AI-Generated Deepfakes a Growing Threat to Brand Reputation?

Deepfake videos have become more sophisticated, but humans can often still spot subtle inconsistencies that reveal their artificial nature. Images, however, have already reached a level of realism that makes distinguishing real from fake very difficult.

For this reason, tools that can identify AI-generated images or videos are vital for organizations to verify content they encounter. Effective visual verification helps protect brand reputation from manipulation and false claims.

For PR professionals looking to deepen their knowledge of AI and digital threats, specialized courses can provide valuable skills. One such resource is Complete AI Training's latest AI courses, which cover tools and techniques for managing AI-driven risks in communications.