AI summaries can't protect you from media bias - here's what PR and Comms need to do now
Search results are being replaced by AI summaries. One study highlighted by The Guardian warned that the shift can cut click-throughs to news sites by as much as 80%. Less traffic means fewer chances to add context, correct the record, or get your message in front of decision-makers.
Here's the bigger risk: bias. As Tim Graham of Media Research Center noted in a recent interview, AI summaries pull from sources the systems see as "trustworthy," even if those sources lean one way. Dr. Mrinal Chatterjee adds a blunt point: "Present-day AI can't filter bias… it comes down to human beings to filter bias by using their common sense."
Why AI summaries miss the mark
- Source dependence: Summaries mirror the lean of the sources they ingest and rank, not objective reality.
- Probability over truth: LLMs predict plausible text; they don't verify facts or weigh context like a newsroom editor.
- Framing loss: Nuance, caveats, and time-sensitive updates get flattened into one confident paragraph.
- Attribution gaps: Your data and quotes may appear without links, starving you of traffic and control.
What this means for PR and Communications
- Misframing risk: Product news, policy positions, or crisis statements can get "summarized" into a skew that sticks.
- Zero-click distribution: Your primary sources are read through someone else's lens, and readers have little incentive to click through to the original.
- Amplified narratives: If coverage tilts, AI will often reflect and reinforce it.
Bias-check framework for your team
- Source triage: Maintain an internal map of outlets by lean and reliability. Use third-party tools like AllSides for a baseline, then refine with your own experience.
- Summary sanity check: For major announcements or crises, have a designated reviewer compare AI summaries (where visible) to your primary materials. Log drift, missing context, and misquotes.
- Context control: Publish a concise fact sheet, timeline, and FAQ with clear headers. The clearer your structure, the less room for misinterpretation.
- Citations everywhere: Link to primary data, full reports, and raw numbers in every release and post. AI systems and journalists need anchors.
- Structured data: Mark news posts with NewsArticle schema and include dates, named sources, and claim/evidence sections. This helps machines extract correctly.
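To make the structured-data point concrete, here is a minimal sketch of a NewsArticle JSON-LD block, built in Python so it can be generated programmatically. Every headline, name, date, and URL below is a placeholder, not a real example from this article:

```python
import json

# Minimal schema.org NewsArticle markup. All values are placeholders --
# swap in your real headline, dates, people, and canonical URLs.
article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Acme Corp releases Q3 safety data",
    "datePublished": "2024-09-12T09:00:00Z",
    "dateModified": "2024-09-13T14:30:00Z",  # keep current as updates ship
    "author": {"@type": "Person", "name": "Jane Doe, VP Communications"},
    "publisher": {"@type": "Organization", "name": "Acme Corp"},
    "mainEntityOfPage": "https://example.com/newsroom/q3-safety-data",
    # Link machines (and journalists) straight to the primary evidence.
    "citation": ["https://example.com/reports/q3-safety-full.pdf"],
}

# Emit the <script> tag you would embed in the news post's HTML <head>.
print('<script type="application/ld+json">')
print(json.dumps(article, indent=2))
print("</script>")
```

Named dates and sources in the markup give extraction systems unambiguous fields to pull, instead of forcing them to infer those details from body text.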
Press material tactics that reduce skew
- Lead with verifiable facts: Put the "hard numbers" up top. Avoid adjectives that invite reframing.
- Quote discipline: Short, unambiguous quotes that stand on their own. Provide a secondary quote for context so cherry-picking hurts less.
- One-page brief: A single canonical URL with summary, data points, and links out. Make it the source of truth you reference everywhere.
- Visuals with captions: Include alt text and captions that state facts, not spin. AI often pulls from image metadata and nearby text.
Monitoring and response
- Bias watchlist: Track outlets by topic and lean. Flag repeat misframers and preempt with tailored clarifications.
- Zero-click metrics: Monitor branded search impressions vs. click-through, referral mix shifts, and the ratio of mentions without links.
- Rapid corrections: Keep a templated clarification note ready. When summaries omit context, issue a short, linkable correction that references your canonical page.
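The monitoring bullets above boil down to a couple of simple ratios you can compute from whatever mention data your media-monitoring tool exports. A minimal sketch, assuming each record carries the outlet, whether the piece links back to your canonical page, and the lean label from your internal bias map (all field names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Mention:
    outlet: str
    links_back: bool  # did the piece link to your canonical page?
    lean: str         # your internal bias-map label, e.g. "left"/"center"/"right"

def attribution_gap(mentions: list[Mention]) -> float:
    """Share of mentions that reference you without linking back."""
    if not mentions:
        return 0.0
    unlinked = sum(1 for m in mentions if not m.links_back)
    return unlinked / len(mentions)

def lean_skew(mentions: list[Mention]) -> dict[str, int]:
    """Count coverage by bias-map label to spot tilted pickup early."""
    counts: dict[str, int] = {}
    for m in mentions:
        counts[m.lean] = counts.get(m.lean, 0) + 1
    return counts

# Hypothetical week of coverage for one announcement.
mentions = [
    Mention("Outlet A", links_back=True, lean="center"),
    Mention("Outlet B", links_back=False, lean="left"),
    Mention("Outlet C", links_back=False, lean="left"),
    Mention("Outlet D", links_back=True, lean="right"),
]
print(f"attribution gap: {attribution_gap(mentions):.0%}")  # 50%
print(f"lean skew: {lean_skew(mentions)}")
```

A rising attribution gap is your earliest signal that summaries are absorbing your message without sending readers to the source; a sudden lean skew tells you which clarifications to preempt.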
Educate your stakeholders
Executives need to know that AI summaries can distort intent. Set expectations: even accurate coverage can turn inaccurate in summary form. Train spokespeople to keep quotes tight, factual, and hard to twist.
If your team needs structured upskilling on AI in communications, explore practical courses by job function here: AI courses by job.
Crisis playbook addendum for the AI era
- Pre-brief key outlets with your one-page brief and data room links before a volatile announcement.
- Publish the source of truth first, then pitch. Own the narrative anchor that AI will likely sample from.
- Update cadence: Timestamp updates and keep a changelog. Summaries often miss "what changed."
- Escalation grid: If a major summary misleads, escalate to the platform and the top three outlets shaping it, in parallel.
About "bias filters" and current tools
Experts are clear: current AI can't reliably filter bias. Human judgment, diverse sourcing, and transparent evidence still do the heavy lifting. Some organizations are experimenting with bias trackers and "offender" lists, and the White House has referenced an online portal to flag false or misleading stories in the U.S., but practical day-to-day defense still sits with your team's process.
Action checklist
- Create a bias map of your media universe and refresh it quarterly.
- Ship a canonical, structured, link-rich source for every major story.
- Standardize a summary review step for critical releases.
- Track zero-click indicators and missing-attribution mentions.
- Keep a corrections template and contact list ready for fast outreach.
- Train spokespeople to use quotable, factual, context-rich soundbites.
AI summaries aren't a neutral layer. Treat them like another outlet with its own blind spots. If you reduce ambiguity, foreground evidence, and maintain fast feedback loops, you'll keep more of your message intact as it travels.