Facts In, Facts Out: Global Media Urge AI to Get News Right and Show Their Sources

AI assistants warp news, eroding trust. The EBU, WAN-IFRA, and FIPP are pushing for consent, clear attribution, and accountability. PR teams need to monitor and verify.

Categorized in: AI News, PR and Communications
Published on: Jan 10, 2026

"Facts In, Facts Out": What PR and Comms Pros Need to Know About the Global Push for AI Transparency

AI assistants are increasingly where people get their news. The problem: too often, the facts come out distorted, stripped of context, or misattributed. That erodes trust, both in journalism and in the brands quoted in those stories.

Global media bodies have launched "Facts In, Facts Out," a campaign pressing AI companies to make source transparency and responsible use of journalistic content a priority. It's being led by the European Broadcasting Union (EBU), the World Association of News Publishers (WAN-IFRA), and the International Federation of Periodical Publishers (FIPP).

What's driving the campaign

The BBC and EBU's News Integrity in AI Assistants report (June 2025) found a clear pattern: across countries, languages, and platforms, AI tools alter, decontextualize, or misuse news from trusted sources. That's a direct hit to accuracy and public confidence.

"For all its power and potential, AI is not yet a reliable source of news and information - but the AI industry is not making that a priority," said EBU Director of News, Liz Corbin. WAN-IFRA CEO Vincent Peyregne put it bluntly: "If AI assistants ingest facts published by trusted news providers, then facts must come out at the other end, but that's not what's happening today."

As more people, especially younger audiences, treat AI answers like headlines, the stakes rise for every communicator whose quotes, data, or brand mentions travel through these systems.

Why it matters for PR and Communications

Distorted output from AI assistants can misframe your message, misattribute your statements, or circulate outdated details at scale. That complicates both reputation management and crisis response.

Transparent sourcing isn't a newsroom-only issue. It's a brand safety issue. You need to know where your words end up, how they're presented, and whether the original source is visible and verifiable.

The five principles the campaign is asking AI companies to adopt

  • No consent, no content: Use news content in AI tools only with the originator's authorization.
  • Fair recognition: The value of trusted news content must be recognized when third parties use it.
  • Accuracy, attribution, provenance: The original source behind AI-generated content should be visible and verifiable.
  • Plurality and diversity: AI systems should reflect the diversity of the global news ecosystem.
  • Transparency and dialogue: Tech companies must engage openly with media to develop shared standards for safety, accuracy, and transparency.

Action steps for PR teams right now

  • Make your source of truth unmistakable: Ensure press releases, newsroom posts, and media kits clearly state the original source and include consistent bylines, timestamps, and canonical links.
  • Set your terms: Review and tighten content usage policies, including guidance on AI scraping, attribution, and consent for reuse.
  • Ask for attribution in writing: Bake source visibility and link-back requirements into contracts with platforms, syndication partners, and AI vendors.
  • Monitor AI answers: Track how major assistants summarize your news and statements. Flag inaccuracies early and document patterns.
  • Pre-write corrections and clarifications: Have a fast path to challenge misattribution or decontextualized quotes, including who to contact and what evidence to provide.
  • Strengthen media relationships: Share clear fact sheets and context notes with journalists to reduce room for distortion downstream.
  • Train your team: Build AI literacy so staff can spot issues, interpret model outputs, and respond effectively.
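The first action step, making your source of truth unmistakable, can be partly automated. The sketch below (a minimal illustration, not part of the campaign's guidance) checks whether a newsroom page exposes the attribution signals mentioned above: a canonical link, a publication timestamp, and an author. The specific tag names (`rel="canonical"`, `article:published_time`, `author`) are common web conventions assumed here, not a fixed standard, and the sample HTML is hypothetical.

```python
# Minimal sketch: verify that a newsroom/press-release page exposes the
# metadata crawlers and AI assistants need to attribute it correctly.
# Uses only the Python standard library.
from html.parser import HTMLParser


class SourceSignalChecker(HTMLParser):
    """Scan <head> tags for canonical link, timestamp, and author metadata."""

    def __init__(self):
        super().__init__()
        self.signals = {"canonical": False, "published_time": False, "author": False}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # <link rel="canonical" href="..."> marks the authoritative URL.
        if tag == "link" and a.get("rel") == "canonical" and a.get("href"):
            self.signals["canonical"] = True
        # Open Graph / standard meta tags carry timestamp and author.
        if tag == "meta":
            prop = a.get("property") or a.get("name") or ""
            if prop == "article:published_time":
                self.signals["published_time"] = True
            if prop in ("author", "article:author"):
                self.signals["author"] = True


def check_page(html: str) -> dict:
    """Return which source-of-truth signals the page exposes."""
    checker = SourceSignalChecker()
    checker.feed(html)
    return checker.signals


# Hypothetical press-release page for illustration.
sample = """
<html><head>
<link rel="canonical" href="https://example.com/newsroom/release-1">
<meta property="article:published_time" content="2026-01-10T09:00:00Z">
<meta name="author" content="Example Corp Communications">
</head><body>Press release body.</body></html>
"""
print(check_page(sample))
```

Running a check like this across your newsroom catches pages that quietly lack the provenance metadata the campaign's attribution principle depends on.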

What good looks like

Credible inputs, credible outputs. That's the core idea. If AI companies ingest verified reporting, the output should preserve the facts, attribute the source, and show the path back to the original.

The campaign isn't about blame. It's an open invitation to set shared standards so the public can access reliable journalism-no matter which tool they use.


Bottom line for PR: Treat AI source transparency as part of your brand's risk posture. Push for attribution, verify what the models say about you, and keep your source-of-truth airtight.

