"Facts In, Facts Out": Media call on AI companies to make sources transparent
Jan 10, 2026
AI tools are becoming a front door to the news. The problem: what goes in as verified reporting often comes out distorted or stripped of context. An international campaign, "Facts In, Facts Out," is urging major AI companies to take responsibility for transparency, attribution, and fair use.
The effort is led by the European Broadcasting Union (EBU), the World Association of News Publishers (WAN-IFRA), and the International Federation of Periodical Publishers (FIPP). It draws on the BBC/EBU study published in June 2025, which found that AI assistants, across markets and languages, regularly alter or misuse content from trusted media sources.
"For all its power and potential, AI is not yet a reliable source of news and information - but the AI industry is not making that a priority," said EBU Director of News, Liz Corbin. WAN-IFRA CEO Vincent Peyregne added: "If AI assistants ingest facts published by trusted news providers, then facts must come out at the other end, but that's not what's happening today."
Why this matters for PR and Communications
More people, especially younger audiences, now ask AI tools for news and quick answers. If those systems remix your organization's statements without source context, you face misquotes, brand confusion, and a slow leak of trust.
This isn't just a media-industry issue. It affects how your announcements, crisis updates, and expert commentary are surfaced, cited, and remembered.
The five principles behind "Facts In, Facts Out"
- No consent, no content: Use of news content in AI tools requires the originator's authorization.
- Fair recognition: The value of trusted news content must be recognized when used by third parties.
- Accuracy, attribution, provenance: The original source behind AI outputs must be visible and verifiable.
- Plurality and diversity: AI systems should reflect the diversity of the global news ecosystem.
- Transparency and dialogue: Tech companies must work openly with media to set standards for safety, accuracy, and transparency.
Practical steps for comms teams now
- Update rights and licensing: Make terms explicit on your site for AI training, summarization, and reuse, and provide contact info for permissions (see the robots.txt sketch after this list).
- Strengthen attribution signals: Use consistent bylines, clear datelines, canonical URLs, and linkable source pages for statements and fact sheets (see the markup sketch after this list).
- Publish source-of-truth hubs: Maintain a newsroom page with FAQs, timelines, and citations that AI tools can reference.
- Add provenance cues: Ensure press assets (PDFs, images, video) include visible source info and persistent URLs.
- Monitor AI assistants: Set a weekly check of major assistants for your brand, exec names, and key topics. Log inaccuracies and request corrections (see the logging sketch after this list).
- Create an escalation path: Define who contacts platforms, how evidence is documented, and expected SLAs for fixes.
- Engage industry bodies: Coordinate with publishers and associations aligned to these principles for stronger leverage.
- Train your team: Brief spokespeople and social teams on how AI distorts context and how to respond quickly.
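One concrete way to make reuse terms machine-readable is a robots.txt policy for AI-training crawlers. Below is a minimal sketch in Python that appends disallow rules for the crawler tokens GPTBot (OpenAI), Google-Extended (Google), and CCBot (Common Crawl); these are the publicly documented names at the time of writing, so verify current tokens before deploying, and remember that robots.txt states a policy rather than technically enforcing one.

```python
# append_ai_rules.py -- a minimal sketch: append AI-crawler directives to robots.txt.
# GPTBot, Google-Extended, and CCBot are the publicly documented crawler tokens
# at the time of writing; verify current names before deploying. robots.txt
# expresses a policy -- it does not block a crawler that ignores it.

AI_CRAWLER_RULES = """\
# AI-training crawlers: adjust per your licensing policy
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
"""

with open("robots.txt", "a", encoding="utf-8") as f:
    f.write("\n" + AI_CRAWLER_RULES)
```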
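To strengthen attribution signals, statement pages can carry schema.org NewsArticle markup with a canonical URL and dateline, which gives AI systems a structured source to cite. A minimal Python sketch that renders the JSON-LD script tag; the organization name and URLs (Example Corp, example.com) are hypothetical placeholders:

```python
# jsonld_newsarticle.py -- a minimal sketch: render schema.org NewsArticle JSON-LD
# for a press statement page. Values below are illustrative placeholders.
import json

def newsarticle_jsonld(headline, url, author_org, date_published):
    """Return a <script> tag embedding NewsArticle markup for a statement page."""
    data = {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "url": url,                      # canonical URL of the statement
        "datePublished": date_published, # ISO 8601 dateline
        "author": {"@type": "Organization", "name": author_org},
        "publisher": {"@type": "Organization", "name": author_org},
    }
    return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

print(newsarticle_jsonld(
    "Company statement on Q3 results",            # hypothetical headline
    "https://example.com/newsroom/q3-statement",  # hypothetical canonical URL
    "Example Corp",
    "2026-01-10",
))
```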
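For the weekly monitoring step, even a simple evidence log beats ad-hoc screenshots when you escalate a correction. A minimal sketch, assuming manual spot checks recorded to a CSV; the assistant name, query, and fields are illustrative, not tied to any vendor API:

```python
# ai_monitor_log.py -- a minimal sketch for the weekly assistant check:
# record each manual spot-check in a CSV so inaccuracies can be escalated
# with evidence. Assistant names, queries, and fields are illustrative.
import csv
from datetime import datetime
from pathlib import Path

LOG = Path("ai_assistant_checks.csv")
FIELDS = ["checked_at", "assistant", "query", "source_cited", "accurate", "notes"]

def log_check(assistant, query, source_cited, accurate, notes=""):
    """Append one manual spot-check result to the log."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "checked_at": datetime.now().isoformat(timespec="seconds"),
            "assistant": assistant,
            "query": query,
            "source_cited": source_cited,  # did the answer link/name your page?
            "accurate": accurate,          # True/False per your fact check
            "notes": notes,                # e.g. misquote, stale date, wrong exec
        })

# Hypothetical example entry:
log_check("Assistant A", "What did Example Corp announce this week?",
          source_cited=False, accurate=False,
          notes="Paraphrased CEO quote without attribution; request correction.")
```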
What to ask your AI vendors and partners
- How do you display source attribution and original links in answers?
- Can publishers and brands opt in or opt out of training and summarization?
- What safeguards reduce decontextualization and fabrication for news-related queries?
- How can we report misattribution, and what's the correction timeline?
- Do you provide usage and referral metrics back to source publishers?
- What indemnities and compliance assurances are in place around content rights?
Messaging for leadership
- We support open dialogue with AI companies to protect accurate, source-linked information.
- We'll label and structure our content so it's easy to attribute and verify.
- We're monitoring AI outputs and correcting errors to safeguard public trust.
- We'll collaborate with industry groups to set clear standards that benefit audiences.
Learn more
Explore the campaign's context and ongoing work from the EBU, WAN-IFRA, and FIPP.
If you're upskilling your comms team on responsible AI and source transparency, see our curated options by role: AI courses by job.
Bottom line
As AI tools mediate more news consumption, the gap between verified reporting and what audiences see will widen unless platforms commit to visible sources, accurate context, and fair recognition. "Facts In, Facts Out" is an open invitation to build those standards together. Your policies, metadata, and monitoring can make that shift real, right now.