Earned Media Leads Generative AI Citations as Press Release Citations Surge 5x, Muck Rack Report Finds

AI cites non-paid sources 94% of the time, with earned at 82%. Press release citations are up 5x since July 2025, especially via ChatGPT and Gemini.

Published on: Dec 04, 2025

Earned Media Still Drives AI Citations. Press Release Visibility Is Surging.

A new Muck Rack report shows generative AI still leans heavily on non-paid coverage. About 94% of citations come from non-paid sources, with earned media alone at 82%. At the same time, press release citations are up 5x since July 2025, boosted by higher pickup in ChatGPT and Gemini. If you work in PR or comms, this is your map for where to put your effort next.

Key stats at a glance

  • 94% of citations are from non-paid sources; earned media accounts for 82%.
  • Journalism is consistently influential, contributing 20-30% of citations.
  • There's a targeting gap: only 2% overlap between the most pitched journalists and the journalists most cited by AI for a brand.
  • Press release citations grew 5x since July 2025, driven by ChatGPT and Gemini.
  • Recency matters: half of citations come from content published within the last 11 months; ~4% come from the prior week.
  • Models are citing Wikipedia and large consulting firms less; third-party corporate/blog citations dropped from 37% to 24% since July.
  • Top-cited outlets vary by model: U.S. News & World Report, Nature, and Yahoo Finance (Claude); Reuters, The Verge, and The Guardian (ChatGPT); Forbes, Investopedia, and NerdWallet (Gemini).
  • Owned content performs best for precise, fact-based brand questions; for discovery queries, AI leans more on earned media and journalism.

Why press releases are getting cited more

AI models prefer releases with clear structure and useful detail. In short: make it easy for models to extract facts. The releases cited most often stand out on a few measurable traits (a rough self-audit sketch follows this list):

  • 2x as many statistics on average
  • 30% more action verbs
  • 2.5x as many bullet points
  • 30% higher rate of objective sentences
  • More unique companies and products mentioned
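
Here is a minimal, illustrative Python sketch of how a team might self-audit a draft against these structural signals before distribution. The regex heuristics, field names, and the audit_release helper are assumptions for illustration only; they are not part of Muck Rack's methodology or product.

```python
import re

# Rough structural self-audit of a press release draft.
# Heuristics are illustrative assumptions, not the report's methodology.

def audit_release(text: str) -> dict:
    lines = text.splitlines()
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]

    return {
        # Numbers and percentages as a crude proxy for statistics
        "statistics": len(re.findall(r"\d[\d,]*(?:\.\d+)?%?", text)),
        # Bullet points, which cited releases use far more often
        "bullets": sum(1 for l in lines if l.lstrip().startswith(("-", "*", "•"))),
        # Long sentences hurt scannability
        "sentences_over_30_words": sum(1 for s in sentences if len(s.split()) > 30),
        "total_sentences": len(sentences),
    }

if __name__ == "__main__":
    draft = (
        "Acme today reported Q3 revenue of $12.4M, up 38% year over year.\n"
        "- 2,100 new customers added\n"
        "- Net revenue retention of 118%\n"
        "The company plans to enter two new markets in 2026.\n"
    )
    print(audit_release(draft))
```

A low count of statistics or bullets in the audit output is a prompt to add concrete numbers and scannable structure before the release goes out.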

A practical playbook for PR and comms

  • Rebuild your media list around AI influence. Identify which journalists and outlets AI cites for your category and compare that list to who you pitch. Close the 2% overlap gap with targeted outreach.
  • Publish with recency in mind. Ship useful updates on a steady cadence and refresh high-performing explainers, FAQs, and spec pages. Aim for meaningful new information, not filler.
  • Optimize press releases for citation. Lead with facts, numbers, and outcomes. Use bullets for clarity, add clean subheads, keep adjectives light, and include named entities (products, partners, customers) where appropriate.
  • Prioritize earned for discovery, owned for facts. Pitch journalists for context and comparisons; use owned pages for exact claims, timelines, pricing, and technical details.
  • Distribute intelligently. Host releases on your site with clean markup and distribute via credible channels that AI models tend to read.
  • Measure what matters. Track AI citation share of voice, outlet/journalist overlap with your pitch list, recency index (median age of citations), and release citation rate by model (see the sketch after this list).
  • Monitor models, not just media. Citation behavior differs across ChatGPT, Claude, Gemini, and others. Review patterns monthly and adjust.
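
For the measurement bullet above, here is a minimal sketch of how those metrics could be computed from exported citation data. The record layout, field names, and sample values are hypothetical; map them to whatever your monitoring tool actually provides.

```python
from datetime import date
from statistics import median

# (model, cited source, mentions our brand?, publish date of cited content)
citations = [
    ("ChatGPT", "Reuters",      True,  date(2025, 9, 14)),
    ("Gemini",  "Investopedia", False, date(2025, 3, 2)),
    ("Claude",  "Nature",       True,  date(2024, 11, 20)),
]
pitch_list = {"Reuters", "The Verge", "Forbes"}  # who you actually pitch

# 1) Citation share of voice (simplified): share of sampled citations that mention the brand
share_of_voice = sum(1 for c in citations if c[2]) / len(citations)

# 2) Pitch-list overlap: the "2% gap" metric from the report
cited_sources = {c[1] for c in citations}
overlap = len(cited_sources & pitch_list) / len(cited_sources)

# 3) Recency index: median age, in days, of the content AI is citing
today = date(2025, 12, 4)
recency_days = median((today - c[3]).days for c in citations)

# 4) Citation counts by model, since engines behave differently
by_model: dict[str, int] = {}
for model, *_ in citations:
    by_model[model] = by_model.get(model, 0) + 1

print(share_of_voice, overlap, recency_days, by_model)
```

Tracking these four numbers monthly, per model, is enough to show whether the pitch-list gap and recency index are moving in the right direction.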

Models don't agree on sources

Source preferences shift by engine and over time. That's why your outreach mix should be diversified across outlets and formats. For reference, outlets frequently cited in the analysis include Reuters and Nature, among others. Treat each AI model as its own distribution channel with its own "media diet."

Quote worth noting

"Earned media still influences how AI understands brands. The gap between who PR teams pitch and who AI cites is striking-only a 2% overlap. Teams that close that gap first will have the metrics to prove their impact." - Greg Galant, cofounder and CEO of Muck Rack

Methodology (short version)

The study analyzed more than one million links cited by web-enabled AI models between July and December 2025. Researchers ran a large, varied prompt set across ChatGPT, Claude, Gemini, and Perplexity, then reviewed responses and linked sources with a consistent process. Models change frequently, so citation behavior may shift as vendors update systems.

Get the full report

Read Muck Rack's What Is AI Reading? December 2025 report, powered by Generative Pulse, for model-specific and industry-specific findings: generativepulse.ai/report.

If your team needs to upskill on AI search and GEO fundamentals, explore practical training paths by role: Complete AI Training - Courses by Job.

About Generative Pulse

Generative Pulse helps PR and communications teams monitor and define how their brands appear in AI-generated search results. Built around Generative Engine Optimization (GEO), it highlights which journalists, outlets, and sources influence models like ChatGPT, so teams can guide brand visibility in an AI-native search environment. It is integrated into Muck Rack's PR platform and backed by a $180M Series A financing.

About Muck Rack

Muck Rack provides award-winning PR software with accurate, comprehensive media data contributed by journalists themselves. The platform brings together global monitoring, reporting, collaboration, pitching, and measurement for nearly 6,000 companies. Thousands of journalists use Muck Rack's free tools to build portfolios, analyze news, and track impact. Learn more at muckrack.com.

Media contact

Bailey Mark
Senior Communications Manager
bailey.mark@muckrack.com

