AI-Written Press Releases Take a Credibility Hit, Study Finds
New research from the University of Kansas is a warning shot for PR teams: audiences see AI-authored news releases as less credible and less effective than human-written ones. The difference isn't about style tweaks. It's about trust.
That matters in a climate where trust in media is already fragile. External shocks and political rhetoric have primed the public to doubt institutional messages, so anything that signals automation can make it worse.
What the researchers tested
The study ran a simple split: half of the participants were told a crisis press release was written by a human; the other half were told it was written by AI. The release concerned a fictional chocolate company whose products sickened consumers after employee tampering.
- Authorship cue: Human vs. AI.
- Message frame: Informational, apologetic, or sympathetic.
- Two core questions: Can people detect AI? Do perceptions change when authorship is disclosed?
The team summarized their core question: even if people can't reliably spot AI writing, do they rate it differently when it's labeled as AI? The answer was yes.
Key findings you can act on
- Human attribution increased perceived credibility and effectiveness.
- Informational, apologetic, and sympathetic frames showed no meaningful differences in effectiveness.
- Even when participants favored the human-attributed release, they didn't rate it as more sympathetic.
Translation: the "who" beats the "how." In crisis communication, authorship cues drive trust more than tone.
Why this matters for PR and comms leaders
Press releases, especially in crises, stand on credibility, accountability, and clear ownership. If your audience believes a machine wrote the message, they're less likely to trust it. And no amount of apology wording seems to offset that hit.
This aligns with broader trust trends. Audiences are sensitive to signals that a message is impersonal or automated. See the Edelman Trust Barometer and long-term data on media trust from Pew Research Center.
Policy guidance for AI use in PR
- Human ownership: Put a human on the record. Use a named leader as the visible author and spokesperson.
- AI behind the scenes: If you use AI, keep it as a drafting or research tool, not the face of the message.
- Attribution discipline: Avoid "written by AI" labels on external releases. If disclosure is required, emphasize human review and approval.
- Crisis rule: For high-stakes messages, prioritize fully human-written copy with leader quotes and signatures.
- Single source of truth: Publish statements on owned channels with a recognizable human author and consistent updates.
- Legal and ethics: Build a review chain (legal, compliance, security) that's explicit about AI's role and data handling.
- Measurement: A/B test authorship signals and track shifts in perceived credibility, not just clicks.
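The measurement point above can be made concrete with a simple two-group comparison. A minimal sketch, assuming you have collected Likert-scale credibility ratings from an A/B test of authorship signals; the ratings and group sizes below are invented for illustration, not data from the study:

```python
# Hypothetical A/B test readout: compare perceived-credibility ratings
# (1-7 Likert) between a human-attributed and an AI-attributed release.
# Uses only the standard library; real analyses would likely use
# scipy.stats or a survey platform's built-in statistics.
import math
from statistics import mean, variance

# Made-up ratings for illustration only.
human_attributed = [6, 5, 6, 7, 5, 6, 6, 5, 7, 6]
ai_attributed    = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances)."""
    va, vb = variance(a), variance(b)  # sample variances
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / math.sqrt(va / na + vb / nb)

t = welch_t(human_attributed, ai_attributed)
print(f"mean(human)={mean(human_attributed):.2f}  "
      f"mean(AI)={mean(ai_attributed):.2f}  t={t:.2f}")
```

A large positive t here would indicate the human-attributed version was rated more credible; with real data you would also report a p-value and an effect size, and track the gap over successive releases rather than a single test.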
Practical checklist for crisis releases
- Lead with accountability: who is responsible, what happened, what you're doing now, what changes next.
- Use plain language: strip the corporate varnish. Short sentences. Concrete actions. Dates and specifics.
- Human signature: CEO or relevant executive quotes that own the issue and set timelines.
- Proof of action: remediation steps, third-party audits, compensation or recalls, and a clear follow-up plan.
- Consistent cadence: commit to your next update time and show up.
Where AI can help without eroding trust
- Research and synthesis: timelines, incident maps, stakeholder questions, regulator requirements.
- Draft variants: generate options to speed internal review, but let humans finalize voice and accountability.
- Media prep: anticipate tough questions and craft human-approved responses.
- Monitoring: flag sentiment and misinformation for the team to address with human statements.
The line is simple: AI can accelerate workflow. Humans must own the message.
Message framing insights you can stop overthinking
The study found no meaningful difference across informational, apologetic, and sympathetic frames. That doesn't mean tone is irrelevant. It means tone alone won't fix a trust gap created by AI authorship.
In practice, combine clarity (facts and actions) with accountability (who's responsible) and compassion (who was harmed and what support they'll get). Then deliver it from a human leader.
How to brief your team
- Define "AI acceptable use" for content: where it's allowed, where it isn't, and required human approvals.
- Create AI-to-human handoffs: prompts, draft limits, red flags that trigger legal/compliance review.
- Standardize authorship: public-facing releases list a human author; internal logs record AI assistance.
- Train for speed with judgment: tools are useless without editorial standards and crisis protocols.
Bottom line
People don't want a bot speaking for you, especially in a crisis. They want a responsible person with facts, actions, and a timeline.
Use AI to research, draft, and monitor. Put a human in front of the message to protect credibility.
Further reading
- Edelman Trust Barometer
- Pew data on U.S. trust in media