AI, Authenticity, and Trust: A Practical Playbook for PR and Communications
A message that sounds like it came from a machine can feel "off," and people notice. In PR and communications, that gut check matters more than ever. Authenticity and emotional intelligence are the work. If those slip, trust slips.
There's even a clinical name for the unease when someone familiar seems like an imposter: the Capgras delusion. That "ick" now shows up in inboxes, DMs, and feeds when AI sneaks into personal or sensitive messages without context.
The trust problem PR must solve
Public concern about manipulation through AI is high. Surveys show majorities are wary of AI's impact on daily life and trust in information.
At the same time, trust in people and institutions has trended down for decades. If audiences suspect AI is ghostwriting apologies, pitches, or personal notes, they'll discount both the message and the messenger.
For PR teams, the takeaway is simple: use AI for efficiency, but design communications so people still feel human intent, effort, and ownership.
Where AI quietly changes your work
- Press releases, pitch emails, and briefing books
- Executive letters, apology statements, and memorial notes
- Social posts, comment replies, and community management
- Images and video that look real enough to raise doubt
As generative tools improve, more content will feel "good enough." That's the risk. "Good enough" can read as generic, oddly emotionless, or too perfect, especially in high-stakes moments.
Principle: disclose, authenticate, and design for trust
You don't need to ban AI. You need policies that protect relationships.
1) Set clear disclosure rules
- Always disclose: apology statements, crisis updates, executive condolences, sensitive employee notes.
- Sometimes disclose: press releases, blog posts, and boilerplate copy, when AI drafted significant portions.
- Never leave undisclosed: personal outreach presented as purely human, like heartfelt letters or one-to-one reconciliation notes.
Sample line: "Drafted with AI assistance. Reviewed and approved by [name, title]." Keep it short and visible.
2) Add provenance and watermarking
- Adopt content provenance standards (e.g., C2PA) to attach creation metadata.
- Enable watermarking for AI images and video when the tool supports it.
- Store hashes and version history for audits and media inquiries.
This won't stop every bad actor, but it signals intent and gives your team artifacts when questions arise.
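The hash-and-version step above can be sketched as a minimal audit log. This is an illustrative sketch only; the function and field names are assumptions for this example, not part of C2PA or any standard:

```python
import hashlib
import time

def record_version(audit_log: list, content: str, author: str, ai_assisted: bool) -> dict:
    """Append a content version with a SHA-256 hash so audits and media
    inquiries can be answered with concrete artifacts.
    (Hypothetical field names; adapt to your own records system.)"""
    entry = {
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "author": author,
        "ai_assisted": ai_assisted,
        "timestamp": time.time(),
    }
    audit_log.append(entry)
    return entry

# Usage: log each draft as it moves through review.
log = []
record_version(log, "Draft 1: initial statement text", "comms lead", ai_assisted=True)
record_version(log, "Draft 2: reviewed statement text", "VP Comms", ai_assisted=False)
```

Storing only hashes (not full drafts) keeps the log lightweight while still proving which version was published and when.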
3) Build channel-level signals
- Add a short AI policy on your newsroom page.
- Note AI assistance in email footers for automated newsletters.
- Use post-level labels on owned channels when AI had a material role.
4) Keep humans in the loop
- Define "human sign-off" for all sensitive communications.
- Red-team critical messages for tone, empathy, and unintended claims.
- Create a short checklist: purpose, audience, source credibility, review path, disclosure decision.
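The checklist above can double as a hard gate in a publishing workflow. A minimal sketch, assuming the five items from the checklist as required fields (the names here are illustrative, not an established schema):

```python
# Required sign-offs drawn from the checklist: purpose, audience,
# source credibility, review path, disclosure decision.
REQUIRED_CHECKS = [
    "purpose",
    "audience",
    "source_credibility",
    "review_path",
    "disclosure_decision",
]

def ready_to_publish(checklist: dict) -> bool:
    """True only when every checklist item has been explicitly signed off."""
    return all(checklist.get(item) is True for item in REQUIRED_CHECKS)

# Usage: a draft with an unresolved disclosure decision stays blocked.
draft = {
    "purpose": True,
    "audience": True,
    "source_credibility": True,
    "review_path": True,
    "disclosure_decision": False,
}
assert not ready_to_publish(draft)
```

Making the disclosure decision an explicit field, rather than an afterthought, is the point: no message ships without someone answering it.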
5) Measure the trust impact
- Track replies that question authenticity and route them to comms leads.
- Run periodic sentiment checks with key stakeholders and media.
- A/B test disclosure lines; monitor engagement and complaints.
- Watch deliverability: spam filters can punish AI-like text patterns.
6) Train your team on AI etiquette
- Set rules for prompts, fact-checking, and voice consistency.
- Define "no-go" use cases where AI is off-limits.
- Teach detection basics for AI text, image, and video.
If you need structured upskilling for comms roles, see practical course lists by job at Complete AI Training.
Policy signals to watch
Federal guidance now requires U.S. agencies to disclose AI use in certain contexts, a model private organizations can borrow. Expect more discussion of provenance requirements and consumer transparency across industries.
For comms leaders, prepare now: align your policy with emerging standards, document your approach, and make it easy for audiences to understand how you use AI.
Spotting AI in the wild (and protecting your brand)
- Text: repetitive cadence, vague qualifiers, confident claims without sources, oddly formal empathy.
- Images/video: inconsistent lighting, distorted jewelry or hands, mismatched reflections, flickering details.
- Audio: smoothed breaths, abrupt phrasing, unusual sibilance.
Have a playbook for verification: provenance checks, reverse image/video search, internal SME review, and a rapid public correction path if something slips.
Keep a human core
Set "human-only" zones: apologies, crises, grief communications, and relationship repair. Require voice notes or live edits from leaders on high-stakes drafts to keep tone grounded.
AI can help with structure and speed. Your job is to keep the message unmistakably human.
Useful references
- Concern about AI and information quality: Pew Research Center
- Open content provenance standard: C2PA