Use AI, Don't Hide It: Transparency, Values, and the New PR Playbook

Hidden AI content, shaky bots, and data risks are tripping up brands. Own it: label AI use, keep humans in the loop, and respond fast while staying true to your values.

Categorized in: AI News, PR and Communications
Published on: Nov 04, 2025

AI Crises on the Rise: How Brands Can Stay Transparent and True to Their Values

AI is showing up in comms plans, content pipelines, and customer touchpoints faster than many teams can govern it. In a recent discussion, Sarah Evans (Zen Media), Gab Ferree (Off the Record/Your Comms BFF), and Kaylee Hultgren (PRNEWS) outlined where brands are slipping and how PR can lead with clarity and control.

The three AI crisis patterns hitting brands now

  • Synthetic creative backlash: Audiences feel misled when AI-generated images, copy, or video are published without clear disclosure. The tension spikes when a brand's identity relies on human craft.
  • Automation gone sideways: AI agents and chatbots pushed live without guardrails. Think the McDonald's drive-through voice AI that melted down in public: great intent, rough execution.
  • Data and model risk: Breaches, biased outputs in recruiting/HR tools, and the constant hum of deepfakes and misinformation. These aren't just tech glitches; they're reputation hits.

Make disclosure your default

"Everyone is using AI" isn't a defense. As Sarah Evans noted, speed without governance is the issue. And as Gab Ferree put it: don't pretend you're not using it-be specific about how you are.

  • Disclose when AI meaningfully shaped creative, copy, images, or decisions.
  • Use clear labeling (e.g., "AI-assisted copy," "AI-generated visuals, edited by our team").
  • Avoid inflated claims about AI performance or accuracy; the FTC's truth-in-advertising guidance applies to AI claims.
  • Consider content provenance standards such as C2PA to verify and label assets.

Audit for brand-value fit before you ship

The J.Crew example raised by Ferree makes the point: if your brand stands for "authentic, American, crafted," an AI-heavy ad can feel off. Use this quick test:

  • Fit: Does AI strengthen or contradict your positioning?
  • Disclosure: Would labeling increase trust or create confusion?
  • Give-back: If AI saves budget or time, where does that value go (talent, training, community)? Tell that story.

Automation guardrails that prevent headline risk

  • Scope it: Define "no-go" tasks for AI (legal advice, medical claims, sensitive HR comms).
  • Human-in-the-loop: Require human review for any public-facing content or high-stakes decision.
  • Fail-safes: Set confidence thresholds, escalation to humans, and obvious opt-outs (see the sketch after this list).
  • Pilot quietly: Dark launch to limited audiences, then expand with monitoring.
  • Train for the edge cases: Scripts for misfires, hallucinations, or offensive outputs.
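
A minimal sketch of the confidence-threshold fail-safe, assuming a hypothetical bot that produces each draft reply with a confidence score and topic tags. The names here (DraftReply, CONFIDENCE_FLOOR, NO_GO_TOPICS, route_reply) are illustrative, not any vendor's API:

```python
# Sketch of a fail-safe gate for a customer-facing bot: no-go topics and
# low-confidence answers are escalated to a human instead of being sent.
# All names are hypothetical placeholders, not a specific product's API.

from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.80                                  # below this, escalate
NO_GO_TOPICS = {"legal advice", "medical claims", "sensitive hr"}

@dataclass
class DraftReply:
    text: str
    confidence: float      # calibrated confidence from the model or a scorer
    topics: set            # tags from an upstream topic classifier

def route_reply(draft: DraftReply) -> dict:
    """Decide whether the bot may answer or must hand off to a person."""
    if draft.topics & NO_GO_TOPICS:
        return {"action": "escalate", "reason": "no-go topic"}
    if draft.confidence < CONFIDENCE_FLOOR:
        return {"action": "escalate", "reason": "low confidence"}
    return {"action": "send", "text": draft.text}

# A shaky answer about a routine topic still goes to a human, not the customer.
print(route_reply(DraftReply("Sure, that's covered.", 0.55, {"billing"})))
```

The same gate can also honor an explicit opt-out, so a customer can always reach a person without having to trip a threshold first.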

Data and model risk controls your comms team should push

  • Tool inventory: List every AI system used across marketing, service, HR, and vendors.
  • Data minimization: Keep PII and sensitive data out of general-purpose models (a redaction sketch follows this list).
  • DPAs and red-teaming: Vendor agreements that cover data use; regular bias and abuse testing.
  • Content provenance: Watermarking and detection for your own outputs; playbooks to verify suspect media.
  • Incident-response (IR) playbook: A single, cross-functional plan for AI-related incidents: who leads, what to say, how to contain.
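
On the data-minimization point, one simple control is scrubbing obvious PII before any text reaches a general-purpose model. A rough sketch; the regex patterns here are illustrative only, and a production filter would rely on a vetted PII-detection tool:

```python
# Rough sketch: strip obvious PII before text is sent to an external model.
# The regexes are illustrative only; a production filter would use a vetted
# PII-detection library and cover far more cases.

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a labeled placeholder such as [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Customer Jane Doe (jane@example.com, 555-867-5309) reports a billing error."
print(redact(prompt))
# -> "Customer Jane Doe ([EMAIL], [PHONE]) reports a billing error."
```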

Message map for AI incidents

  • Acknowledge fast: What happened, who's affected, what you're doing right now.
  • Disclose AI's role: What was automated, what was not.
  • Own the oversight: Where the control failed and how you're fixing it.
  • Concrete remedies: Turn off features, increase human review, tighten data flows.
  • Make-good: Refunds, credits, direct outreach, or policy changes that reflect your values.

How to talk about AI without eroding trust

  • Publish a public AI use statement: What you use AI for, what you won't, and how you protect people.
  • Show the trade: If AI saved cost/time, show the reinvestment (training, creators, accessibility, customer support).
  • Credit humans: Call out writers, designers, photographers, and editors when they lead.
  • Use precise labels: "AI-edited photo" is clearer than vague "AI content."

Monitoring checklist for PR teams

  • Track sentiment on keywords tied to your brand plus "AI," "automation," "bot," "deepfake," "fake."
  • Watch creator and employee chatter; first signals show up there.
  • Instrument AI touchpoints: error rates, escalation volume, opt-out rates, and time-to-human (a metrics sketch follows this list).
  • Run quarterly drills: deepfake scenario, automation fail, and "hidden AI" disclosure crisis.
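
As an example of what instrumenting an AI touchpoint might look like, here is a sketch that computes the listed metrics from an event log. The log schema (outcome, opted_out, created_at, human_joined_at) is made up for illustration, not a specific platform's format:

```python
# Sketch: compute basic health metrics for an AI touchpoint from an event log.
# The schema below is a made-up example, not a specific platform's format.

from datetime import datetime, timedelta

interactions = [
    {"outcome": "resolved",  "opted_out": False, "created_at": datetime(2025, 11, 3, 9, 0),  "human_joined_at": None},
    {"outcome": "escalated", "opted_out": False, "created_at": datetime(2025, 11, 3, 9, 5),  "human_joined_at": datetime(2025, 11, 3, 9, 9)},
    {"outcome": "error",     "opted_out": True,  "created_at": datetime(2025, 11, 3, 9, 12), "human_joined_at": datetime(2025, 11, 3, 9, 20)},
]

total = len(interactions)
error_rate = sum(i["outcome"] == "error" for i in interactions) / total
escalations = sum(i["outcome"] == "escalated" for i in interactions)
opt_out_rate = sum(i["opted_out"] for i in interactions) / total

# Time-to-human: how long affected customers waited for a person to join.
waits = [i["human_joined_at"] - i["created_at"] for i in interactions if i["human_joined_at"]]
avg_time_to_human = sum(waits, timedelta()) / len(waits) if waits else None

print(f"error rate: {error_rate:.0%}, escalations: {escalations}, "
      f"opt-out rate: {opt_out_rate:.0%}, avg time-to-human: {avg_time_to_human}")
```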

Quotes to keep close

  • Sarah Evans: brands are "experimenting faster than they're governing." Treat AI as a reputation issue, not just a tech project.
  • Gab Ferree: don't hide AI; explain it and link it to your values. If you saved money, show where you invested it.

Where to go from here

These takeaways come from "AI-Driven Crises: Preparing for the New Risk Landscape," a session in PRNEWS PRO's Online Training Workshop "Brand Reputation and Crisis Comms in the AI Era." The full session is available through PRNEWS PRO.

If your team needs structured upskilling, browse role-based AI courses that support comms, content, and marketing teams at Complete AI Training.

