Deloitte's Botched AI Report Undercuts Trust; Swift's Showgirl Stumbles, CBS/MSNBC Issue Ethics Rules, Domino's Debuts Jingle

Deloitte's Australia report used AI and included fake citations, prompting corrections and a partial refund. For PR, trust is the product: build AI guardrails and own fixes fast.

Categorized in: AI News, PR and Communications
Published on: Oct 09, 2025

Daily Scoop: Deloitte's AI mishap is a reputation case study for PR

Deloitte is under scrutiny after delivering a 237-page report to Australia's Department of Employment and Workplace Relations that contained fabricated court quotes, fake academic citations and untraceable references. The firm acknowledged incorrect footnotes and confirmed it used a generative AI model (Azure OpenAI GPT-4o) to draft parts of the document.

An academic at the University of Sydney publicly flagged the errors. Media reporting then verified the fabricated or incorrect citations, prompting the department to request fixes. Deloitte updated the report and agreed to a partial refund of its $440,000 contract, asserting the corrections did not change core findings.

Why this matters to communications leaders

Trust is your product. AI can speed up work, but it can also scale errors and damage credibility at the same pace.

Downplaying the issue with "the findings are unchanged" invites backlash. Clients, regulators and the public expect accuracy, ownership and clear corrective action, especially from firms that sell certainty.

Build an AI-safe publishing workflow (use this checklist)

  • Policy and ownership: Define approved AI use cases. Ban AI-generated citations and quotes. Assign a single accountable owner for every deliverable.
  • Source integrity: Require primary-source verification for every claim and citation. No unverified references. Keep PDFs, URLs and screenshots as proof.
  • Fact-check pipeline: Add second-editor review, SME sign-off and legal checks on sensitive material. Random audits before publication.
  • Clear disclosure: State where AI assisted and how human verification was done: on documents, in footnotes and in pitches.
  • Provenance logging: Maintain a structured citations log with timestamps and version history. Make it accessible for audits.
  • Red-teaming: Systematically test AI outputs for hallucinations. Maintain a "do not generate" list (legal analysis, medical claims, case law) unless verified by experts.
  • Data and security: Use compliant environments. Disable model training on your data. Limit sensitive uploads.
  • Contracts and safeguards: Add warranties, indemnities, audit rights and penalty clauses with vendors and partners.
  • Crisis playbook: If errors surface, respond within 24 hours: acknowledge, explain, correct, publish a corrections log and timeline, and commit to independent review.
  • Upskill the team: Train staff on AI limits, verification and disclosure. Explore role-based programs at Complete AI Training.
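For teams that want to operationalize the provenance-logging step above, here is a minimal sketch of a structured citations log. The schema (claim, source URL, reviewer, timestamp, content hash) is a hypothetical illustration, not a standard; adapt field names to your own audit requirements.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_citation(log_path, claim, source_url, verified_by):
    """Append one verified citation to a JSON-lines provenance log.

    Hypothetical schema: claim text, primary-source URL, the person
    who verified it, a UTC timestamp, and a content hash so later
    tampering with an entry is detectable in an audit.
    """
    entry = {
        "claim": claim,
        "source_url": source_url,
        "verified_by": verified_by,
        "verified_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the entry contents so edits after the fact are detectable.
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because entries are append-only and hashed, the log doubles as the version history and audit trail the checklist calls for.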

Messaging guidance if your org slips

  • Lead with accountability. Avoid hedging language. Say what happened, how it happened and what you fixed.
  • Publish a public corrections log and the verification standard you'll use going forward.
  • Offer an independent review and timeline. Share updated governance in plain language.
  • Brief internal teams first. Equip them with a concise Q&A so messages stay consistent.

Editor's Top Reads for PR pros

Taylor Swift's "The Life of a Showgirl": hype vs. product

The album shattered streaming and preorder records and drove box office for the tie-in film. Critics say the campaign's world-building set expectations the music didn't fully meet, and a brand strategist labeled the rollout a "flop" from an expectations standpoint, sales aside.

  • Run an "expectation stress test" before launch: map every claim and visual to what the audience will actually hear or get.
  • Pressure-test with a skeptical focus group. Look for gaps between promise and delivery.
  • Plan for blowback scenarios. Have messaging ready if the product leans in a different direction than the teaser narrative.

CBS and MSNBC publish new ethics guidance

With leadership changes and structural shifts, both outlets shared updated principles. MSNBC leans into traditional newsroom ethics (accuracy, transparency, independence, conflicts, AI disclosure), while CBS emphasizes using digital tools and a service-to-America framing.

When outlets publish their ethics, they're telling you how to pitch. Study the guidelines and tailor accordingly. See reporting and analysis at Nieman Lab.

  • Add ethics notes to your media lists (AI disclosure expectations, conflicts, source standards).
  • Include method and sourcing in your pitch brief. Offer documentation up front.
  • Proactively disclose AI assistance and verification steps in contributed content.

Domino's refresh: brighter identity and a new jingle

Domino's rolled out a sharper logo, new uniforms and its first jingle in 65 years, featuring Shaboozey. The goal: instant recall in short-form video feeds where attention is won in seconds. Coverage via CNN Business.

  • Test sonic branding across Reels/Shorts/TikTok. Aim for recognition in under two seconds.
  • Design assets for small screens first. Simplify shapes, color and motion.
  • Measure lift with ad-recall and brand-search deltas, not just views.

Bottom line

Trust is fragile. Put AI guardrails in place, align promise to product and pitch to outlet ethics, not assumptions. The orgs that win will publish with proof, fix fast and speak plainly.