PR Firms Face New Pressure to Disclose How They Use AI
The Bulleit Group, a San Francisco-based communications consultancy, is reintroducing its open-source AI policy as clients increasingly demand transparency about how PR agencies deploy generative AI in their work.
The policy, first developed in 2023, outlines standards for responsible AI use across research, messaging, content development, and media strategy. The firm has applied it across client programs and released it publicly to establish an industry standard.
The Accountability Gap
Generative AI is already embedded in PR workflows. Most firms use it for research acceleration, early-stage drafting, and content analysis. But clients often have limited visibility into which tools are being used, how data is handled, or what safeguards exist.
"AI is already influencing how companies are represented in the market," said Kyle Arteaga, CEO of The Bulleit Group. "The gap is not adoption. It's accountability. Most companies don't know how their PR agency is using AI or what controls are in place."
Companies vetting PR firms now ask direct questions: How is AI applied to messaging and media strategy? What risks does it introduce? What protections ensure accuracy and data security? Agencies unable to answer clearly risk losing business.
The Framework: Risk-Based Oversight
The Bulleit Group's approach uses a risk-and-reward ladder. Lower-risk uses, such as internal research, require limited oversight. Higher-risk uses tied to external communications demand strict human review and validation before distribution.
At the firm, AI may assist with early drafting or analysis. It does not replace strategic judgment or final deliverables. All external communications undergo rigorous review by experienced practitioners before reaching media.
Real Risks in PR
Unreviewed AI output creates tangible problems. Hallucinated information, undisclosed AI-generated content, and data exposure through improper tool use can introduce errors into media coverage and damage client credibility.
The framework emphasizes three core principles: transparency in AI use, clear accountability for final outputs, and safeguards protecting client data.
An Industry Standard, Not a Competitive Advantage
The Bulleit Group released the policy under a Creative Commons license. The firm's position: AI governance in PR should not be proprietary, particularly in a function that shapes public perception.
As AI governance shifts from optional to required, expectations now extend beyond model developers to agencies using AI in real-world client work.
The full policy is available at bulleitgroup.com/generative-ai/.