PRCA publishes AI Green Paper to set the standard for responsible AI in PR and communications
The Public Relations and Communications Association (PRCA) has released its AI Green Paper: Setting the Standard for Responsible AI - a practical guide to help PR teams use artificial intelligence ethically, transparently, and with confidence.
AI is already embedded across the function: research and insight, content production, media monitoring, and evaluation. The Green Paper acknowledges the upside (speed, creativity, scale) and tackles the risks head-on: public trust, reputation protection, and professional standards.
Developed by the PRCA PR and Communications Board Working Group, the paper is authored by Rebecca Broomfield, Stuart Bruce, Stephen King, Catherine Lane and Claire Williamson, shaped by input from agency and in-house practitioners.
The message is clear: AI use is widespread, but formal governance, policy, and confidence are lagging. Legal exposure, data security, and reputational risk sit at the top of the worry list - especially for smaller teams without specialist support.
What the Green Paper covers
- The current state of AI in PR: How practitioners are using generative, applied, and embodied AI today across research, content, media, and measurement.
- Strengthening public trust: Ethics, governance, disclosure, and the profession's role in securing AI's social licence.
- Building confidence and competence: Practical guidance on governance frameworks, training programmes, and real-world use cases.
- Future readiness and resilience: Skills shifts, capability building, and avoiding a two-tier industry divided by access and expertise.
Why it matters for PR and comms leaders
AI will make your team faster, but your stakeholders will judge you on judgement, not speed. The Green Paper offers a route to responsible adoption that protects reputation and strengthens trust with clients, media, and the public.
- Governance first: Put policy, process, and accountability in place before scaling tools.
- Human in the loop: Keep editorial control, context, and ethics as mandatory checkpoints.
- Evidence over hype: Prioritise measurable outcomes and audit trails over tool-of-the-week experiments.
Action checklist you can implement this quarter
- Draft an AI policy covering approved use cases, disclosure, data handling, and human review.
- Create an AI risk register (legal, privacy, IP, bias, accuracy) with owners and mitigation steps.
- Set up a lightweight model/content QA process for factual checks, tone, and compliance.
- Introduce vendor due diligence: data retention, training data sources, security, and indemnities.
- Train teams on prompt craft, verification, and disclosure - with role-based playbooks.
- Define evaluation metrics: time saved, accuracy rates, outcomes, and incident reports.
- Plan incident response for AI-related issues: misinformation, data leaks, or model errors.
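To make the checklist concrete, the risk register above can start as a simple structured list rather than a heavyweight tool. Here is a minimal sketch in Python; the field names and example entries are illustrative assumptions, not taken from the Green Paper:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a lightweight AI risk register."""
    category: str      # e.g. legal, privacy, IP, bias, accuracy
    description: str   # what could go wrong
    owner: str         # named person accountable for this risk
    mitigation: str    # agreed mitigation steps
    status: str = "open"

# Illustrative entries only; a real register is maintained by the AI lead.
register = [
    Risk("privacy", "Client data pasted into public AI tools",
         "Head of Operations", "Approved-tools list; no client data in free tiers"),
    Risk("accuracy", "Fabricated quotes or statistics in AI drafts",
         "Editorial Lead", "Mandatory human fact-check before publication"),
]

def open_risks_by_category(risks, category):
    """Filter open risks in one category, e.g. for a quarterly review."""
    return [r for r in risks if r.category == category and r.status == "open"]
```

Reviewing the register by category each quarter keeps ownership visible and makes it easy to spot categories with no mitigation in place.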
Living document with community input
The Green Paper is built as a living resource with optional reflection questions throughout. Practitioner feedback will shape future updates and a dedicated webinar, ensuring the guidance stays practical as technology, regulation, and expectations move.
Quote: "Artificial intelligence is already transforming how PR and communications professionals work. The opportunities are real, but so are the risks. This Green Paper sets out how our profession can embrace AI responsibly, with human judgement, accountability and transparency at its core. It is about building confidence, capability and trust, not just efficiency." - Sarah Waddington CBE, PRCA CEO
Practical next steps
- Assign an AI lead and a small cross-functional working group (PR, legal, IT, HR).
- Map current AI use (formal and shadow tools) and close immediate gaps in policy and training.
- Pilot two to three high-value use cases with clear safeguards, then scale what works.
- Document disclosures for AI-assisted content and keep a transparent audit trail.
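The disclosure audit trail in the final step can be as lightweight as an append-only log. A minimal sketch in Python follows; the record fields and file name are illustrative assumptions, not a PRCA-specified format:

```python
import json
from datetime import datetime, timezone

def log_disclosure(path, asset, tool_use, human_reviewer, notes=""):
    """Append one AI-assistance disclosure record to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "asset": asset,                    # e.g. press release file or URL
        "tool_use": tool_use,              # which AI tool, used for what
        "human_reviewer": human_reviewer,  # who signed off before release
        "notes": notes,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Illustrative usage:
rec = log_disclosure("ai_disclosures.jsonl",
                     "q3-launch-release.docx",
                     "LLM first-pass draft of body copy",
                     "Editorial Lead")
```

An append-only, timestamped log means the team can answer "who reviewed this, and when?" for any AI-assisted asset without reconstructing it from memory.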
Further resources
- Public Relations and Communications Association (PRCA)
- UK ICO: Guidance on AI and data protection
- AI Learning Path for Public Relations Specialists
Bottom line
Responsible AI isn't a nice-to-have. It's a reputational shield and a performance system in one. Use the PRCA Green Paper to formalise how your team builds, tests, and governs AI - and keep trust at the centre of your work.