"Preserving Human Voices and Faces": Why Pope Leo XIV's Message Matters for PR and Communications
Pope Leo XIV closed the Week of Prayer for Christian Unity in Rome with a clear warning: do not let technology - especially AI - drown out human voices, judgment, and creativity. His message for the 60th World Day of Social Communications centers on one idea: the challenge is not technological, but human. Protecting faces and voices ultimately means protecting ourselves.
The message was released Jan. 24, on the feast of St. Francis de Sales, patron of journalists. The observance is set for May 17, the Sunday before Pentecost, under the theme "Preserving human voices and faces." For anyone leading PR and communications, this is more than a reflection. It's a policy brief for the year ahead.
The risks he flagged - and what they signal for comms teams
Engagement-first algorithms box people into easy consensus and outrage, punishing thoughtful expression. That erodes listening, weakens critical thinking, and deepens polarization - exactly what trust-building work is trying to counter.
A naive reliance on AI as an all-knowing "friend" quietly erodes our capacity to think, create, and interpret meaning. If machines take over text, music, and video, masterpieces become mere training data, and audiences get fed "unthought thoughts" with no authorship or love behind them.
Bots and virtual influencers can tilt debates and shape choices. Anthropomorphized chat tools can mimic affection and exploit the need for connection, especially among vulnerable people.
AI hallucinations and synthetic media make facts harder to verify, raising the stakes for journalism and brand integrity. With a handful of companies holding massive datasets and attention, subtle influence can rewrite memory - including institutional history.
What to do now: a practical playbook for PR and communications
- Label synthetic and AI-assisted content clearly, every time. Use content provenance standards (see C2PA) and keep human bylines visible.
- Protect authorship and consent. Lock policies for voice, face, image, and likeness; ban deepfakes; require written permissions; defend copyrights.
- Adopt a human-in-the-loop rule. No AI-generated copy or visuals go live without human review, especially on safety, health, legal, faith-related, or otherwise sensitive topics (a sample pre-publish gate follows this list).
- Recalibrate metrics. Prioritize accuracy, clarity, and audience trust over raw engagement. Track corrections, source transparency, and sentiment stability as success signals.
- Institute source verification protocols. Require primary sources, cross-checks, and on-the-ground confirmation where possible; flag uncertain claims, or hold the story.
- Set a bot and automation policy. Disclose any automated accounts or assistants; forbid fake engagement and undisclosed synthetic personas.
- Prepare a deepfake response kit. Prewrite holding statements, publish a verification hub, and designate third-party validators to confirm audio/video authenticity.
- Audit creative workflows. Keep your team generating first drafts and strategic ideas; use AI for support work (summaries, transcription, variants), not to replace voice.
- Build data and model governance. Track which tools touch which assets, set retention and privacy rules, and document approvals for AI use.
- Invest in media and AI literacy. Train teams and spokespeople to question outputs, check biases, and explain how AI is used in your content pipeline.
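As a minimal sketch of what the labeling and human-in-the-loop rules above could look like in practice, here is a pre-publish gate in Python. The field names (ai_assisted, provenance_manifest, human_reviewer) and the topic list are illustrative assumptions, not a standard; adapt them to your own CMS and written policy.

```python
from dataclasses import dataclass

# Hypothetical pre-publish gate. Field names and rules are illustrative
# assumptions, not a standard; adapt them to your CMS and house policy.

SENSITIVE_TOPICS = {"safety", "health", "legal", "faith"}

@dataclass
class ContentItem:
    title: str
    topics: set[str]
    ai_assisted: bool           # any AI-generated or AI-manipulated material?
    ai_label_present: bool      # visible disclosure label on the asset
    provenance_manifest: bool   # e.g. a C2PA Content Credentials manifest attached
    human_reviewer: str | None  # sign-off of the human editor, if any

def publish_blockers(item: ContentItem) -> list[str]:
    """Return every policy violation that must be cleared before going live."""
    blockers = []
    if item.ai_assisted and not item.ai_label_present:
        blockers.append("AI-assisted content must carry a clear label")
    if item.ai_assisted and not item.provenance_manifest:
        blockers.append("attach a provenance manifest (e.g. C2PA)")
    if item.ai_assisted and item.human_reviewer is None:
        blockers.append("AI-generated material needs a named human reviewer")
    if item.topics & SENSITIVE_TOPICS and item.human_reviewer is None:
        blockers.append("sensitive topic: human sign-off is mandatory")
    return blockers

draft = ContentItem(
    title="Clinic partnership announcement",
    topics={"health"},
    ai_assisted=True,
    ai_label_present=False,
    provenance_manifest=True,
    human_reviewer=None,
)
for blocker in publish_blockers(draft):
    print("BLOCKED:", blocker)
```

The point is the shape, not the fields: encode the written policy as a check your publishing pipeline runs automatically, so the rules in this list are enforced rather than remembered.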
Policy anchors to put in writing
- Transparency is non-negotiable: "Content generated or manipulated by AI must be clearly labeled."
- Editorial independence over algorithmic incentives: do not let "a few extra seconds of attention" override professional standards.
- Information is a public good: protect sources, include stakeholders, and keep quality high.
Why this is good strategy, not just ethics
Trust compounds. Clear labels, careful sourcing, and real human voices create a signal audiences can rely on when feeds are filled with noise. That signal is your moat.
The point is not to stop innovation, but to guide it with responsibility, cooperation, and education. Tools should serve the team, not replace its judgment.
Next steps you can take this quarter
- Publish a short AI disclosure and provenance page on your site; link it in press footers and newsroom posts.
- Run a "red team" drill on a deepfake scenario involving your CEO or a key spokesperson.
- Stand up a two-tier review for sensitive communications: factual verification and meaning/impact review.
- Launch a lightweight training track for your staff and spokespeople on AI, prompts, and verification.
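One concrete way to support the verification hub and the deepfake drill is to publish cryptographic hashes of your official audio and video, so third-party validators can confirm what is genuinely yours. A minimal sketch follows; the folder and file names (official_media, suspect_clip.mp4) and the JSON manifest format are assumptions for illustration.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a media file in chunks so large videos do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(media_dir: Path) -> dict[str, str]:
    """Map each official asset to its hash; publish this on your verification hub."""
    return {p.name: sha256_of(p) for p in sorted(media_dir.glob("*.mp4"))}

def is_official(clip: Path, manifest: dict[str, str]) -> bool:
    """A clip is confirmed only if its hash matches a published official asset."""
    return sha256_of(clip) in manifest.values()

if __name__ == "__main__":
    manifest = build_manifest(Path("official_media"))  # hypothetical folder
    Path("verification_manifest.json").write_text(json.dumps(manifest, indent=2))
    suspect = Path("suspect_clip.mp4")  # hypothetical file under review
    if suspect.exists():
        print("official" if is_official(suspect, manifest) else "unverified: escalate")
```

Note the limit of this approach: exact-hash matching confirms only bit-identical copies. A genuine clip that has been re-encoded will not match, so treat a miss as "unverified, escalate to human review," never as proof of forgery.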
The message is simple and demanding: keep people at the center. Use powerful tools, but do not outsource your voice, your face, or your responsibility.