AI power has a values problem: What PR and communications leaders can take from ABC chair Kim Williams
Kim Williams, chair of Australia's ABC, is bullish on using AI day to day - ChatGPT, Gemini, Perplexity - but blunt about the risks. He warns the tech can become "dangerous and sinister" when those funding and leading it hold "extremely autocratic" views.
His core point cuts to reputation and policy: technology reflects the values of its creators and controllers. If the people steering major AI firms prize control over open debate, your stakeholders will feel it in products, partnerships, and public discourse.
Why this matters for PR and comms
Williams defends democratic debate and warns against censoring opponents. He points to live examples of governments using AI in ways that limit freedom - a reputational minefield for any brand tied to the wrong vendor or model.
He's also clear on copyright: creators deserve to be paid. Australia rejected a text-and-data mining exemption for AI training. If your organization benefits from creative work, non-payment isn't just a legal problem - it's a brand trust problem.
Key takeaways you can use now
- Treat AI as a tool, not a belief system. Williams urges a disciplined approach, not a romantic one. Your policy, not vendor hype, should set the boundaries.
- Interrogate the values behind the model. Ask who funds it, who leads it, and what their views imply for moderation, political neutrality, and accountability.
- Protect creators and your reputation. If AI training used unlicensed content, you'll wear the blowback. Build licensing and attribution into your comms narrative.
- Expect uneven job impact. Williams sees heavier hits in entry-level accounting and law, but a net positive for sharp journalists who use AI to work smarter. Your workforce messaging should reflect that nuance.
Due diligence before you sign with any AI vendor
- Governance: Who's on the board? Any history of censorship, political activism, or conflicts of interest?
- Training data: Was creative content licensed? What's the audit trail? Will they indemnify you?
- Safety: Bias testing, red-team results, abuse handling, and an appeals process for takedowns or moderation.
- Provenance: Watermarking or content credentials for generated outputs; clear logs for fact-checking.
- Controls: Enterprise features for data isolation, opt-outs from training, and human-in-the-loop review.
- Compliance: Alignment with your jurisdiction's copyright and privacy laws.
Copyright and creator payments: your comms stance
Australia has not created a blanket text-and-data mining carve-out for AI. Williams argues creators "have a right to derive income" and that government should enforce it.
- Use licensed datasets or pay for usage. No grey areas in your talking points.
- Secure warranties and indemnities from vendors on training data provenance.
- Bake attribution, payment, and opt-out commitments into public statements and FAQs.
Helpful context: Copyright Agency (AU) and the OECD AI Principles.
Internal comms: set expectations without panic
- Be specific: where AI will assist (research, drafting, summarization) vs. where human judgment stays mandatory (final edits, sensitive topics, legal review).
- Highlight skills, not roles: Williams expects journalists to get stronger with AI. Apply the same principle to comms - better analysis, faster cycles, tighter quality control.
- Run pilots with guardrails and publish outcomes to build confidence.
Your AI usage policy (fast draft)
- No confidential, personal, or client-sensitive data in public models.
- All AI-assisted content gets human review and source verification.
- Disclose AI assistance where it affects consumer trust or regulatory requirements.
- Respect creator rights: use licensed inputs; track sources; pay where needed.
- Maintain an issues log for bias, errors, and takedown requests; document fixes.
Issues and crisis playbook
- Unlicensed training claim against your partner: pause campaigns, publish your licensing posture, and request vendor proof of rights.
- Model output injects bias or political spin: pull content, explain review controls, and show corrections with citations.
- Government pressure to suppress content: escalate to legal and policy leads; anchor your response in free expression and your published standards.
Media and brand partnerships: what to say
- Frame deals as service to the public interest: accuracy, transparency, and fair pay for creators.
- Disclose your safeguards: human oversight, provenance tools, and a clear correction pathway.
- Publish a short "AI facts" page that lists datasets, licensing approach, and moderation principles.
Skill up your team
If you want structured upskilling for PR and comms teams testing AI for research, drafting, and analysis, explore AI courses by job.
Bottom line: Williams' warning isn't anti-tech. It's a call for clear values, disciplined use, and respect for creators. That's exactly the playbook comms leaders need to keep trust intact while AI becomes part of daily work.