AI Energy Backlash at Work: Dos and Don'ts for Leaders
Rising AI use brings energy and water concerns that sap employee trust. This playbook helps comms leaders address the resulting guilt with transparency, efficiency, and ongoing updates.

AI, energy use, and employee trust: a playbook for PR and communications leaders
If you manage editorial or communications teams, you've heard the usual AI objections: it gets facts wrong, it's unclear what happens to data, and chatbots tend to agree with you. Those are solvable with human-in-the-loop checks, enterprise privacy, and prompts that ask for critique.
There's a newer objection you can't just prompt away: the environmental impact of AI's energy and water use. As adoption rises, so does the industry's footprint, and your team is reading those headlines.
The objection driving resistance
Even as hardware gets more efficient, demand from deep research, thinking models, and agents keeps energy needs high. Public comments that polite "please and thank you" messages cost real money didn't help perceptions. Data center build-outs are surging, and with them, concerns over electricity and water use.
For some employees, using AI now carries a sense of guilt. If they feel forced to use it, that guilt can turn into pushback, and then refusal.
What you're likely to see on your team
We're already seeing objections tied to net-zero goals in government and city IT groups. Journalists and media staff are raising the issue at conferences and trainings. In one large corporate comms team, 37% ranked energy use as their top AI concern, ahead of bias, hallucinations, sycophancy, and privacy.
That's a looming PR problem for any organization that wants to be "AI-forward." If employees think AI is a step backward for climate, adoption and advocacy stall.
Start with the solvable concerns
Keep addressing the foundational issues, because fixing them reduces waste and builds trust. Use human review where facts matter. Run AI through enterprise tools, APIs, and privacy settings. Prompt systems to critique output instead of flattering it.
Dos and don'ts for leaders
- Do listen for specificity. Are concerns about AI in general, a specific tool, or local impacts like water use? Precision tells you what to fix.
- Don't deflect. Comparing AI to cars or plastics sounds dismissive. Ask, "What's the AI equivalent of diesel upgrades or recycling, and how do we get there?"
- Do research transparently. Share credible updates on energy, carbon, and water use from major labs and vendors. One major provider reported steep efficiency gains per prompt year over year; that's context your team will want to see.
- Don't push "use less AI" as the plan. The worst move is avoiding AI where it actually solves the problem. Thoughtful use is better than bans or guilt.
- Do move repeat work into dedicated tools. When a workflow proves its value, build it on the most efficient model you can. Owning the compute bill motivates smart throttling and better design.
- Don't go silent. Keep the conversation going. Consider an internal energy counter that shows usage and efficiency per task over time (a minimal sketch follows this list).
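
Here is a minimal sketch of what that internal energy counter could look like, assuming you can log model name, task type, and token counts per call. The watt-hours-per-1,000-tokens figures and model names are placeholders to illustrate the arithmetic, not vendor disclosures.

```python
from collections import defaultdict
from dataclasses import dataclass

# Placeholder efficiency assumptions; replace with figures your vendors actually disclose.
WH_PER_1K_TOKENS = {"small-model": 0.3, "large-model": 3.0}

@dataclass
class Call:
    task: str      # e.g. "press-release-draft"
    model: str     # key into WH_PER_1K_TOKENS
    tokens: int    # prompt + completion tokens

def summarize(calls):
    """Aggregate runs, tokens, and estimated watt-hours per task type."""
    totals = defaultdict(lambda: {"runs": 0, "tokens": 0, "wh": 0.0})
    for c in calls:
        row = totals[c.task]
        row["runs"] += 1
        row["tokens"] += c.tokens
        row["wh"] += c.tokens / 1000 * WH_PER_1K_TOKENS.get(c.model, 1.0)
    return dict(totals)

# Illustrative log: two drafts on a small model, one research run on a large model.
log = [
    Call("press-release-draft", "small-model", 1800),
    Call("press-release-draft", "small-model", 1500),
    Call("deep-research", "large-model", 42000),
]
for task, row in summarize(log).items():
    print(f"{task}: {row['runs']} runs, {row['tokens']} tokens, ~{row['wh']:.1f} Wh")
```

Even a rough counter like this gives you a per-task efficiency trend you can publish internally, which matters more than the absolute numbers.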
Comms tactics you can deploy now
- Create a message map. Key points, proof, and phrasing for executives, managers, and individual contributors. Include what you know, what you're testing, and what comes next.
- Draft an internal FAQ. Cover how tools are chosen, what privacy controls exist, where human review sits, and how you're tracking efficiency.
- Add sustainability criteria to vendor reviews. Ask for regional carbon intensity, power and water usage effectiveness (PUE/WUE), model efficiency metrics, and plans for improvement.
- Publish a quarterly update. Model mix, estimated compute per task, reduction in redundant runs, and examples of "same output, fewer tokens."
- Prep external Q&A. Be ready for: "Is your AI use increasing emissions?" "How do you prevent wasteful use?" "What are you doing about water?"
- Train for efficient prompts and workflows. Encourage critique-oriented prompts, reusable templates, and clear stop rules to avoid unnecessary regenerations.
- Run pilot benchmarks. Compare tasks across models for quality, speed, and compute, then standardize on the most efficient model that meets the bar (see the sketch after this list).
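
A minimal sketch of that pilot-benchmark selection step, assuming you already have per-model results from a shared test set: a reviewer quality score, median seconds per task, and average tokens per task. The model names, scores, and quality bar below are illustrative assumptions.

```python
QUALITY_BAR = 0.8  # assumed minimum acceptable reviewer score (0-1)

results = [
    # (model, reviewer_quality, seconds_per_task, tokens_per_task)
    ("small-model", 0.82, 4.0, 1600),
    ("mid-model",   0.88, 7.5, 2400),
    ("large-model", 0.91, 15.0, 5200),
]

# Keep every model that clears the quality bar, then standardize on the one
# with the smallest token (compute) footprint.
eligible = [r for r in results if r[1] >= QUALITY_BAR]
winner = min(eligible, key=lambda r: r[3])
print(f"Standardize on {winner[0]}: quality {winner[1]}, ~{winner[3]} tokens/task")
```

The design choice is deliberate: quality is a hard gate, and compute is the tiebreaker, so efficiency never comes at the cost of output your reviewers would reject.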
What to say when asked, "Are we making climate worse?"
- We use AI where it increases useful output per unit of compute.
- We standardize on efficient models and reduce redundant runs.
- We pick vendors that disclose energy and water metrics and show a path to improvement.
- We track usage and share progress internally, including where we fall short.
- We design workflows that keep humans in the loop to prevent rework.
Metrics that matter
- Tasks automated and time saved per task
- Average tokens and calls per task (baseline vs. current)
- Model mix by use case (small vs. large models)
- Estimated kWh and emissions using vendor disclosures (a rough estimation example follows this list)
- Water-use proxies by region if available
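
A minimal sketch of the kWh and emissions estimate, assuming a vendor-disclosed energy-per-prompt figure and a regional grid carbon-intensity factor. All numbers below are placeholder assumptions for illustration, not real disclosures.

```python
prompts_per_quarter = 120_000
wh_per_prompt = 0.3            # assumed vendor disclosure (watt-hours per prompt)
grid_kg_co2_per_kwh = 0.4      # assumed regional grid carbon intensity

kwh = prompts_per_quarter * wh_per_prompt / 1000   # watt-hours to kilowatt-hours
kg_co2 = kwh * grid_kg_co2_per_kwh
print(f"~{kwh:.0f} kWh, ~{kg_co2:.0f} kg CO2e this quarter")
```

Pair the estimate with its inputs when you publish it, so the team can see exactly which disclosures and assumptions it rests on.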
The real risk isn't shadow AI; it's refusal
People feel small next to trillion-dollar companies and government policies. That can turn into "why bother," which slows adoption and kills momentum.
Your job is to channel that concern into efficiency, measurement, and clear updates. If you do, you'll keep trust high and waste low.
Next step
If your PR or comms team needs practical AI workflows, policy templates, and hands-on practice with efficient prompting, explore these resources: