AI is the new workplace fault line
Leaders are pushing hard on AI. Employees aren't convinced. That tension is now a real execution risk, not just a culture issue.
A new report from Checkr, based on 3,000 workers split evenly between managers and employees, shows a widening gap in pressure, expectations, usage, and trust. If you manage a team, this gap is yours to close.
The numbers you can't ignore
- Pressure: 64% of managers feel pressure to adopt AI to stay competitive. Only 38% of employees feel the same, while 36% feel no pressure at all.
- Ownership: Nearly 40% of managers say leadership is driving AI. But 34% of employees don't know who owns it.
- Expectation setting: 58% of managers see AI as an unspoken performance requirement. Just 29% of employees agree, and 37% are unsure what's expected.
- Actual usage: 45% of managers believe people are using AI regularly. Only 18% of employees see it that way.
- Trust: 40% of managers often or almost always trust AI outputs. 59% of employees rarely or never do.
Why the gap exists
Managers are automating admin work and using AI to speed decisions. That's visible leverage. Employees save time in places, but the gains are often canceled out by rework, uncertainty, or extra quality checks.
Add unclear ownership, vague policies, and tool sprawl, and you get hesitation. People don't resist change; they resist confusion and risk without support.
Your playbook to close the gap
- Name an owner: Assign a single accountable leader (with HR and Legal support) for AI policy, tooling, and rollouts. Publish the org chart for decisions and escalations.
- Set explicit expectations: Write down what "good" looks like and where AI is required, optional, or prohibited. Include examples and sample prompts.
- Define the first 5 use cases: Pick low-risk, high-volume tasks (summaries, data cleanup, draft emails, meeting notes, SOP creation). Avoid expert-only tasks early on.
- Create "do not use AI for" rules: Sensitive data, regulated outputs, final client deliverables without review, or decisions affecting pay and performance.
- Train plus tool access: Give step-by-step tutorials, approved prompts, and sandbox environments. Remove paywalls and login hurdles.
- Measure outcomes, not logins: Track cycle time, error rates, rework, customer satisfaction, and time reallocated to higher-value work.
- Review before you rely: Require human review for facts, math, compliance, and tone until quality is proven with evidence.
- Incentivize learning: Recognize clean wins and well-documented failures. Reward teams that share playbooks the rest can reuse.
- Start with pilots: 6-8 week sprints, 1-2 teams, clear baselines, and a weekly demo. Roll out only after proof.
- Close the loop weekly: Share what changed, what's working, and what's next. Confusion dies when communication is boringly consistent.
Trust is earned, not mandated
People trust what they understand and can verify. Show real examples, side-by-sides with and without AI, and the review steps that catch errors. Normalize "trust but verify."
Also, show failure cases. When leaders share where AI went wrong, and how the mistake was caught, teams learn it's safe to test and tell the truth.
Guardrails that prevent messes
- Data: No sensitive data in prompts. Use approved tools with enterprise controls.
- Attribution: Mark AI-assisted content and keep edit histories.
- Quality checks: Fact-check, source-check, and policy-check before publishing.
- Escalation: Fast path for bias, privacy, or legal concerns. Track and fix root causes.
- Risk baseline: Align with an established model risk approach, such as the NIST AI Risk Management Framework.
How to talk about AI with your team
- Be honest about pressure: Share the competitive reason, not hype. Tie it to specific business metrics.
- Clarify what won't change: Standards for accuracy, ethics, and accountability stay the same.
- Make it safe to learn: Time-box practice sessions. Pair people. Celebrate small improvements.
- Model the behavior: Use AI in your own work and show your screen. People copy what you do.
Next steps
Pick one team, one process, and one metric. Pilot for six to eight weeks. Publish the playbook. Then scale.
For more practical guidance on rollout, policy, and performance, see AI for Management. If you're setting training and governance with your people team, explore AI for Human Resources.