AI anxiety is fueling knowledge hoarding - what HR can do now
A new Adaptavist report shows a sharp rise in employees "gatekeeping" knowledge as AI rolls into the workplace. The survey of 4,000 workers across four countries points to mounting job insecurity and what some describe as "psychological warfare" between colleagues trying to protect their edge.
This is more than a cultural snag. It's a knowledge risk that slows productivity, weakens succession plans, and makes AI adoption harder than it needs to be.
The data HR can't ignore
- 35% of employees are hoarding knowledge because they fear being replaced.
- 38% hesitate to train colleagues in areas they see as personal strengths.
- One in five workers report stress or anxiety tied to AI replacing their job, rising to 40% among Gen Z. Even 26% of workers aged 55+ share this worry.
Despite the fear, 60% believe their company would struggle to replace their skillset if they left. And three in five worry critical knowledge will walk out the door when colleagues exit.
The hidden cost of hoarding
Hoarding creates a vicious cycle: each person tries to protect their own job, and the organization becomes more fragile as a result. Silos grow. Onboarding slows. Project risk rises when one person becomes a single point of failure.
As Neal Riley, Innovation Lead at The Adaptavist Group, puts it: "Knowledge is a team sport that loses its value the moment it's hoarded or walks out the door."
Action plan for HR
- Reward sharing, not gatekeeping: Bake knowledge-sharing into goals, reviews, and bonuses. Recognize mentors publicly. Track and celebrate documented contributions.
- Make documentation the default: Define "what good looks like" for SOPs, runbooks, and decision logs. Use simple templates and set a minimum doc standard for projects.
- Pair people on critical work: Use shadowing, rotation, and co-ownership for key processes to reduce single points of failure.
- Create sharing rituals: Short show-and-tells, post-mortems, and weekly "what I learned" notes keep valuable context flowing.
- Protect time for it: Block calendar time for documenting and teaching. If it isn't scheduled, it won't happen.
Make AI adoption human-centered
- Be blunt and specific: Explain how AI will change tasks, which roles will shift, and what support people will receive.
- Co-design with teams: Pilot tools with volunteers, gather feedback, and iterate before scaling.
- Commit to reskilling: Offer training in AI literacy, prompting, data handling, and workflow design. Tie completion to growth paths and pay where possible.
- Use real change management: Clear milestones, champions in each function, and frequent check-ins. Don't outsource the human side.
- Set guardrails: Define acceptable use, privacy, and quality standards so people feel safe using AI without fear of missteps.
Support high-risk groups
- Gen Z: Address anxiety with honest comms, coaching, and clear skill pathways. Offer peer learning squads.
- 55+ talent: Set up knowledge-transfer sessions, recognize their expertise, and offer flexible paths that value continuity and mentorship.
Protect institutional knowledge now
- Map critical knowledge: Identify the processes, clients, and systems where loss would hurt most; assign owners and backups.
- Build a living knowledge base: Lightweight wiki plus Q&A. Keep it searchable. Kill stale pages on a schedule.
- Embed capture into offboarding: Exit interviews focused on "what to do, who to call, what breaks." Record quick walkthrough videos.
- Use AI to assist, not replace: Internal chat over vetted documents can speed answers, but the source of truth is still your docs. A minimal sketch follows this list.
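To make that last point concrete, here is a minimal sketch of answering questions over a vetted knowledge base. Everything in it is illustrative: the "knowledge_base" folder, the keyword-overlap scoring, and the prompt wording are assumptions, and a production setup would use a proper retrieval stack. The principle it shows is the one that matters: the assistant only sees, and must cite, your own documents.

```python
# Minimal sketch: answer questions from vetted internal docs only.
# Folder name, scoring, and prompt wording are all hypothetical.
from pathlib import Path

def load_docs(folder: str) -> dict[str, str]:
    """Read every .md file in the knowledge base into memory."""
    return {p.name: p.read_text() for p in Path(folder).glob("*.md")}

def top_matches(question: str, docs: dict[str, str], k: int = 3) -> list[str]:
    """Rank docs by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda name: len(words & set(docs[name].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, docs: dict[str, str], sources: list[str]) -> str:
    """Compose a prompt that restricts the model to the cited documents."""
    context = "\n\n".join(f"[{name}]\n{docs[name]}" for name in sources)
    return (
        "Answer using ONLY the documents below. Cite the file name for "
        "every claim. If the answer is not in them, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

docs = load_docs("knowledge_base")  # hypothetical folder of vetted docs
question = "How do we rotate the API keys?"
print(build_prompt(question, docs, top_matches(question, docs)))
```

Note that the guardrail lives in the prompt and in the restricted corpus: the assistant is only as current as the documents you keep alive, which is another reason to kill stale pages on a schedule.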
Metrics to watch
- Documentation coverage: % of critical processes with current SOPs/runbooks.
- Knowledge reuse: Views, searches, and references per document; reduction in repeated questions.
- Time to onboard: Days to productive output for new or rotated staff.
- Mentoring and training: Participation and completion rates for AI and process training.
- Psychological safety signals: eNPS, "safe to speak up" survey items, and anonymous Q&A volume.
- Single-point risk: % of key processes with only one expert (a sample calculation follows this list).
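Two of these metrics fall straight out of a simple register of critical processes, the same one the knowledge-mapping exercise above produces. The sketch below computes documentation coverage and single-point risk from a small, made-up inventory; the process names and fields are invented for illustration.

```python
# Illustrative metrics over a (made-up) register of critical processes.
processes = [
    {"name": "payroll run",       "has_current_sop": True,  "experts": ["Ana"]},
    {"name": "client onboarding", "has_current_sop": True,  "experts": ["Ben", "Chloe"]},
    {"name": "incident response", "has_current_sop": False, "experts": ["Dev"]},
    {"name": "vendor renewal",    "has_current_sop": False, "experts": ["Ana", "Ben"]},
]

total = len(processes)
documented = sum(p["has_current_sop"] for p in processes)
single_expert = [p["name"] for p in processes if len(p["experts"]) == 1]

print(f"Documentation coverage: {documented / total:.0%}")     # 50%
print(f"Single-point risk: {len(single_expert) / total:.0%}")  # 50%
print("Processes with one expert:", ", ".join(single_expert))
```

Refreshing these numbers from the same register on a fixed cadence keeps the metrics and the knowledge map in sync.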
The bottom line
AI isn't the enemy - secrecy is. Build incentives to share, give people a fair path to upskill, and roll out AI with structure and empathy. Do that, and you'll get a healthier, more resilient workplace where technology supports people, not replaces them.