Think your staff are using AI well? Think again
Date: 24 Jan 2026
Across New Zealand, AI is in the strategy slides, the policies, and the training calendars. Licences are live. Boxes are ticked. Yet the value isn't showing up in the work.
This isn't a tooling problem. It's a skills, work-design, and management problem. And it's fast becoming one of the biggest capability gaps in the workplace.
The proficiency bar moved - most people didn't
Last year, "AI proficiency" meant knowing the basics and writing prompts. Today, the bar is different: build AI into meaningful, value-adding tasks every week.
By that standard, the numbers are blunt. 97% of employees use AI poorly or not at all. Only 2.7% are real practitioners, and 0.08% are true experts. Most people are stuck as experimenters (69%) or have checked out as novices (28%).
Time savings paint the same picture. 24% save no time. 44% save less than four hours per week. Only 6% save 12+ hours - the kind of impact leaders expect to offset labour shortages and rising costs.
The use case desert
People aren't stuck because they can't prompt. They're stuck because they don't know what to use AI for in their own job.
26% have zero work use cases, and 60% have only beginner ideas. Overall, 85% of workers have beginner or no use cases. Of 4,500 use cases reviewed, just 15% look likely to generate ROI. A quarter never use AI for work, and 40% say they'd be fine never touching it again.
They can tidy an email. They can summarise a doc. They can't see how AI could reshape a recruitment funnel, a complaint workflow, a reporting cycle, or a contract backlog - where the real time and money live.
What people actually do with AI
- Google search replacement - 14.1%
- Draft generation - 9.6%
- Grammar and tone editing - 5.7%
- Basic data analysis - 3.8%
- Code generation - 3.3%
- Ideation and brainstorming - 3.2%
- Meeting notes/support - 2.7%
- Document summarisation - 2.0%
- Learning and skill development - 1.6%
- Task and process automation - 1.6%
Writing and research dominate, but in beginner form: copy tweaks and quick info grabs. High-impact areas - data analysis, code, customer service, operations - are underused.
Result: AI is acting like a convenience layer, not a productivity engine.
The executive optimism gap
Leaders think AI is going great. Frontline staff don't.
Ask executives and individual contributors the same questions and you get two different companies:
- Clear policy: executives 81% vs ICs 28%
- Accessible tools: 80% vs 39%
- Formal strategy: 71% vs 32%
- Widespread adoption: 48% vs 8%
This isn't a minor perception quirk. It creates complacency at the top and cynicism at the bottom - a sure way to stall transformation.
Individual contributors: least supported, most affected
ICs do the most repetitive, automatable work. They get the least support.
- Tool access: ICs 32% vs C-suite 80%
- Training: ICs 27% vs C-suite 81%
- Tool reimbursement: ICs 7% vs C-suite 63%
Only 7% of ICs say their manager expects daily AI use. Manager support for ICs has dropped 11% since May 2025. For NZ sectors heavy on frontline knowledge work - healthcare, education, government services, retail - this is upside down.
Where industries and functions stand
By industry (proficiency score out of 100):
- Technology - 42
- Finance - 36
- Consulting - 35
- Manufacturing - 34
- Media - 33
- Real estate - 32
- Food & beverage - 29
- Education - 29
- Healthcare - 28
- Retail - 27

By function:
- Engineering/Tech - 41
- Strategy - 39
- Sales/BD - 37
- HR - 37
- Marketing - 36
- Finance/Legal - 35
- Product - 34
- Operations - 32
- Customer service/Support - 27
Even in obvious areas, most people aren't using AI where it matters: 54% of engineers don't use it for code or formulas. 56% of marketers don't use it for first drafts. 87% of product managers don't use it for prototypes. Scale that across NZ teams and the missed upside is clear.
Training and investment: helpful, but not enough
Since March 2025, more companies have put policies and guidelines in place, and tool investment has ticked up. Those moves help - each correlates with higher proficiency:
- Company AI strategy → 1.6x higher proficiency
- Tool access → 1.5x
- Training → 1.5x
- Manager expectations → 2.6x
Even so, trained employees still average just 40/100 on proficiency. Why? Because most training covers tools and prompts, not workflows and outcomes.
What NZ HR, Execs, and Support leaders should do next
- 1) Change what you measure
- Track time saved per role, not just licences and logins (see the measurement sketch after this list).
- Rate use case maturity by function.
- Measure the share of core processes redesigned with AI.
- Tie to outcomes: cycle time, quality, error rates, CSAT.
- 2) Make use case development a core management duty
- Require each manager to document 3-5 AI use cases per role.
- Build function playbooks for recruiters, contact centres, analysts, payroll.
- Reward managers for redesigning workflows, not attending training.
- 3) Bridge the IC gap
- Standardise access to approved tools across levels.
- Introduce fair reimbursement policies; stop shifting costs to ICs.
- Set clear expectations for safe experimentation and weekly use.
- 4) Train around workflows, not tools
- Teach teams to map their workflows, find bottlenecks, and test AI on them.
- Build evaluation habits: check outputs, manage errors, refine steps.
- Move to ongoing learning: internal clinics, peer groups, role-based paths. See role-based course options at Complete AI Training and certification tracks like AI Automation.
- 5) Close the executive awareness gap
- Run skip-level forums focused on barriers and real use cases.
- Have leaders shadow staff using AI on actual work, not demos.
- Report underuse and misuse alongside wins - no vanity metrics.
- 6) Accept the bar will keep rising
- Make AI capability part of job profiles and progression.
- Refresh playbooks quarterly as tools and models improve.
- Fund continuous upskilling, not one-off events.
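
To make point 1 concrete, here is a minimal measurement sketch in Python/pandas. Everything in it - the survey columns, role names, and maturity labels - is a hypothetical example of the shape such a dashboard could take, not a prescribed schema.

```python
import pandas as pd

# Hypothetical rows from a monthly AI-usage pulse survey.
# Column names and maturity labels are illustrative only.
df = pd.DataFrame({
    "role":     ["recruiter", "recruiter", "analyst", "analyst", "support agent"],
    "function": ["HR", "HR", "Finance", "Finance", "Customer service"],
    "hours_saved_per_week": [0.5, 3.0, 6.0, 0.0, 1.5],
    "use_case_maturity": ["beginner", "workflow", "redesigned_process",
                          "none", "beginner"],
})

# 1) Time saved per role - the metric that matters, not licences or logins.
print(df.groupby("role")["hours_saved_per_week"].agg(["median", "mean"]))

# 2) Use case maturity by function: share of staff at workflow level or above.
MATURE = {"workflow", "redesigned_process"}  # assumed maturity scale
share = (df["use_case_maturity"].isin(MATURE)
         .groupby(df["function"]).mean()
         .rename("share_with_mature_use_cases"))
print(share.sort_values(ascending=False))
```

Run monthly and trend it; the point is to make "share of roles with mature, redesigned workflows" as visible to leaders as licence counts are today.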
Quick wins by function (next 30 days)
- Customer Support
- Build AI-assisted triage to route intent, priority, and sentiment to the right queue (a minimal sketch follows this list).
- Auto-generate draft replies from knowledge base articles; agent edits and ships.
- Run AI QA on a random sample of tickets for policy, tone, and accuracy checks.
- Human Resources
- Standardise JD templates and AI-check for bias and clarity (a simple starting-point check follows this list).
- AI-screen for minimum criteria and red flags; recruiters make the call.
- Generate structured interview guides from role competencies; enforce consistent scoring.
- Executives & Strategy
- Automate first-draft KPI packs with variance commentary pulled from source systems (see the sketch after this list).
- Generate board paper summaries and risk heatmaps; owners validate.
- Scenario test cost, demand, and staffing plans with transparent assumptions.
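
To ground the first Customer Support bullet, here is a minimal triage sketch in Python. It assumes the OpenAI Python SDK purely for illustration; the model name, label sets, and queue mapping are placeholders to swap for whatever your approved tooling and support taxonomy provide.

```python
import json
from openai import OpenAI  # any approved LLM SDK works; this one is illustrative

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Assumed label sets and queue mapping - adjust to your own support taxonomy.
PROMPT = (
    "Classify the support ticket. Reply as JSON with keys: "
    "intent (billing|technical|account|other), "
    "priority (low|medium|high), sentiment (negative|neutral|positive)."
)
QUEUES = {"billing": "finance-queue", "technical": "tier2-queue",
          "account": "accounts-queue", "other": "general-queue"}

def triage(ticket_text: str) -> dict:
    """Return intent/priority/sentiment plus a suggested queue for one ticket."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": ticket_text}],
        response_format={"type": "json_object"},  # ask for machine-readable output
    )
    result = json.loads(resp.choices[0].message.content)
    result["queue"] = QUEUES.get(result.get("intent"), "general-queue")
    return result  # a human still confirms routing - AI assists, it doesn't decide

print(triage("I was charged twice this month and nobody has replied in 3 days."))
```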
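For the HR bullet on JD checks, a deliberately simple sketch. The wordlist is an illustrative assumption, not a vetted bias standard - treat it as a quick rule-based pass that your HR team extends, paired with a fuller AI review against your own inclusion rubric.

```python
import re

# Illustrative wordlist only - not a vetted bias standard.
FLAGGED = {
    "rockstar": "inflated jargon - describe the actual work",
    "ninja": "inflated jargon - describe the actual work",
    "young and energetic": "age-coded language",
    "aggressive": "gender-coded language - consider 'proactive'",
    "native english speaker": "exclusionary - consider 'fluent in English'",
}

def check_jd(text: str) -> list[tuple[str, str]]:
    """Return (phrase, reason) pairs found in a job description."""
    lowered = text.lower()
    return [(phrase, why) for phrase, why in FLAGGED.items()
            if re.search(re.escape(phrase), lowered)]

jd = "We need a young and energetic sales rockstar who is a native English speaker."
for phrase, why in check_jd(jd):
    print(f"Flagged '{phrase}': {why}")
```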
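And for the first Executives & Strategy bullet, a minimal sketch of the variance calculation behind a first-draft KPI pack. The KPI names and figures are invented for illustration; in practice the inputs come from your finance systems, and an owner validates every line of commentary.

```python
import pandas as pd

# Hypothetical extract from a finance system; names and numbers are illustrative.
kpis = pd.DataFrame({
    "kpi": ["Revenue", "Opex", "CSAT"],
    "actual": [1.42e6, 0.96e6, 78.0],
    "budget": [1.50e6, 0.90e6, 80.0],
})

kpis["variance"] = kpis["actual"] - kpis["budget"]
kpis["variance_pct"] = 100 * kpis["variance"] / kpis["budget"]

# First-draft commentary a human owner then confirms and rewrites.
for row in kpis.itertuples():
    direction = "above" if row.variance > 0 else "below"
    print(f"{row.kpi}: {row.actual:,.0f} vs budget {row.budget:,.0f} "
          f"({row.variance_pct:+.1f}%, {direction} plan) - owner to confirm drivers.")
```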
What to stop doing
- Counting licences and logins as "success."
- Running "prompt 101" sessions without workflow redesign.
- Leaving reimbursement to manager discretion.
- Ignoring shadow tools because the policy looks good on paper.
The real question for 2026
Global usage stats can look impressive. Underneath, 85% of the workforce still lacks a value-driving use case, and 25% don't use AI for work.
The question for NZ organisations isn't "Do people have access?" or "Did they do the training?" It's this: have we helped our people move AI from a clever helper at the edges to a core part of how value is created - and are we measuring it honestly?
That's no longer an IT question. It sits with HR, with operations, and with leaders who own outcomes. The window to get ahead is closing.