Eager for agentic AI, anxious about jobs: adoption hinges on clear strategy and upskilling

Employees are eager for agentic AI, but unclear strategy and weak training slow real adoption. HR can close the optimism-anxiety gap with a plan, practice, and honest comms.

Published on: Nov 04, 2025

Agentic AI is exciting, but adoption will stall without the basics

Employees are ready for agentic AI. In EY's first Agentic AI Workplace Survey of 1,100 U.S. desk workers, 84% said they're eager to use it in their roles. The problem: many organizations haven't set a clear vision, built training that actually helps, or equipped middle managers to lead mixed human-AI teams.

That gap is slowing adoption, even as people expect improvements in productivity, efficiency, and daily work. Enthusiasm is high. Confidence is not.

The optimism-anxiety gap HR must close

Employees are excited, but they're uneasy about job security and their skills keeping pace. From the survey:

  • 56% worry about their job security working alongside AI agents
  • 51% fear agentic AI could make their job obsolete
  • 61% feel overwhelmed by the constant stream of agentic AI information
  • 54% feel they're falling behind peers at work

There's good news. In organizations with a clearly communicated agentic AI strategy, 92% of desk workers report productivity gains. Where leaders are specific and transparent, adoption follows.

What HR should do now

  • Publish a clear AI strategy, and repeat it often. Spell out where agentic AI will be used, what it won't touch, and the safeguards in place (ethics, data privacy, security, quality checks). Plain language beats slogans.
  • Define the human-AI split for work. Map tasks: what agents do, what people own, and where collaboration happens. Update job descriptions, workflows, and performance expectations. Give middle managers a playbook they can run tomorrow.
  • Stand up real training, not hype. Base-level literacy for everyone. Role-based, hands-on training for teams. Practice labs with safe, real tasks. Make it measurable and tied to outcomes.
  • Create a safe sandbox. Provide approved tools, test data, and guardrails so people can try use cases without risk to customers or compliance.
  • Align incentives and policies. Reward useful AI use (time saved, quality gains, fewer handoffs). Update policies on data use, attribution, bias checks, and human-in-the-loop approvals.
  • Keep communication two-way. Run office hours, publish FAQs, and equip managers with talking points. Address job-security questions early and directly.
  • Manage risk with a framework. Adopt a standard such as the NIST AI Risk Management Framework and bake it into tool approvals, model monitoring, and audits.

Build skills that stick

Employees want to learn, but many don't trust random tutorials, and they shouldn't. The survey found 59% see poor or missing training as a barrier. HR can bridge curiosity to capability with a simple plan:

  • 30 days: Foundations for all staff (terms, safe use, policy), plus 2-3 approved tools with guided exercises.
  • 60 days: Role-based pathways (sales, finance, HR, ops) with task libraries and measurable practice reps.
  • 90 days: Team pilots with specific KPIs (cycle time, quality, error rate), retrospectives, and scale-up criteria.

Support this with communities of practice, "show your work" sessions, and short refreshers as tools update. Avoid generic lectures; people learn by doing on their own tasks.

If you need structured options, see curated AI courses by job function or practical AI certification pathways that focus on real workflows and measurable outcomes.

Equip managers for hybrid teams

  • Clarity: Which tasks agents should start, which need human review, and what "good" looks like.
  • Quality gates: Checklists for data sources, bias tests, and final human sign-off.
  • Coaching: How to spot skill gaps, give feedback on prompts/outputs, and share wins across the team.
  • Escalations: Clear paths for model issues, data incidents, and customer-impact risks.

Measure what matters

  • Adoption: % of employees using approved tools weekly; active use cases per team
  • Productivity: Time saved per task; cycle time reductions; rework rates
  • Quality: Output accuracy; customer satisfaction; compliance findings
  • Confidence: Employee sentiment on job security and AI skills; help-desk trends
  • Risk: Number and severity of incidents; model monitoring alerts; policy exceptions
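The adoption and productivity metrics above can be computed from a simple usage log. A minimal sketch in Python, assuming a hypothetical log of tool-use records; every field name, tool name, and figure here is illustrative, not from the survey:

```python
from datetime import date, timedelta

# Hypothetical usage log: (employee_id, tool, used_on, minutes_saved).
# All values are made up for illustration.
usage_log = [
    ("e01", "draft-agent", date(2025, 11, 3), 25),
    ("e01", "draft-agent", date(2025, 11, 4), 15),
    ("e02", "triage-agent", date(2025, 11, 4), 40),
    ("e03", "draft-agent", date(2025, 10, 20), 10),  # outside this week
]
headcount = 10  # employees in scope for the pilot

def weekly_adoption(log, headcount, week_start):
    """Share of employees who used an approved tool during the week."""
    week_end = week_start + timedelta(days=7)
    active = {emp for emp, _, used_on, _ in log if week_start <= used_on < week_end}
    return len(active) / headcount

def minutes_saved(log, week_start):
    """Total self-reported minutes saved during the week."""
    week_end = week_start + timedelta(days=7)
    return sum(mins for _, _, used_on, mins in log if week_start <= used_on < week_end)

week = date(2025, 11, 3)
print(f"Weekly adoption: {weekly_adoption(usage_log, headcount, week):.0%}")  # 20%
print(f"Minutes saved:   {minutes_saved(usage_log, week)}")  # 80
```

The same log can feed the quality and risk metrics once incident and review records are joined in; the point is to agree on the record format before the pilot starts, so the 90-day KPIs are comparable across teams.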

What leaders need to say out loud

Employees can handle change if they understand the plan. Be explicit: where agentic AI will be used first, how roles evolve, what support is in place, and how success will be judged. Name the protections for people and customers.

As one EY leader put it, transparency energizes teams and improves performance. Another cautioned that weak, unclear training invites bad habits and risky shortcuts. Both point to the same mandate for HR: set the standard and make it easy to meet.

Bottom line

Optimism fuels ambition. Unease blocks action. HR can turn agentic AI enthusiasm into real performance by getting the basics right: a clear strategy, useful training, empowered managers, and steady communication.

If you're building your curriculum, browse new and updated AI courses or keep an eye on guidance from trusted sources like EY. Keep it practical. Keep it safe. Make it measurable.

