Only 7% of Australian workers have advanced AI literacy despite widespread adoption, RMIT Online finds

84% of Australian workers use AI on the job, but only 7% have advanced skills. HR leaders are scrambling to build training programs as unsupervised adoption creates data and privacy risks.

Published on: May 04, 2026

Australian workplaces racing ahead of employee AI skills

Eighty-four percent of Australian workers now use AI on the job, but only 7% have advanced capability with the technology. This gap between adoption and competence is forcing HR leaders to rethink training strategies before employees expose sensitive data or make costly mistakes.

RMIT Online's workforce survey, presented at the National HR Leaders Summit in Sydney, reveals a skills crisis emerging from consumer-led adoption. Workers are using AI tools before their organizations establish governance or training programs.

The generational divide

Age shapes both confidence and caution. Younger workers show more ambition with AI but sometimes overestimate their abilities. Senior decision-makers, who control strategy and risk management, tend to be hesitant and lack advanced literacy.

This matters because executives set policy. They need to understand ethics, governance, and how AI affects their business.

Where the real risks hide

Many employees jump into AI without grasping the consequences. Data breaches, intellectual property leaks, and privacy violations happen when workers don't know what they're exposing.

Some organizations restrict AI tool access until staff complete training. Once literacy improves, they gradually expand permissions in controlled ways.

Personal device use complicates things further. Employees often use consumer AI apps on their phones or home computers, making traditional security controls ineffective.

Training must be tailored, not generic

A one-size-fits-all approach fails. HR leaders should design training around specific roles, experience levels, and responsibilities. A software engineer needs different preparation than a finance manager.

Start with leadership. Boards and senior executives should train first, establishing governance structures before rolling out company-wide programs. This top-down approach prevents chaotic, unsupervised adoption.

Then address both ends of the spectrum. Frequent AI users need training in risk management and data privacy. Reluctant adopters need encouragement in safe, structured environments.

Communication matters as much as capability

Many organizations unknowingly embed AI in existing tools. Employees don't realize they're using it, which means they're not thinking about risks. Better communication about where AI exists in your systems is essential.

Fear and uncertainty still dominate. Clear, honest information about what AI can and cannot do, and what your organization allows, reduces both reckless use and unnecessary hesitation.

For HR leaders managing this transition, consider exploring an AI for Human Resources course or a learning path for CHROs to build your own foundation before designing programs for your workforce.

