Australia's AI Divide Is Already Here - New National Survey Shows Who's Missing Out

Almost half of Australians have tried generative AI, but use skews by age, education, and job type. Focused skills programs, safer defaults, and fair access can narrow the divide.

Published on: Nov 05, 2025

Australia's AI divide: what the data says and what to do next

Generative AI moved from novelty to normal in less than two years. It's now built into search, office suites, and creative tools, blurring the line between what's real and what's synthetic.

A new nationally representative survey shows almost half of Australians have used these tools, but adoption is uneven. That gap mirrors long-standing differences in access, affordability, and digital skills - and it's starting to harden into an AI divide.

The numbers at a glance

  • 45.6% of Australians have recently used a generative AI tool.
  • Among users: 82.6% use text generation, 41.5% image generation, 19.9% code generation.
  • Paying users: 13.6% subscribe to premium tools; strongest among 18-34-year-olds (17.5%) and 45-54-year-olds (13.3%).
  • Age gap: 69.1% of 18-34s use AI vs 15.5% of 65-74s.
  • Education gap: bachelor's degree holders (62.2%) vs those who didn't finish high school (20.6%); those leaving at Year 10 are among the lowest users (4.2%).
  • Occupation gap: professionals (67.9%) and managers (52.2%) vs machinery operators (26.7%) and labourers (31.8%).
  • Language: people speaking a language other than English at home report higher use (58.1%) than English-only speakers (40.5%).
  • Social use: only 8.6% use chatbots for connection, but that rises in remote areas (19%) vs metro (7.7%).

Why this matters

Uneven adoption maps closely onto existing digital inclusion gaps. Left unaddressed, it can widen differences in learning, job mobility, and access to services.

Lower-skilled users face higher exposure to deepfakes, fraud, and misleading content. Overreliance without checks can lead to poor decisions - especially in health, finance, and public services.

What governments can do now

  • Fund AI literacy tied to digital inclusion programs. Prioritise older Australians, low-income households, remote communities, and people who left school early.
  • Support local delivery: public libraries, TAFEs, community centres, and First Nations organisations as trusted training hubs.
  • Issue procurement guidance so public agencies deploy AI with safety-by-default, transparency, and multilingual access.
  • Run targeted anti-scam campaigns using real AI examples (voice, video, fake invoices) and simple verification steps.
  • Track the divide: report usage and benefits by age, region, education, and occupation alongside the Australian Digital Inclusion Index.

For business and IT leaders

  • Map tasks to use cases (drafting, data clean-up, summarisation, translation). Train staff on prompts, verification, and privacy basics.
  • Provide approved tools and centralised guardrails: data retention off by default, safe model choices, logging, and access tiers.
  • Invest in people who are least likely to opt in: on-shift training for operators, front-line staff, and casuals.
  • Measure outcomes and equity: time saved, error rates, customer impact, and who benefits inside the organisation.
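The last bullet above, measuring who benefits inside the organisation, can be sketched as a simple group-level adoption rate. This is a minimal illustration only; the field names ("group", "used_ai") and sample records are hypothetical, not drawn from the survey:

```python
from collections import defaultdict

def adoption_by_group(rows):
    """Return the share of AI users within each group, as a fraction 0-1."""
    totals = defaultdict(int)  # respondents per group
    users = defaultdict(int)   # AI users per group
    for row in rows:
        totals[row["group"]] += 1
        if row["used_ai"]:
            users[row["group"]] += 1
    return {g: users[g] / totals[g] for g in totals}

# Hypothetical internal survey: role group + whether the person used an
# approved AI tool this month.
sample = [
    {"group": "professionals", "used_ai": True},
    {"group": "professionals", "used_ai": True},
    {"group": "professionals", "used_ai": False},
    {"group": "operators", "used_ai": True},
    {"group": "operators", "used_ai": False},
    {"group": "operators", "used_ai": False},
    {"group": "operators", "used_ai": False},
]
rates = adoption_by_group(sample)
# The gap between groups (here professionals vs operators) is the
# equity signal to track over time, alongside time saved and error rates.
```

Reporting this per role, site, or shift each quarter makes the internal divide visible in the same way the national survey does for the population at large.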

For educators and training providers

  • Teach AI use alongside critical checks: cite sources, cross-verify with known references, and flag hallucinations.
  • Assess the process, not just outputs: show drafts, prompts, and reasoning steps.
  • Offer short, practical modules for Year 10 leavers, mature-age learners, and language-diverse cohorts.

For developers and product teams

  • Design for constraints: low bandwidth, mobile-first, clear pricing, and offline-friendly workflows where possible.
  • Make safety visible: explain model limits, label synthetic media, and provide one-click fact checks and source trails.
  • Support languages used at home in Australia and reduce English-only friction in onboarding and help.

For individuals and teams

  • Start with routine tasks: summarise long emails, draft minutes, translate instructions, or sketch out simple code snippets.
  • Double-check anything high-stakes. Verify names, numbers, and citations with a second source.
  • Learn the scam patterns: urgency, payment changes, voice clones, and "private" links. Confirm through a separate channel.
  • Use free tiers to test value. Only pay once the tool saves time or improves quality consistently.

Key signals to track in 2025

  • Adoption by age, education, and occupation, especially among non-office workers.
  • Paid vs free usage and whether access clusters around higher-income groups.
  • Growth in social/companion chatbot use in remote areas and among isolated groups.
  • Scam reports involving synthetic media, and changes in digital confidence among lower-skilled users.
  • Uptake among people who speak a language other than English at home and quality of multilingual features.

Learn more and take next steps

For international context on consumer use, see the UK media regulator's latest findings on generative AI adoption. For policy principles, review UN guidance on AI and inclusion.


Bottom line

Australia's AI divide is visible in the data and tracks existing digital gaps. With focused skills programs, safer-by-default tools, and fair access, the benefits can reach people in every postcode - not just the most connected.

