FOI reveals NDIA using machine learning to draft NDIS plans, with humans making final calls

NDIA is using machine learning to suggest draft NDIS budgets, with humans making final calls. A Copilot trial sped up internal work, with guardrails and training to keep it fair.

Published on: Nov 13, 2025

NDIA quietly uses machine learning for NDIS draft budgets - what government teams need to know

Documents released under FOI show the National Disability Insurance Agency has been using machine learning to prepare draft budget plans (Typical Support Packages) for first-time NDIS participants. The algorithm draws on key information in a participant's profile to recommend a starting budget. NDIA delegates make the final call.

Separately, 300 staff trialled Microsoft Copilot over six months from January 2024 for email, meetings, and other internal work. Generative AI was not used for participant plans.

What the tech does - and doesn't do

The NDIA defines machine learning as a subset of AI that learns from data to make predictions or recommendations. In practice, the model speeds up the initial analysis and proposes a budget figure; it does not approve plans. Decisions sit with human delegates.
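The released documents don't describe the model's internals, so any code can only be a rough illustration. The sketch below shows the general pattern the agency describes, a model proposing a starting figure from profile data while a human delegate records the final decision; every feature name and dollar amount is hypothetical.

```python
# Illustrative only: the FOI documents do not reveal the NDIA model's features,
# algorithm, or training data. Every column name and figure below is hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical historical first plans: profile attributes and the budget approved.
history = pd.DataFrame({
    "age_band":        [1, 2, 3, 1, 2, 3, 2, 1],   # coded age bracket
    "support_level":   [2, 3, 5, 1, 4, 5, 3, 2],   # assessed functional need, 1-5
    "lives_alone":     [0, 1, 1, 0, 1, 0, 1, 0],
    "approved_budget": [18000, 32000, 61000, 12000, 45000, 58000, 35000, 20000],
})

model = GradientBoostingRegressor(random_state=0)
model.fit(history.drop(columns="approved_budget"), history["approved_budget"])

# A new first-time participant: the model proposes a *starting* figure only.
new_participant = pd.DataFrame({"age_band": [2], "support_level": [4], "lives_alone": [1]})
draft_budget = float(model.predict(new_participant)[0])

# The recommendation is a draft input. The delegate, not the model, records the
# final budget and a written justification before any plan is approved.
decision = {
    "draft_budget": round(draft_budget, 2),
    "final_budget": None,          # set by the human delegate
    "delegate_justification": "",  # required whether the draft is accepted or overridden
}
print(decision)
```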

The agency's April 2024 policy states that AI tools must not access participant records unless the CIO expressly authorises the access and it complies with the NDIS Act. A spokesperson reiterated that AI is not used in systems that interact directly with participants or providers, nor for funding or eligibility decisions.

Productivity gains from the Copilot trial

Staff reported a 20% reduction in task completion times and a 90% satisfaction rating. Teams found it useful for interpreting NDIA policies and producing concise summaries and drafts. Hearing-impaired staff highlighted live transcription in meetings as a meaningful improvement.

Risks raised by staff and experts

Some staff drew parallels with the robodebt findings and voiced concerns about any drift toward automated decision-making or cuts to staffing. The end-of-trial report also flagged accidental data exposure as a risk, to be managed through access controls, audits, and staff training.

Researchers cautioned that machine learning often struggles with complexity and context. It can be hard to see which data points drive a recommendation, or how bias enters the model. That's where a human in the loop must apply judgment, and be trained to know when to override a suggestion.

A disability advocate and NDIS participant stressed that plans affect real, daily needs: getting to the bathroom, leaving the house, or whether a wheelchair fits without causing pain. The ask is simple but non-negotiable: planners need time, training, and permission to customise plans to the person, not the average.

Policy signals across the APS

The federal government released a whole-of-government plan to roll out generative AI tools, training, and guidance to public servants. The direction is clear: AI will be standard kit, with guardrails.

What this means for government teams

This is a preview of how AI will land in high-stakes public services. Expect models to handle initial synthesis and templated outputs while humans keep authority and accountability. The practical work now is governance, training, and measurement, so that productivity gains don't come at the cost of fairness or trust.

Practical steps you can implement now

  • Keep humans in charge. Treat model outputs as draft inputs, not decisions. Require written justification when accepting or overriding recommendations.
  • Design for edge cases. Build explicit checks for complexity, ambiguity, and "doesn't fit the box" situations. Route these to senior reviewers.
  • Limit data access. Apply least-privilege access, data loss prevention, and regular audits. Test with synthetic or de-identified data where possible.
  • Tackle automation bias. Train planners to spot when they're leaning on the model. Use prompts like "What would change this decision?"
  • Track quality, not just speed. Measure outcomes for participants, escalation rates, overrides, and variance across cohorts, not only time saved (see the sketch after this list).
  • Document model behaviour. Maintain clear summaries of inputs used, limitations, and known failure modes. Publish plain-language guidance for staff.
  • Enable accessibility. Keep features like live transcription and summarisation-these help staff and participants engage more fully.
  • Close the feedback loop. Make it easy for staff to flag bad recommendations and feed that back into model tuning and policy updates.
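The FOI material doesn't say how the agency measures any of this, so the following is only a sketch of the kind of monitoring the list points at: a decision log where delegates' accept or override choices are tallied into override rates, escalation rates, and draft-to-final variance per cohort. All field names and numbers are invented for illustration.

```python
# Hypothetical decision log; every field name and value is invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "cohort":       ["metro", "metro", "regional", "regional", "remote", "remote"],
    "draft_budget": [30000, 42000, 28000, 55000, 33000, 61000],
    "final_budget": [30000, 48000, 35000, 55000, 41000, 70000],
    "escalated":    [False, True, True, False, True, True],
})

log["overridden"] = log["draft_budget"] != log["final_budget"]
log["variance_pct"] = (log["final_budget"] - log["draft_budget"]) / log["draft_budget"] * 100

# Quality signals per cohort: how often delegates override or escalate, and how far
# final budgets move from the draft, rather than time saved alone.
report = log.groupby("cohort").agg(
    override_rate=("overridden", "mean"),
    escalation_rate=("escalated", "mean"),
    mean_variance_pct=("variance_pct", "mean"),
)
print(report)
```

Splitting the same figures by disability type, age, or location would surface the cross-cohort variance the list calls out, and feed the flagged cases back into model tuning and policy updates.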

Guardrails that support trust

  • Legal basis: Align data use and decision rights with the NDIS Act and agency policy.
  • Transparency: Tell participants where AI assists and where humans decide. Offer a clear path to contest and escalate.
  • Risk reviews: Run pre-deployment impact assessments focused on bias, accessibility, and service quality, not just cybersecurity.
  • Incident readiness: Define what counts as an AI incident (e.g., data exposure, systematic under-provision) and how you'll respond (see the sketch below).
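What counts as an AI incident will be agency-specific. Purely as a sketch of one possible trigger, the snippet below flags systematic under-provision when a cohort's draft budgets consistently undershoot the budgets delegates finally approve; the 10% threshold and the field names are assumptions, not NDIA policy.

```python
# Illustrative incident trigger; the 10% threshold and field names are assumptions,
# not NDIA policy. Real definitions would come from the agency's risk framework.
from dataclasses import dataclass

@dataclass
class CohortStats:
    cohort: str
    mean_draft: float
    mean_final: float

UNDER_PROVISION_THRESHOLD = 0.10  # drafts more than 10% below final decisions on average

def check_under_provision(stats: CohortStats) -> str | None:
    """Raise an incident flag when model drafts systematically undershoot
    the budgets delegates ultimately approve for a cohort."""
    shortfall = (stats.mean_final - stats.mean_draft) / stats.mean_final
    if shortfall > UNDER_PROVISION_THRESHOLD:
        return (f"AI incident: drafts for cohort '{stats.cohort}' average "
                f"{shortfall:.0%} below final approved budgets - review the model.")
    return None

print(check_under_provision(CohortStats("remote", mean_draft=33000, mean_final=41000)))
```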

Why this matters

AI can reduce paperwork and speed up service, but public value is earned through care and accountability. For programs like the NDIS, the benchmark is simple: faster is good; fair and person-centred is mandatory.
