Automated decisions at work: what Australian HR needs to know in 2026
AI and automated decision-making (ADM) are moving into hiring, performance, and day-to-day operations. Employers see efficiency and cost wins; unions see risks in discrimination, surveillance, and job loss. HR sits in the middle, responsible for both outcomes and compliance.
The Australian Government is running a regulatory gap analysis alongside consultation on a National AI Plan. While outcomes aren't due until late in the year, key figures have voiced support for a stronger worker voice in AI adoption. Expect this to be a live industrial relations issue, not a side topic.
The legal baseline you can rely on today
- Unfair dismissal still applies: If an algorithm recommends termination, the employer remains on the hook. The Fair Work Commission will still look for a valid reason and a fair process.
- Discrimination and general protections: Discrimination doesn't require intent. Even where an algorithm screens candidates, the general protections under the Fair Work Act can capture discriminatory outcomes, and the reverse onus of proof makes a purely automated hiring process risky.
- WHS and surveillance duties still bite: A patchwork of state and territory surveillance laws and WHS obligations set limits on monitoring and data use. See guidance via Safe Work Australia.
- Consultation duties: Most awards and enterprise agreements require consultation when major change affects employees. Introducing AI or ADM can trigger that duty, especially if it touches roles, hours, or job security.
Recent commentary suggests some employers skip or narrow consultation when introducing AI. Criticism of that practice has gained political support: Senator Tim Ayres has backed a stronger union voice on workplace AI, and Assistant Minister Andrew Leigh has echoed the point that workers should be partners in deployment, not bystanders.
New and emerging rules
Targeted regulation is already arriving. The statutory Digital Labour Platform Deactivation Code sets a precedent for how automated systems are controlled and reviewed in gig work.
Proposed changes to the Workers Compensation Act 1987 (NSW) would connect WHS risk with surveillance and "discriminatory" automated decisions. They would also grant union officials entry rights to inspect "digital work systems," push for human oversight of key decisions, and curb unreasonable performance metrics and monitoring.
Federally, the ACTU is pushing "AI Implementation Agreements" that require consultation before deploying new AI, plus job security commitments, training, and transparency. They've also called for a dedicated AI law and a well-resourced regulator. The Government appears to prefer targeted reforms over a single AI Act, but the direction is clear: more voice for workers and unions in AI rollouts.
Action plan for HR now
- Keep humans in the loop: Require human review for hiring, dismissal, promotion, and performance decisions influenced by AI or ADM. Document that review and the reasons for the final call.
- Run AI risk assessments: Check bias, privacy, WHS, and discrimination risks before rollout. Validate datasets and test for adverse impact across protected attributes.
- Consult early and properly: Map which awards/agreements apply. If AI changes roles, workload, or rosters, trigger consultation and keep minutes, timelines, and materials on file.
- Set clear policies: Publish plain-English policies for AI use, data retention, and workplace surveillance. Explain what's monitored, why, and how employees can raise issues.
- Upskill your people: Provide training on safe, effective AI use and new workflows. Support redeployment and retraining where roles shift.
- Track the rule changes: Monitor Federal and State developments, FWC decisions, and WHS guidance. Assign an owner and review quarterly.
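The "test for adverse impact" step above can be sketched as a simple selection-rate comparison. The sketch below uses the four-fifths rule, a US EEOC heuristic rather than an Australian legal test, purely as an illustrative first-pass screen; the group names and numbers are made up:

```python
# Minimal adverse-impact screen using the four-fifths rule heuristic.
# Group labels and applicant counts below are illustrative, not real data.

def impact_ratios(selection_rates):
    """Return each group's selection rate divided by the highest group's rate."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Hypothetical screening outcomes: (applicants, candidates passed by the tool).
outcomes = {
    "group_a": (200, 90),   # 45% selected
    "group_b": (150, 45),   # 30% selected
}

rates = {g: selected / applicants for g, (applicants, selected) in outcomes.items()}
ratios = impact_ratios(rates)

for group, ratio in ratios.items():
    # A ratio below 0.8 is the conventional trigger for closer review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.0%}, impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 doesn't prove unlawful discrimination, and a ratio above it doesn't rule it out; treat this as a prompt for deeper statistical and legal review, not a compliance pass.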
What to watch through 2026
- Findings from the Government's AI gap analysis and the National AI Plan consultation.
- Progress of the NSW workers compensation amendments and any copycat moves in other jurisdictions.
- FWC unfair dismissal and discrimination matters where ADM played a part; early cases will set expectations.
- Award and bargaining movements that hard-code consultation, training, or transparency on AI.
HR doesn't need perfect answers to make progress. What it does need are auditable processes, transparent communication, and a clear line of sight from policy to practice. Get those right and you'll reduce legal exposure while keeping trust intact.
Looking for structured learning paths your team can use to build AI capability with less risk? Explore role-based options such as AI Learning Path for Training & Development Managers and the AI Learning Path for Safety Engineers.