AI is sending more HR work to the Fair Work Commission. Here's how to cut the noise
AI was pitched as a time-saver for HR. In practice, it's showing up in Fair Work Commission (FWC) matters as a ghost-writer of claims, a black box behind termination decisions, and a magnet for regulatory scrutiny.
For teams juggling bargaining, restructures and hiring, the question is fair: are we saving time, or just creating a new stream of avoidable disputes and admin?
The new workload AI is creating
In one recent case, a Sydney worker tried to run an unfair dismissal claim almost two-and-a-half years after resigning from Electra Lift Co. He said he relied on ChatGPT, which told him to file. The FWC found there was no dismissal at all and called the application "hopeless", noting it wasted everyone's time.
Employment lawyers say they're seeing more of these AI-styled filings: long, legal-sounding, and wrong on basics. HR still has to brief counsel and respond properly, even when the claim should never have made it past the first screen.
Algorithms in HR decisions raise the stakes
On the other side, AI is quietly shaping the decisions employers must defend. Screening tools, HRIS platforms and "potential" or "at risk" scoring systems are feeding into hiring, performance and redundancy calls. A federal inquiry has pushed for treating AI used in employment decisions as "high-risk", with stronger transparency and consultation obligations.
That means HR leaders could be asked to explain not just why a role went, but how the algorithm scored candidates, what data trained it, and whether anyone checked the outputs. One leading practitioner called it a "very dangerous game" to lean on AI for termination decisions. Bias or errors can quickly turn into unfair dismissal or discrimination claims.
Recruitment bias: the test case waiting to happen
Australian researchers are already flagging risk. A University of Melbourne study led by Dr Natalie Sheard found AI video interview tools struggled with diverse accents, showing error rates up to 22% for some non-native English speakers. The study warned candidates with accents or speech-affecting disabilities could be disadvantaged, with little visibility into how rankings were made.
Roughly 30% of Australian employers are estimated to use AI recruitment tools, and that number is growing. No AI-related discrimination case has run to judgment yet, but public sector automation missteps show how fast experiments become legal problems. When a case lands, HR will be assembling data trails, audit logs and policies to prove decisions were lawful.
FWC and regulators push back
The FWC has published an AI transparency statement confirming only human members make decisions and cautioning parties against using generative AI for legal advice. Expect more detail from government as "high-risk" rules mature.
Right now, the impact is practical: more access-to-information requests about how tools work, tougher union questions during consultation, and higher expectations that employers can explain and audit AI systems. If you're defending a decision, assume you'll need to show your work.
For official guidance and processes, see the Fair Work Commission.
The pincer movement squeezing HR
AI helps HR spot pay equity gaps, predict flight risk and automate hiring steps. Used well, it adds real value. But generative AI is also making it easier for weak claims to hit your inbox, while algorithmic inputs give employees a new angle of attack.
The result is pressure from both sides: more and longer AI-written applications, and deeper scrutiny of any decision touched by an algorithm. Are we improving fairness and efficiency, or just creating more work?
What to do now: practical moves that reduce risk and waste
- Treat AI in HR as "high-risk" by default. Assume scrutiny. Document the purpose, data sources and oversight before rollout.
- Demand explainability from vendors. No "black box" for hiring, performance or termination decisions. Keep humans firmly in charge.
- Keep an internal register of AI-enabled tools. Log owners, use-cases, training data, version history and known limitations; a sketch of one register entry follows this list.
- Run bias and accuracy audits on a schedule. Test for protected attributes, accents, disability, age and gender impacts, and record the results and fixes; a rough example of what an audit could compute also follows this list.
- Be transparent with staff and candidates. Say where AI is used in recruitment, performance and restructures, and where it is not.
- Train managers on the limits of generative AI, both for internal use and for responding to AI-written claims. Flag common errors and jurisdiction traps.
- Insist on human-justifiable decisions. Any termination or major call must stand on human reasoning, with AI as one data point, not the verdict.
- Tighten procurement and contracts. Require bias testing rights, audit access, data retention limits and clear accountability from vendors.
- Prepare for discovery. Keep audit logs, data dictionaries, scoring rubrics and change notes. You may need them in the witness box.
- Consult early on restructures involving AI inputs. Bring unions and employee reps into the conversation before decisions are locked in.
- Triage claims fast. Spot AI-generated submissions, focus responses on jurisdiction and facts, and use early conciliation to contain cost.
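To make the register item concrete, here is a minimal sketch in Python of what one entry could capture. The field names, tool name and dates are illustrative assumptions, not a prescribed schema; adapt them to whatever your governance framework and vendor contracts actually require.

```python
# Minimal sketch of an internal register of AI-enabled HR tools.
# All field names and example values are illustrative, not a standard.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AIToolRecord:
    name: str                        # the tool or vendor product
    owner: str                       # accountable person or team
    use_cases: List[str]             # decisions the tool feeds into
    training_data: str               # what is known about the training data
    version_history: List[str]       # versions deployed, with dates
    known_limitations: List[str]     # documented failure modes
    last_bias_audit: Optional[str] = None  # date of the most recent audit, if any

# Hypothetical entry for a video-interview screening tool.
register: List[AIToolRecord] = [
    AIToolRecord(
        name="video-interview-screener",
        owner="Talent Acquisition",
        use_cases=["graduate recruitment shortlisting"],
        training_data="vendor-supplied; details requested under contract",
        version_history=["v3.2 (2024-08)", "v3.4 (2025-02)"],
        known_limitations=["higher transcription errors for non-native English speakers"],
        last_bias_audit="2025-03-01",
    ),
]

for record in register:
    print(record.name, "- owner:", record.owner, "- last audit:", record.last_bias_audit)
```

Even a simple structure like this gives you something to hand over when a union, regulator or tribunal asks what the tool does and who is accountable for it.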
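For the audit item, the sketch below shows, under stated assumptions, the kind of check a scheduled bias and accuracy audit could run on a screening tool's export: selection rates and error rates by group, with a simple flag for human review. The column names, groups and the 80% comparison are assumptions made for the example, not a legal threshold or any vendor's actual schema.

```python
# Rough sketch of a bias/accuracy audit over a screening tool's export.
# Column names and values are hypothetical; replace them with your real data.
import pandas as pd

# Illustrative export: one row per candidate processed by the tool.
df = pd.DataFrame({
    "group":            ["native_en", "native_en", "non_native_en", "non_native_en", "non_native_en"],
    "ai_recommended":   [1, 0, 0, 0, 1],                  # did the tool advance the candidate?
    "human_decision":   [1, 0, 1, 0, 1],                  # the eventual human call, as a reference point
    "transcript_error": [0.02, 0.04, 0.18, 0.22, 0.09],   # speech-to-text word error rate
})

summary = df.groupby("group").agg(
    selection_rate=("ai_recommended", "mean"),
    avg_transcript_error=("transcript_error", "mean"),
)
# How often the tool's recommendation differed from the human decision, per group.
summary["disagreement_rate"] = (
    (df["ai_recommended"] != df["human_decision"]).groupby(df["group"]).mean()
)
print(summary)

# A common heuristic (not an Australian legal test): flag groups whose selection
# rate falls below 80% of the highest group's rate for closer human review.
best_rate = summary["selection_rate"].max()
flagged = summary[summary["selection_rate"] < 0.8 * best_rate]
print("Groups to review:\n", flagged)
```

The point is not the specific threshold; it is that the audit runs on a schedule, the outputs are recorded, and someone is accountable for acting on what it finds.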
Bottom line
There's no shortcut around sound process or real legal advice. AI won't shield either side in the FWC, and it can make bad calls look polished.
If you govern AI like it's high-stakes, you'll keep the benefits and cut the noise. If you don't, expect more admin, more disputes, and a costly education in public.
Want structured upskilling for your team? Explore practical courses on AI use, policy and risk at Complete AI Training.