WRC sets ground rules for AI use by litigants: accuracy and disclosure

The WRC warns that AI can help with drafting but often gets Irish employment law and citations wrong. Verify every source, keep humans accountable, and consider disclosing limited AI use.

WRC issues practical guidance on generative AI use by litigants

30 October 2025

The Workplace Relations Commission (WRC) has released guidance for parties who use tools like ChatGPT to prepare submissions or documents for WRC cases. The message is straightforward: AI can help with drafting, but it can also get facts and law wrong - especially in Irish employment and equality matters.

The guidance cites a recent case where a complainant's submission was "rife with citations that were not relevant, mis-quoted and in many instances, non-existent." That episode is now a cautionary tale for anyone relying on AI to produce authorities or summaries without checking them.

Why this matters to practitioners

Generative models produce fluent text, not guaranteed accuracy. If an AI tool fabricates case law or misstates a statutory provision, you own the error in the hearing room.

The WRC also flags that parties may wish to disclose their use of AI. Disclosure won't cure bad sources, but it sets expectations and shows you're not passing machine output off as independent legal analysis.

Key takeaways from the WRC guidance

  • Treat AI as a drafting assistant, not an authority. It can help with structure and initial wording; it cannot replace legal research.
  • Verify every citation, quote, and pinpoint reference. Check against official sources before filing.
  • Mind Irish context. Generic AI outputs often miss Irish-specific employment and equality law or confuse UK/EU sources.
  • Protect confidentiality and data. Don't paste sensitive facts into public AI tools without safeguards.
  • Consider disclosure. If AI was used to prepare text or summaries, be ready to say so and to explain your verification steps.
  • Maintain human accountability. You are responsible for what goes on the record, not the tool.

A low-risk workflow you can use today

  • Start with the issues: list the questions the adjudicator must decide.
  • Pull primary sources from official repositories before drafting.
  • Use AI to outline sections or simplify wording - but keep legal reasoning and selection of authorities human-led.
  • Cross-check every case and statute mentioned by the tool against official databases and your own notes.
  • Recreate key quotes from the source itself; never rely on AI-generated quotations.
  • Preserve a research log noting sources checked and any corrections made to AI output.
  • Where appropriate, disclose limited AI use and the steps taken to verify accuracy.

What the WRC says about its own use of AI

The Commission states that it is committed to ethical, responsible and meaningful use of AI, and that it will be transparent as it explores and integrates these tools. In keeping with that approach, parts of the guidance itself were prepared using AI to assist with layout and formatting, under human oversight.

Bottom line for hearings

Use AI to speed up drafting, not to shortcut reasoning or research. Verify every authority, stand over your submissions, and consider disclosure where AI assisted.

For the latest from the source, see the Workplace Relations Commission.

If your team is formalising AI use in legal work, a structured skills path helps. See curated options by role at Complete AI Training.

