Big Law 2025: The 10 AI and Legal Tech Moments That Mattered
AI moved from hype to hard lessons this year. Courts drew lines. Firms made big bets. And several lawyers learned the limits of "move fast."
If you work in legal, this is the signal cutting through the noise - what happened, why it matters, and what to do next.
The top 10 stories
- 1. First-ever Aussie lawyer sanctioned for AI use
Historic, but not in a good way. A Victorian practitioner became the first in Australia to face sanctions for relying on AI in court without proper checks. The takeaway: disclosure and accuracy are non-negotiable.
- 2. Melbourne firm busted using AI-fabricated citations
Ghost citations made it into filings. The court noticed. Basic guardrails like source verification and human review would have avoided the mess.
- 3. Solicitor, 2 counsel referred to regulator for AI use
More referrals, more scrutiny. Using AI-prepared material without verifying facts or citations is now a fast path to a regulator's desk.
- 4. AI discrimination lawsuit against Workday extends to collective action
Claims about bias in AI-driven hiring tools will be tested at scale. Legal teams advising employers and vendors should expect discovery on data, model behavior, and mitigation practices.
- 5. Clayton Utz the latest BigLaw firm to onboard Harvey
Enterprise AI moved mainstream in BigLaw. Firm-wide adoption means new workflows, internal policies, and clear rules for confidentiality and output review. Harvey is now part of the toolset in Australia's top tier.
- 6. Judge rips lawyer for submitting AI-generated material in murder trial
Criminal matters demand precision. Error-filled AI outputs in a serious case drew a sharp rebuke. Expect judges to ask, "Who checked this?"
- 7. Reprimand for principal lawyer for using AI in estate litigation
Court guidance exists for a reason. Breaching it, even in civil matters, triggers consequences. Compliance now includes tech policy.
- 8. AI ban in NSW courtrooms bends to pressure
A blanket ban met pushback and was softened with clearer allowances. Courts are moving toward permission with conditions: disclosure, verification, and responsibility stay with the lawyer.
- 9. ChatGPT blunder sees lawyer referred to regulator
Non-existent quotes and citations are still appearing in filings. If you use public models, you need a verification workflow and a rule: no unchecked AI outputs in court documents. Ever.
- 10. A Clifford Chance partner on how AI has transformed the legal workforce
Automation is changing how matters are staffed and delivered. The firms winning on AI are training people, rethinking process, and setting clear quality bars.
What this means for your practice
Courts will tolerate the tool, not the shortcuts. If AI touches your work, you own the outcome.
Clients are asking tougher questions about confidentiality, bias, and provenance. Your answers need to be specific - model choice, data handling, review steps, and audit trails.
Action checklist for 2026
- Publish an AI policy that covers disclosure, approved tools, and prohibited uses.
- Require human review and source verification for all AI-assisted content. No exceptions for court filings.
- Log prompts and outputs for high-stakes work. Keep a clean audit trail.
- Use models with enterprise safeguards. Disable training on your data and set retention limits.
- Run bias and accuracy spot-checks on recurring AI tasks (summaries, research, drafting).
- Train everyone - partners first. Focus on citation hygiene, confidentiality, and court-specific rules.
- Do vendor due diligence: data flows, security, indemnities, and model update policies.
- Assign an AI review panel for complex matters and sensitive jurisdictions.
Bottom line
AI is now part of legal work. The firms that win pair speed with discipline: clear policies, tight review loops, and tools fit for purpose.
If your team needs structured upskilling and practical guardrails, explore curated options by role here: AI courses by job.