AI in Justice: Helpful in the Right Hands, Hazardous in the Wrong

Malaysia's Chief Justice warns AI can boost legal work but bites when misused. Use it for research and admin, keep humans deciding, and protect client data and fairness.

Published on: Nov 02, 2025

AI Is Useful - And Dangerous If Misused, Says Malaysia's Chief Justice

Kuala Lumpur, Nov 1, 2025 - At the Malaysia Legal Forum 2025, Chief Justice Datuk Seri Wan Ahmad Farid Wan Salleh drew a clear line: "AI is like a chainsaw - useful in the right hands but dangerous in the wrong ones." His point landed because AI is already showing up in legal research, contract analysis, predictive tools, and even aspects of judicial decision-making.

The message to the profession was direct: AI can help, but it cannot replace the core of legal work. Law is a human project. It relies on judgment, empathy, context, and lived experience - the parts no model can assume responsibility for.

Why this matters to legal teams

  • AI augments; it doesn't decide. Outcomes must remain anchored in human judgment and accountability.
  • Confidentiality and privilege are at risk with public or poorly governed tools. Treat prompts and outputs like work product.
  • Accuracy is not guaranteed. Hallucinations, outdated sources, and mislabeled citations can slip into filings.
  • Fairness concerns persist. Bias in training data can bleed into recommendations and summaries.
  • Transparency will be expected. Courts and clients may require disclosure of where, when, and how AI was used.

Practical guardrails you can implement now

  • Adopt a firm-wide AI policy: approved tools, use cases, data handling rules, and review standards.
  • Use secure environments. Keep client data off public models unless protected by enterprise-grade contracts and controls.
  • Require human-in-the-loop review for any research, drafting, or analysis that informs advice or submissions.
  • Verify every citation. Demand source links, check authorities, and keep an audit trail.
  • Limit inputs. Strip identifiers, use hypotheticals, and redact sensitive facts wherever possible.
  • Vendor due diligence: model provenance, data retention, encryption, audit logs, and bias testing.
  • Training and drills: prompt hygiene, red-teaming, and scenario tests tied to your matters and playbooks.
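The "limit inputs" guardrail above can be sketched as a small pre-processing step that runs before any prompt leaves the firm. This is a minimal illustration, not a vetted redaction solution: the regex patterns and placeholder names are assumptions, and real matters still need an approved tool plus human review.

```python
import re

# Illustrative patterns only - an assumed, incomplete list for demonstration.
# Order matters: the more specific NRIC pattern runs before the generic
# phone pattern so it isn't consumed by the broader match.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[NRIC]": re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),  # Malaysian IC format
    "[PHONE]": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholders before prompting a model."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Client Aisha (aisha@example.com, 880101-14-5678) disputes the invoice."
print(redact(prompt))
# Client Aisha ([EMAIL], [NRIC]) disputes the invoice.
```

A step like this pairs naturally with the audit-trail guardrail: logging what was redacted (but not the original values) gives reviewers a record without re-exposing client data.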

Where AI helps without crossing the line

  • First-pass research and case summaries with mandatory human verification.
  • Clause extraction and issue spotting in contracts, followed by lawyer review.
  • Chronologies, document review triage, and transcript prep to free time for higher-value work.
  • Workflows that generate options, not answers - you decide what stands.

What the Chief Justice emphasized

Law is not just rules and precedents processed by an algorithm. It demands the kind of judgment that accounts for context, human stories, and the public's trust that justice is done - and seen to be done.

The call is simple: embrace useful tools, keep core values intact, and pursue efficiency without sacrificing fairness, accessibility, and integrity.

About MLF 2025

The Malaysia Legal Forum 2025, now in its third edition, convened judges, legal practitioners, and corporate counsel to address current legal challenges and technological change. The event was organised by Thomson Reuters, supported by the Legal and Business Academy of Malaysia, with the AIAC as strategic partner.

Move forward with intent

Adopt AI where it removes grunt work and sharpens your thinking. Keep humans in charge where rights, liberty, and reputation are on the line.

If you're setting policies or training your team, a structured skills path can help. See curated options by role here: AI courses by job.

