Judges across Asia-Pacific set guardrails for ethical AI in courts

UNESCO and UNDP gathered Asia-Pacific judges in Bangkok to weigh how AI can help courts without eroding fairness. Clear guardrails, testing, and transparency topped the takeaways.

Strengthening Judicial Capacity on the Ethical and Responsible Use of Artificial Intelligence

December 17, 2025 - Bangkok, Thailand

UNESCO and UNDP convened 27 judges from 13 Asia-Pacific jurisdictions for a three-day training on AI and the rule of law, supported by the Thailand Institute of Justice (TIJ). The focus: where AI can responsibly support court operations, and where safeguards are essential to preserve fairness, independence, and public trust.

Why this matters for the bench

Opening discussions underscored a simple tension: AI can streamline court work, but justice must remain human-centered. Speakers emphasized that administrative gains mean little if bias, opaque logic, or undue external influence creep into judicial processes.

Participants examined concrete uses of AI in court administration, legal research, and case management, alongside concerns over bias, transparency, and judicial independence. Without oversight, AI can amplify existing inequalities and damage confidence in outcomes.

Risks are now visible

"Only three years ago, assessments of the application of AI tools in dispute resolution proceedings were mostly speculative. Now, with real-world examples of AI use, the associated risks have become visible, making this training session timely and useful," said Takashi Takashima, a counsellor at Japan's Ministry of Justice.

Through live use cases, judges weighed gains in access to justice and consistency against risks to due process and equity. Leah Verghese, Research Manager at DAKSH, stressed that the region's diversity "really needs to be considered when we talk about AI," adding that "AI should not make existing disparities worse."

Peerapat Chokesuwattanaskul, Assistant Professor of Law at Chulalongkorn University, warned that even accurate correlations can entrench structural discrimination: "Why do we call it bias if a pattern reflects reality? Because even accurate correlations can reinforce structural discrimination." He added: "AI may weigh factors we don't see, such as tone of voice, body movement, and a blink of an eye, and when we don't know how decisions are made, layers of bias multiply."

Evidence integrity is also under strain. With generative AI and deepfakes, records once taken for granted, from video to financial statements, will need verification. That raises administrative and technical burdens for already stretched courts, noted Jon Truby, a visiting AI and technology law researcher at the National University of Singapore's Centre for International Law.

Where AI can help, under guardrails

Judges acknowledged the reality: case backlogs, voluminous records, and limited resources. Used carefully, AI can guide court users through forms and procedures, assist with research, and surface relevant materials in large files, freeing time for analysis and judgment.

The shared view: these benefits are viable only with clear governance, strong testing, and transparent use. Otherwise, efficiency gains risk eroding due process.

Practical safeguards courts can adopt now

  • Human authority: Keep judicial decision-making with judges. Document where, how, and how much AI contributed to any analysis.
  • Governance: Establish an AI oversight committee. Maintain an inventory of tools, classify risks, and approve use cases before deployment.
  • Procurement terms: Require access to documentation, audit logs, error rates, and bias testing. Bar vendors from training on court data. Define data residency and security expectations.
  • Testing and monitoring: Run algorithmic impact assessments. Test with representative datasets. Track disparities across protected groups and audit periodically (see the disparity-audit sketch after this list).
  • Due process: Disclose when AI was used. Give parties a right to challenge AI-assisted analyses. Preserve explanations and logs for the record.
  • Evidence integrity: Implement authentication workflows for audio, video, images, and documents. Use verified chains of custody (see the hash-chain sketch after this list). Build deepfake detection into standard practice.
  • Data protection: Minimize sensitive data in prompts and inputs (see the redaction sketch after this list). Set retention limits and confidentiality controls suitable for judicial data.
  • Access to justice: If deploying assistants for self-represented litigants, provide plain-language disclaimers, language support, and guardrails that avoid individualized legal advice.
  • Capacity building: Train judges, clerks, and IT teams. Share findings across jurisdictions and update protocols as case law and technologies evolve.
  • Public transparency: Publish plain-language summaries of AI use, oversight mechanisms, and audit outcomes to maintain confidence.
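
To make the testing and monitoring point concrete, here is a minimal disparity-audit sketch in Python. It assumes a court keeps logs of AI-assisted recommendations tagged with the protected group of the affected party; the record fields and the 0.8-1.25 flagging band mentioned in the comments are illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

# Hypothetical audit records: one row per AI-assisted recommendation,
# with the protected group of the affected party and whether the
# tool's output was adverse (flagged, denied, escalated, etc.).
records = [
    {"group": "A", "adverse": True},
    {"group": "A", "adverse": False},
    {"group": "B", "adverse": True},
    {"group": "B", "adverse": True},
]

def adverse_rates(rows):
    """Adverse-outcome rate per protected group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [adverse, total]
    for row in rows:
        counts[row["group"]][1] += 1
        counts[row["group"]][0] += row["adverse"]
    return {g: adverse / total for g, (adverse, total) in counts.items()}

def disparity_ratio(rates):
    """Ratio of highest to lowest group rate; 1.0 is parity.
    Audits often flag ratios outside roughly 0.8-1.25."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi / lo if lo > 0 else float("inf")

rates = adverse_rates(records)
print(rates, disparity_ratio(rates))  # {'A': 0.5, 'B': 1.0} 2.0
```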
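For chains of custody, one common building block is a hash-chained log: each entry commits to both the evidence file and the previous entry, so later tampering with either becomes detectable. The sketch below illustrates the idea with assumed field names and a hypothetical filename; it is not a description of any court's actual system.

```python
import hashlib
import json
from datetime import datetime, timezone

def file_digest(path: str) -> str:
    """SHA-256 of an evidence file, computed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def custody_entry(path: str, handler: str, prev_hash: str) -> dict:
    """One link in a hash-chained custody log. Each entry commits to
    the file digest and the previous entry, so tampering with either
    the file or the log history changes every downstream hash."""
    entry = {
        "file_sha256": file_digest(path),
        "handler": handler,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_entry_sha256": prev_hash,
    }
    serialized = json.dumps(entry, sort_keys=True).encode()
    entry["entry_sha256"] = hashlib.sha256(serialized).hexdigest()
    return entry

# The first link uses a fixed genesis value; later links chain on the
# previous entry's hash. "exhibit_a.mp4" is a placeholder path.
first = custody_entry("exhibit_a.mp4", "Registrar", "0" * 64)
second = custody_entry("exhibit_a.mp4", "Forensic examiner",
                       first["entry_sha256"])
```

Verifying a chain means recomputing each file digest and each entry hash in order; any mismatch pinpoints where the record diverged.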
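For data minimization, identifiers can be masked before text ever reaches an external model. The patterns below are hypothetical placeholders; a real deployment would rely on vetted PII-detection tooling and jurisdiction-specific identifier formats.

```python
import re

# Hypothetical identifier patterns; real deployments should use vetted
# PII-detection tooling and jurisdiction-specific formats.
PATTERNS = {
    "case_number": re.compile(r"\b\d{2,4}/[A-Z]{1,4}/\d{2,6}\b"),
    "national_id": re.compile(r"\b\d{13}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders
    before the text is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Case 123/CR/2024 filed by juror jane.doe@example.com"))
# -> "Case [CASE_NUMBER] filed by juror [EMAIL]"
```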

Shared direction

The convening closed with common ground: AI can support judicial reform, but only with governance, transparency, and continuous dialogue. Courts that move deliberately (pilot, test, explain, revise) will protect independence and fairness while improving service to the public.

For reference frameworks, see the UNESCO Recommendation on the Ethics of AI and the OECD AI Principles.

If your court is planning AI literacy programs for judges and staff, you can explore curated learning options by job function here: Complete AI Training - Courses by Job.

