AI marks a civilisational shift: Kerala Chief Justice calls for guardrails as deepfakes threaten judicial integrity

AI is a civilisational shift, the Kerala High Court Chief Justice said; use it with principled oversight that protects the Constitution. Set guardrails: disclosure, evidence checks and audits.

Categorized in: AI News, Legal
Published on: Feb 22, 2026

AI as a "civilisational shift": What the bench expects from the bar and the state

At a training programme for government pleaders hosted by the National University of Advanced Legal Studies (NUALS) in Kochi, Kerala High Court Chief Justice and NUALS Chancellor S. Soumen Sen called artificial intelligence a "civilisational shift." His ask was clear: adopt AI with principled legal oversight that protects constitutional values.

He framed the moment bluntly: law officers sit at the junction of executive power and constitutional accountability, and their duty is to ensure state authority operates within the four corners of the Constitution, especially as AI tools enter research, drafting and litigation strategy.

Why this matters for government pleaders and law officers

  • Constitutional guardrails: Every AI-assisted decision or filing by the State must respect legality, due process and proportionality.
  • Duty of candour: If AI aids your work, make accurate representations to the court on sources, methodology and reliability.
  • Procurement and policy: Government use of AI needs documented risk assessments, audit trails and clear accountability.
  • Competence: AI can speed research and drafting, but you remain responsible for factual accuracy, citations and ethical use.

Clear risks to judicial integrity

Deepfakes, synthetic evidence and fabricated records can corrode trust in proceedings. The risk isn't theoretical; it's operational: chain of custody, authenticity and reliability are now front-line issues.

  • Strengthen electronic evidence protocols under Section 65B of the Indian Evidence Act and use Section 45A experts when needed.
  • Require source verification, cryptographic hashes, metadata preservation and independent forensic checks for digital media.
  • Seek protective orders addressing generative media, disclosure of creation tools and model settings where relevant.
  • Push for sanctions and adverse inferences where parties tender manipulated or AI-fabricated material.
  • Issue departmental SOPs so police and administrative wings log collection, transformations and review steps for media evidence.
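The hashing and custody-logging steps above can be sketched in a few lines. This is a minimal illustration, not a prescribed forensic standard; the function names and the JSON-lines log format are assumptions:

```python
import hashlib
import json
import os
from datetime import datetime, timezone

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming so large media files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def custody_entry(path: str, actor: str, action: str) -> dict:
    """Record one chain-of-custody event: who handled the file, when, and its current hash."""
    return {
        "file": os.path.basename(path),
        "sha256": sha256_of_file(path),
        "size_bytes": os.path.getsize(path),
        "actor": actor,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def append_to_log(log_path: str, entry: dict) -> None:
    """Append-only JSON-lines log: a later hash mismatch signals tampering or corruption."""
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
```

Re-hashing the file at each handover and comparing against the logged digest is what makes the chain of custody independently verifiable.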

Practical actions for your office this quarter

  • Adopt an AI-use policy: permitted tools, prohibited uses, human review requirements and logging. Include a disclosure standard for when AI assists substantive work.
  • Publish an evidence authentication playbook for images, audio, video and documents: acquisition, hashing, chain-of-custody forms and 65B certification templates.
  • Update discovery checklists to cover prompts, model versions, training data provenance, sampling outputs and red-team notes for AI-generated content.
  • Secure workflows: redact sensitive data before model use, prefer on-prem or vetted platforms, and gate outputs behind human verification.
  • Run training drills on spotting synthetic media and verifying citations; include mock hearings on admissibility and reliability.
  • Set vendor clauses for public-sector AI: audit rights, data retention limits, bias and performance testing, incident reporting and indemnities.
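The "redact before model use" step above can be sketched as a pre-processing filter. The patterns below are illustrative assumptions only; a real deployment needs a vetted PII taxonomy and would catch far more than these three categories:

```python
import re

# Illustrative patterns only: email addresses, 10-digit phone numbers,
# and Aadhaar-style 12-digit identifiers written in groups of four.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{10}\b"),
    "AADHAAR": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive spans with labelled placeholders before any model call."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The placeholder labels also make it easy to audit, after the fact, what categories of data were stripped from a given prompt.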

Guardrails for courts and tribunals

  • Issue standing orders on AI-assisted submissions: disclosure, citation verification and responsibility for errors.
  • Adopt reliability factors for AI-derived evidence: methodology transparency, validation studies, error rates, expert competence and independent replication.
  • Standardise digital evidence checklists: 65B compliance, chain-of-custody, tool logs, and where apt, appointment of a neutral expert under Section 45A.

Principled oversight: the test is constitutional

Oversight isn't anti-innovation; it's how we keep faith with the Constitution. Focus on necessity, proportionality, transparency and the right to be heard, especially where AI touches liberty, benefits, entitlements or blacklists.

  • Run pre-deployment risk assessments for any State AI system affecting rights, and publish summaries.
  • Name an accountable officer for each AI system; log decisions, data sources and model updates.
  • Enable independent audits, error reporting and timely remedies where AI contributes to a wrong decision.
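The logging and accountability duties above can be sketched as an append-only decision record. The `AIDecisionRecord` type, its field names, and the system name are assumptions for illustration, not a mandated schema:

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One auditable entry per AI-assisted decision (all names illustrative)."""
    system: str               # e.g. a hypothetical "benefit-eligibility-screener"
    model_version: str        # pin the exact model so decisions can be replayed
    input_summary: str        # redacted summary, never raw personal data
    output_summary: str
    accountable_officer: str  # the named officer responsible for this system
    human_reviewed: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, log_path: str) -> None:
    """Append to a JSON-lines file that independent auditors can read without special tooling."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Keeping the log append-only and naming an accountable officer in every record is what turns "enable independent audits" from a slogan into a checkable artefact.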

Context and convenors

The workshop at NUALS was positioned as a timely response to global shifts linked to AI and the Fourth Industrial Revolution. It was organised by the MK Damodaran International Centre for Excellence in Law, with NUALS Vice-Chancellor G. B. Reddy presiding.

The message was unambiguous: skill up, set guardrails and protect the adjudicatory process.

