Maharashtra taps IIT Bombay for speech AI to flag suspected illegal Bangladeshi nationals and Rohingyas, 60% accuracy claim sparks political row

Maharashtra and IIT Bombay are piloting a speech-based AI tool to flag suspected illegal entrants, with reported accuracy of about 60%. Officials promise safeguards, audits, and a phased rollout.

Published on: Jan 25, 2026

Maharashtra, IIT Bombay Build AI Tool To Flag Suspected Illegal Immigrants: What Government Teams Should Prepare For

The Maharashtra government has partnered with IIT Bombay to test a language-based AI tool that helps flag suspected illegal Bangladeshi nationals and Rohingyas during field interactions.

The state's IT department is leading a Rs 3-crore effort that analyses speech patterns, tone, and linguistic usage. According to Chief Minister Devendra Fadnavis, three months of testing have shown about 60 per cent reliability. He said the goal is to push reliability higher in the next five to six months before wider deployment.
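
What "about 60 per cent reliability" would mean in the field depends heavily on how often the people screened are actually illegal entrants. The sketch below is illustrative arithmetic only: it assumes "reliability" translates into roughly 60% sensitivity and 60% specificity, and assumes a 5% prevalence among those screened; none of these figures come from the government or IIT Bombay.

```python
# Illustrative arithmetic only: every figure here is an assumption, not official.

def screening_outcomes(population: int, prevalence: float,
                       sensitivity: float, specificity: float) -> dict:
    """Expected counts when a screening tool is applied to a population."""
    actual_positive = population * prevalence
    actual_negative = population - actual_positive

    true_positives = actual_positive * sensitivity
    false_positives = actual_negative * (1 - specificity)

    flagged = true_positives + false_positives
    precision = true_positives / flagged if flagged else 0.0
    return {
        "flagged": round(flagged),
        "false_positives": round(false_positives),
        "share_of_flags_that_are_wrong": round(1 - precision, 2),
    }

# 10,000 field interactions, assumed 5% prevalence, and 60% treated as both
# sensitivity and specificity.
print(screening_outcomes(10_000, 0.05, 0.60, 0.60))
# -> {'flagged': 4100, 'false_positives': 3800, 'share_of_flags_that_are_wrong': 0.93}
```

Under assumptions like these, most flags would be false positives, which is why the document-based verification described below remains the decisive step.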

How it's intended to work

  • Initial screening: The tool provides a preliminary signal based on spoken language cues during police or field interactions.
  • Not a final decision: It supports, but does not replace, full document-based nationality verification by the police (one way to encode this rule is sketched after this list).
  • Operational aim: Fadnavis said the government seeks to identify and deport all Bangladeshis staying illegally in Mumbai, following proper investigations.
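
One way a deployment could hard-wire the "supports, but does not replace" rule is to treat the model score as advisory metadata that never authorises action on its own. The sketch below is a hypothetical illustration of that design choice; the field names and statuses are assumptions, not details of the actual system.

```python
# Hypothetical data model: the model score is advisory metadata and never
# authorises action by itself. Field names and statuses are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ScreeningRecord:
    interaction_id: str
    officer_id: str
    model_score: float                      # preliminary language-based signal only
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    # May only be set after full document-based nationality verification.
    verified_outcome: Optional[str] = None  # e.g. "documents_verified", "referred"

    def actionable(self) -> bool:
        """Action requires a recorded verification outcome, not just a score."""
        return self.verified_outcome is not None

record = ScreeningRecord("demo-001", "unit-12", model_score=0.72)
assert not record.actionable()  # a score alone never authorises action
```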

Officials note that many illegal entrants obtain fraudulent Indian documents after entering via West Bengal, which complicates identification. The government has also moved to secure land for detention centres; the Brihanmumbai Municipal Corporation (BMC) has already allotted land.

Governance and risk controls for agencies

  • Clear legal footing and SOPs: Use the model only for preliminary screening. No coercive action should be based solely on AI output. Keep audit logs for each use.
  • Accuracy and fairness: Dialect variation across Bengali, Hindi, Marathi, and mixed speech can trigger false positives. Require third-party testing, calibration on local data, and periodic performance reports.
  • Data protection: Limit collection to what is necessary, define retention periods, and secure audio data. Align practice with the Digital Personal Data Protection Act, 2023.
  • Human oversight: Ensure trained officers review outputs, involve interpreters when needed, and document reasons for subsequent action.
  • Grievance and review: Provide a channel to contest screening outcomes and a rapid supervisory review for contested cases.
  • Transparency: Publish aggregate metrics (hit rates, false positives, officer feedback) without exposing operational details; a minimal computation over audit logs is sketched after this list.
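
The audit-log and transparency points above connect naturally: if every use is logged together with the eventual outcome of document-based verification, the aggregate metrics fall out of the log. The sketch below is a hypothetical illustration; the log fields and statuses are assumptions, not the system's actual schema.

```python
# Hypothetical sketch: deriving aggregate transparency metrics from per-use
# audit log entries. Log fields are assumptions, not the actual schema.

audit_log = [
    # model flag plus the outcome of later document-based verification
    {"flagged": True,  "verified": "cleared"},
    {"flagged": True,  "verified": "confirmed"},
    {"flagged": False, "verified": "cleared"},
    {"flagged": True,  "verified": "pending"},
]

def aggregate_metrics(entries: list[dict]) -> dict:
    resolved = [e for e in entries if e["verified"] != "pending"]
    flags = [e for e in resolved if e["flagged"]]
    confirmed = sum(1 for e in flags if e["verified"] == "confirmed")
    return {
        "uses_logged": len(entries),
        "resolved": len(resolved),
        "flag_rate": round(len(flags) / len(resolved), 2) if resolved else None,
        "false_positive_share_of_flags": round(1 - confirmed / len(flags), 2) if flags else None,
    }

print(aggregate_metrics(audit_log))
# {'uses_logged': 4, 'resolved': 3, 'flag_rate': 0.67, 'false_positive_share_of_flags': 0.5}
```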

Deployment notes for department heads

  • Pilot first: Start with limited precincts and structured A/B comparisons against existing processes (a comparison sketch follows this list).
  • Field readiness: Specify device needs (on-device vs. secure cloud inference), offline capability, language support, and latency targets.
  • Training: Build short, scenario-based modules for officers to interpret model outputs, avoid profiling, and follow evidence-led verification.
  • Inter-agency workflows: Coordinate with police units, with the BMC on detention logistics, and with the Ministry of Home Affairs (MHA) on due-process procedures under the Foreigners Act.
  • Independent oversight: Constitute a review group with technical, legal, and civil liberties expertise to audit outcomes.
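
For the structured A/B comparison in the first bullet, the simplest design is to compare verification outcomes in precincts using the tool against precincts following the existing process. The sketch below uses invented precinct numbers purely to show the shape of the comparison.

```python
# Invented numbers purely to show the shape of an A/B pilot comparison:
# precincts using the tool vs. precincts following the existing process.

pilot_results = {
    # arm -> {precinct: (referrals sent to document verification, confirmed)}
    "ai_assisted": {"A-1": (40, 6), "A-2": (35, 4)},
    "baseline":    {"B-1": (30, 5), "B-2": (28, 4)},
}

def confirmation_rate(precincts: dict) -> float:
    referrals = sum(r for r, _ in precincts.values())
    confirmed = sum(c for _, c in precincts.values())
    return confirmed / referrals if referrals else 0.0

for arm, precincts in pilot_results.items():
    print(f"{arm}: {confirmation_rate(precincts):.1%} of referrals confirmed")
# ai_assisted: 13.3% of referrals confirmed
# baseline: 15.5% of referrals confirmed
```

A real pilot would also track officer time per interaction, the false-positive burden on residents, and grievance volumes, not just confirmation rates.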

Political and accountability context

The Chief Minister has announced land acquisition plans for detention centres, backed by BMC allotments. The Congress party has questioned deportation claims; spokesperson Sachin Sawant has asked the government to publish figures if record deportations have occurred. Expect continued scrutiny of accuracy, due process, and outcomes.

Useful references

  • Responsible AI guidance: NITI Aayog's "Responsible AI for All" approach document offers a practical framework for risk and oversight.
  • Technology partner: IIT Bombay.

Upskilling for public-sector teams

If your department is planning AI pilots or oversight functions, consider short courses to build a common baseline across legal, operational, and data teams. A curated starting point is here: AI courses by job role.

