Vera Institute calls for strict safeguards on AI use in criminal justice and immigration systems

Facial recognition wrongly identified Robert Williams as a robbery suspect; he was arrested in front of his daughters and jailed overnight. The Vera Institute now urges strict safeguards before AI tools are used in criminal justice.


Criminal Justice Agencies Need Safeguards Before Deploying AI Systems

Artificial intelligence is spreading through criminal justice and immigration systems despite documented risks of errors and bias. The stakes are too high for experimental tools that can send innocent people to prison.

Robert Williams was arrested and jailed after facial recognition technology wrongly matched him to a robbery suspect using grainy surveillance footage and his old driver's license photo. Detroit police obtained an arrest warrant based on what Williams later described as "an out-of-focus image of a large Black man in a baseball cap that a faulty algorithm had determined was me." He was handcuffed in front of his young daughters and spent a night on a jail cell floor before an officer admitted the computer had made a mistake.

The Vera Institute of Justice developed five accountability principles to guide how AI should be deployed safely and transparently in these systems. Karen Tan, Vera's director of innovation and strategy, said the problem is straightforward: "In the criminal justice field, unchecked AI can destroy people's lives."

Where AI Systems Fail

Facial recognition produces higher rates of false matches for Asian, Black, and Native American faces than for white faces, according to testing that includes the National Institute of Standards and Technology's benchmark studies.

Predictive risk assessment tools trained on historical crime data reinforce long-standing patterns of overpolicing in marginalized communities. These systems amplify existing racial inequities rather than correct them.

License plate readers misidentify ordinary cars as stolen vehicles, leading to innocent drivers being stopped at gunpoint and arrested.

Bail, sentencing, and parole recommendation systems operate as "black boxes": attorneys cannot scrutinize or challenge how these algorithms reach their conclusions.

What Responsible Deployment Requires

AI should only be used to shrink the footprint of mass incarceration and reduce unnecessary criminalization, not to expand surveillance or increase police deployment.

Before any implementation, agencies must weigh whether potential benefits outweigh the risks of biased outcomes, user errors, and lack of transparency. They must also verify data quality, train users thoroughly, and plan for ongoing maintenance.

Agencies must disclose their use of AI and explain the safeguards they've installed. Public visibility allows communities to monitor deployment, identify bias or errors, and hold officials accountable.

Human judgment must remain central. A designated person or entity must be responsible for the system's outputs and must perform regular checks for errors, bias, and ethical compliance.

Williams told California lawmakers that police treated his facial recognition match as proof of guilt rather than an investigative lead. "They should have collected corroborating evidence such as an eyewitness identification, cell phone location data or a fingerprint," he wrote.

Tan emphasized the core problem: "Unchecked AI entrenches existing biases in the system. This makes the justice system more unjust at scale."

As AI adoption spreads, safeguards remain critical to prevent harm to innocent people and ensure technology serves the public good rather than undermining it.

