Las Vegas police AI system raises accountability questions
Las Vegas police are rolling out an artificial intelligence system designed to streamline operations, but privacy advocates warn the technology could erode constitutional protections and create accountability gaps that are difficult to address.
Clark County Sheriff Kevin McMahill announced the K.V.N. Project - Knowledge, Value and Network - during the department's February address. The system will connect separate police technology platforms to improve information flow, McMahill said.
The Metropolitan Police Department is starting with administrative tasks, such as processing public records requests. Eventually, investigators will use AI to parse databases and construct event timelines - work that currently takes hours but could be completed in minutes, McMahill told the Las Vegas Review-Journal.
Privacy concerns outpace transparency
Chris Peterson, legal director of the American Civil Liberties Union of Nevada, said the deployment raises fundamental questions about what data Metro collects and how it uses that information. "To be frank, LVMPD should be concerned about the constitutionality of what they're doing," Peterson said.
Peterson flagged a specific problem: accountability. It's straightforward to hold an individual officer responsible for a biased report. But how do you hold a department accountable for arrests or stops initiated by an AI system?
Beryl Lipton, a journalism professor at the University of Nevada, Reno, and investigative researcher at the Electronic Frontier Foundation, added that machine learning models typically reflect biases in their training data. This could cause police AI to inadvertently target certain groups.
AI systems also produce misidentifications and what researchers call "hallucinations" - confident false outputs. "That's something that we know artificial intelligence does all the time," Lipton said.
Predictive policing and mass surveillance
Lipton authored a chapter on AI in policing for the American Bar Association's 2024 State of Criminal Justice report. She wrote that AI can enable police to conduct "mass privacy invasion" and make inequalities in policing more routine.
Las Vegas police have already expanded surveillance through drone programs and real-time crime centers. Those systems depend on infrastructure - networks of video cameras - that can sweep up innocent bystanders and infringe on their privacy.
Predictive policing presents another risk. This approach uses crime trends to deploy officers strategically to prevent crimes. But it can reinforce existing patterns: if a neighborhood has been over-policed, the AI will identify it as a crime hotspot and send more officers there, perpetuating the cycle.
"When you feed that into a machine and ask the machine where the crimes are going to occur, it's just going to spit out the places that you have previously identified crimes," Lipton said. "That's not really a great way to police if you want to police fairly."
Peterson acknowledged that AI could hold officers accountable for misconduct. But he warned that departments could also use AI to justify mass surveillance in the name of public safety. "It's not going to be limited to administrative processes," he said.
McMahill pledges transparency; policies remain unclear
During a March 5 interview, McMahill acknowledged privacy concerns and said transparency is essential. He emphasized that the K.V.N. Project's self-learning capability will reduce administrative work.
"I want to be transparent about what it is that we're doing and I want the community to understand what we're doing," McMahill said.
In the absence of state legislation governing police use of AI, Lipton and Peterson said, Metro should publish its AI policies and procedures publicly. This would let residents know what data is being collected about them in public spaces.
Departments should also establish clear accountability measures. "There has to be a human being who is being held accountable for these decisions," Lipton said.
When asked whether Metro has established AI guidelines, McMahill did not respond directly. A police information officer was not immediately available to provide details about the department's AI policies.