Legal Concerns Over Police Use of AI Chatbots for Crime Report Drafting
Oklahoma City police Sgt. Matt Gilmore recently tested an AI tool that generated a first draft of a crime report in just eight seconds. Using audio and radio chatter captured by his body camera, the AI produced a detailed narrative that Gilmore described as “better than I could have ever written” and “100% accurate.” The report even included details he didn’t remember hearing, such as the color of the car suspects fled in.
This technology, developed by Axon—the company behind Tasers and many police body cameras—leverages the same generative AI model as ChatGPT. It’s being piloted by several police departments to save officers time on report writing, a task many find tedious. However, this innovation has sparked significant legal and ethical concerns.
Balancing Efficiency and Accountability
Axon’s CEO Rick Smith highlighted that officers prefer focusing on police work rather than paperwork. The AI product, called Draft One, has received positive feedback from users. Despite this, Smith acknowledged concerns that prosecutors have raised about reports authored primarily by AI. District attorneys want to ensure officers remain responsible for the content, because officers must testify in court about the events described.
The fear is that police officers might defer too much to AI-generated content, potentially complicating legal proceedings if officers claim, “The AI wrote that, I didn’t.” This raises questions about the authenticity and reliability of reports in criminal cases.
AI in Policing: A Broader Context
AI is not new to law enforcement. Agencies already use it for license plate recognition, facial identification, gunshot detection, and crime prediction. These tools have been controversial due to privacy and civil rights concerns, prompting some legislative safeguards. However, AI-generated police reports are a newer development with few established guidelines.
Community activists like Aurelius Francisco express deep worry about the implications. Francisco pointed out that the same company supplying Tasers now provides AI tools that could make surveillance and harassment easier, disproportionately affecting Black and brown communities. The automation of reports might simplify police work but could worsen systemic issues.
Limited Use and Varied Adoption
In Oklahoma City, AI-generated reports are currently limited to minor incidents that do not lead to arrests or serious charges. Police Captain Jason Bussert emphasized restrictions on using the tool for felonies or violent crimes. Conversely, other cities like Lafayette, Indiana, allow AI drafts on all case types, with officers embracing the technology.
Some departments have found that the AI struggles in noisy environments. In Fort Collins, Colorado, for example, patrols in the downtown bar district produce audio too cluttered for accurate transcription.
Technical Challenges and Ethical Cautions
Axon initially experimented with computer vision to summarize video footage but halted development due to concerns about insensitivity and bias. The current focus remains on audio transcription, with the AI tuned to limit “hallucinations” or fabricated details common in general-purpose AI models.
Noah Spitzer-Williams of Axon explained that their system uses the same technology as ChatGPT but with custom controls to prioritize factual accuracy over creative language. This is critical in ensuring reports do not invent or distort facts.
Legal Implications and the Need for Public Debate
Legal scholars urge caution. Professor Andrew Ferguson of American University highlighted the risk that officers might become less careful in writing reports if they rely too heavily on AI. Since police reports can determine whether a person loses their liberty, their accuracy and accountability are paramount.
Human-generated reports have flaws, but it remains unclear if AI-generated ones are more or less reliable. Ferguson calls for public discussion about the benefits and risks before widespread adoption.
Changing Police Behavior and Future Use
Some officers report that AI prompts them to narrate events more clearly during incidents, improving the quality of captured evidence. Captain Bussert expects officers will grow increasingly verbal to help AI generate better reports.
After processing a traffic stop video, Sgt. Gilmore noted that the AI produced a fully accurate, narrative-style report in seconds, requiring no edits. Officers using the tool must disclose AI assistance by checking a box on the final report.
Conclusion
AI-generated police reports promise efficiency gains but raise serious legal and ethical questions. Ensuring officers remain accountable and preventing AI biases or inaccuracies from affecting justice are critical challenges. As this technology spreads, clear policies and open discussions will be essential.