Axon's AI-Driven Police Reporting Tool Sparks Serious Concerns
In April 2024, Axon, the leading American police technology company best known for body cameras and Tasers, launched Draft One — AI-powered software that transforms body camera footage and audio into written police reports. Axon now supports over 5,000 police departments with cloud-based evidence management and has grown into a $50 billion law enforcement technology company.
Draft One integrates with Axon’s body cameras and evidence storage systems to generate reports with minimal human input. More than 20 police departments have tested the software. However, civil rights groups like the Electronic Frontier Foundation (EFF) and ACLU raise alarms about the use of generative AI in law enforcement. Their concerns focus on AI’s tendency toward racial and gender bias and the risk of “hallucinations” — false or fabricated information appearing in AI-generated texts. One police captain noted, “I can almost guarantee [AI] reports have been used in plea deals.”
AI Safeguards Often Disabled in Practice
Axon claims to have fine-tuned its AI system, a custom version of OpenAI’s ChatGPT, to reduce errors and hallucinations. The company also built in safety features, such as deliberately inserting errors into drafts to confirm officers actually read them, and requiring officers to edit reports before finalizing them — measures intended to guarantee human review.
Yet, records show many police departments deactivate these safeguards. Documents obtained through public records requests reveal that departments often switch off transparency features, including headers or footers marking reports as AI-generated. For example, Lafayette Police Department in Indiana removed these markers despite Axon’s intention to promote transparency.
EFF highlights that Axon’s software does not track which parts of a report were AI-written, complicating efforts to audit or verify these documents. Lafayette’s emails also suggest AI-generated reports have been used in plea agreements. When asked, the department claimed they could not identify which cases involved AI reports.
Andrew Ferguson, a law professor at George Washington University, emphasizes the risks: “Judges, prosecutors, defense lawyers, and defendants assume reports come from officers’ sworn memories. When AI inserts inaccuracies, it undermines this trust.”
Widespread Deactivation of Transparency Features
Lafayette is not alone. Fort Collins, Colorado’s police also disabled AI-identifying footers. Ferguson calls this an “unjustified risk,” stressing that transparency is critical when deploying new, untested technology. He warns that AI hallucinations could result in officers unknowingly submitting false information in court.
Research consistently shows generative AI models exhibit biases against women and minority groups, a concern echoed in a 2024 ACLU report. Such biases risk deepening existing inequalities in policing and eroding community trust.
Company and Legal Responses
Axon says its development follows guiding principles to “serve as a force for good.” Still, its SEC filings acknowledge potential risks: AI failures could expose users to operational and legal challenges, especially in sensitive law enforcement contexts. The filings also admit possible biases in AI datasets, aligning with concerns raised by the ACLU.
Despite available features designed to ensure officer review of AI-generated content, most departments have turned these off. Among seven departments responding to records requests, only South Jordan, Utah, maintained mandatory officer input settings — though even there officers could bypass the requirement.
Scope of AI Report Usage and Cost Barriers
Draft One is used for a wide array of incidents, including serious felonies. Most departments do not restrict the software’s use by crime type. For instance, South Jordan generated over 900 AI-assisted reports on various cases between September 2024 and April 2025. Fresno Police Department, California, used the tool for more than 3,000 incidents within a similar timeframe and plans to expand its use in cooperation with local prosecutors.
Utah passed the first state law requiring police to disclose AI use. California and Seattle are considering similar measures. Utah State Senator Stephanie Pitcher, a former defense attorney, stresses the importance of transparency for ensuring fair trials. Without clear identification of AI-generated reports, defense teams may face delays or challenges in obtaining critical evidence, impacting defendants’ rights.
Caroline Sinders, founder of Convocation Research and Design, highlights the troubling design choice of allowing accountability features to be disabled. “Why make these settings optional when dealing with crucial case documents?” she asks.
Effectiveness and Adoption Challenges
Reports from Anchorage, Alaska, and New Hampshire indicate Draft One does not significantly reduce officers’ workload. Additionally, the high cost of operating AI models—tens of thousands of dollars annually—limits enthusiasm among law enforcement personnel.
Axon emails reveal low adoption rates in Lafayette, with less than 25% of eligible officers using the tool, despite departmental support. The company acknowledges these “adoption challenges” and the expense of running AI models.
Key Takeaways for Government Professionals
- AI-generated police reports can introduce bias and inaccuracies, raising serious legal and ethical issues.
- Transparency features marking AI involvement are often disabled, undermining accountability.
- Human oversight mechanisms designed to catch errors are frequently turned off, increasing risks.
- Legislative action is emerging to require disclosure of AI use in law enforcement documentation.
- Cost and officer resistance may limit the practical benefits of AI reporting tools.
For government officials overseeing law enforcement technology, it’s crucial to demand transparency, maintain rigorous human review, and weigh AI’s risks against its intended efficiencies. Ensuring that AI tools do not compromise justice or community trust must remain a top priority.
To learn more about AI applications and responsible use in government settings, explore Complete AI Training's government-focused courses.