IACP 2025 Quick Take: Buy AI like a lawyer - a cautious playbook for police agencies
AI can boost efficiency in policing, but it also opens the door to risk: opaque models, bias, due process questions and vendor lock-in. The message from IACP 2025 was simple: move, but move carefully.
Most law enforcement AI comes packaged inside third-party software with limited visibility into how it works. That opacity runs against the grain of public trust and accountability - and that's where legal teams must lead.
Procurement and policy: slow is smooth, smooth is fast
Legal advisor Don Zoufal put it plainly: "You're not buying AI. You're buying a product that has AI in it - or will have AI in the future." Treat every software purchase as an AI purchase, even if the vendor downplays it.
Generative systems can create new outputs and new liabilities. They're still largely untested in operational policing. Policy, training and contract language have to account for that uncertainty up front.
What legal should lock into every AI-enabled contract
- Scope and purpose: Authorized uses, prohibited uses and change-management for new features or models.
- Model transparency: Versioning, material-change notice, a plain-language description of how the system makes predictions, and disclosure of known limitations.
- Testing deliverables: Bias and accuracy test results, test data characteristics, and agency rights to validate independently before and after deployment.
- Audit rights: Process and artifacts for audits, logs, performance reports and third-party assessments on a defined cadence.
- Data rights and retention: What the tool can access, who owns outputs, retention/deletion timelines, and a ban on using agency data to train unrelated models without explicit approval.
- Privacy and security: Compliance with CJIS and applicable state privacy laws, incident reporting timelines, and breach remedies.
- Human-in-the-loop: Required human oversight for any decision that affects liberty, employment or services, plus documented fallback procedures.
- Records and discovery: Logging, exportability and retention practices that support public records, Brady/Giglio, and evidentiary standards.
- Accountability: Service levels, performance thresholds, remedies for failure, explainability obligations and a right to suspend or disable.
- IP and indemnity: No hidden training on third-party data that creates infringement risk; vendor indemnifies for IP, privacy and bias claims tied to the product.
- Exit strategy: Data return/secure deletion, cooperation on migration and fee controls on termination.
Oversight tools that make procurement safer
- AI inventory: Catalog every tool using AI, where it runs, what data it touches and who owns it.
- Contract risk assessments: A structured pre-procurement checklist before any RFP or renewal.
- Model and data sheets: Vendor-provided documentation of training data, intended use and limits.
- Architecture review board and AI committee: Cross-functional review for security, legal, operations and ethics.
- Testing and evaluation plan: Pilot gates, red-teaming, and post-deployment monitoring with clear metrics.
- Vendor reporting: Required periodic performance, bias and incident reports.
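The inventory and checklist ideas above can be made concrete in a simple data structure. The sketch below is purely illustrative: the record fields, flag wording and checks are assumptions for demonstration, not an official IACP or NIST schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a hypothetical agency AI inventory."""
    name: str
    vendor: str
    deployment: str                 # e.g. "vendor cloud" or "on-premises"
    data_accessed: list[str]        # data categories the tool touches
    owner: str                      # accountable agency owner
    human_in_loop: bool = False
    bias_test_results_received: bool = False

def pre_procurement_flags(tool: AIToolRecord) -> list[str]:
    """Return open risk items to resolve before an RFP or renewal."""
    flags = []
    if not tool.human_in_loop:
        flags.append("no documented human oversight for high-impact decisions")
    if not tool.bias_test_results_received:
        flags.append("vendor bias/accuracy test results not on file")
    if "CJIS" in tool.data_accessed and tool.deployment == "vendor cloud":
        flags.append("verify CJIS compliance for cloud-hosted criminal justice data")
    return flags

# Hypothetical example entry
tool = AIToolRecord(
    name="license-plate reader analytics",
    vendor="ExampleVendor",
    deployment="vendor cloud",
    data_accessed=["CJIS", "video"],
    owner="Patrol Division",
)
print(pre_procurement_flags(tool))
```

Even a spreadsheet with these columns, reviewed before every RFP or renewal, captures most of the value; the point is a structured, repeatable gate rather than any particular tooling.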
Use the guidance already on the shelf
Agencies don't need to start from zero. The IACP Technology Policy Framework, NIST's AI Risk Management Framework and federal playbooks outline practical controls for acquisition, testing and monitoring.
Start with NIST's AI RMF and its profile for generative systems. It gives a common vocabulary for risk, measurement and governance that contracts can point to directly.
Panelists also noted a gap: broad federal regulation remains thin. State action is accelerating - 45 states and territories introduced AI bills in 2024, and at least 10 passed laws affecting government AI use. Track state-specific requirements in your procurement templates.
Field notes
Chicago Police Department: AI-enhanced training is showing promise through VR/AR. Dynamic role players that adjust in real time and automated performance feedback can improve realism while controlling cost - with the right oversight and data safeguards.
Texas Department of Public Safety: With an inventory of 50+ AI-enabled tools, Texas DPS built structured pre-procurement and vendor review steps. Their focus: clear use cases, data access and maintenance, user training, bias testing, and keeping a human in the loop. They also talk to peer agencies before signing anything - a simple way to avoid preventable mistakes.
Questions legal should insist on before buying
- What problem are we solving, and why is AI the right fit?
- What data will the system access, process or create - and who controls each category?
- Has the vendor tested for bias and accuracy, and will they share methods and results?
- How will performance be monitored, updated and audited over time - and who pays for that?
- Who will use the tool, and what training is required on both the product and policy?
- Is human oversight required for high-impact decisions, and how is it enforced?
- If the system fails or is wrong, what's the remedy and who is accountable?
Bottom line
AI procurement stresses traditional playbooks. Legal teams can steady the process with clear use limits, measurable testing, enforceable vendor obligations and continuous oversight. Educate stakeholders, formalize policy and use structured tools - that's how you adopt AI while protecting public trust.