Computer Says No: WMP Blocks FOI on Copilot Prompts Amid IOPC Investigation

West Midlands Police used Copilot to justify a fan ban, then refused FOI disclosure citing s31(1)(g). With an IOPC probe ongoing, the real test waits until after it wraps.

Categorized in: AI News, Legal
Published on: Feb 16, 2026

Computer Says No: The Legal Battle for WMP's Secret AI Chat Logs

West Midlands Police reportedly relied on Microsoft Copilot to produce intelligence that led to a ban on Maccabi Tel Aviv fans attending a Europa League match in Birmingham. A request under the Freedom of Information Act sought the prompts used and Copilot's responses, raising questions about prompt engineering and disclosure. WMP refused disclosure under the Section 31(1)(g) law enforcement exemption. That triggers a familiar question for lawyers advising on AI in legal practice: is there a path through the Public Interest Test right now?

The Independent Office for Police Conduct has confirmed that the "use of AI, including the prompts used and responses generated," will be part of its investigation. With an active misconduct inquiry, a Public Interest Test is likely to fail at this stage. Once the investigation and any related proceedings conclude, the balance may shift. Timing is doing most of the work here, and the case highlights broader governance and accountability issues for AI in government.

Why these chat logs matter

Prompts and outputs are not tech trivia; they are decision records. They speak directly to the lawfulness, reasonableness, and fairness of police action. For criminal process, they may carry disclosure relevance under the Criminal Procedure and Investigations Act 1996 (CPIA) and retention duties under Management of Police Information (MOPI) guidance. For public law and human rights claims, they go to justification and proportionality.

The exemption in play

WMP has pointed to Section 31(1)(g) FOIA. The provision protects information whose disclosure would prejudice the exercise of public authorities' functions for law enforcement. It is qualified: a Public Interest Test applies, and the analysis is highly context-specific. During a live IOPC investigation, prejudice arguments usually bite harder.

Section 31 FOIA (legislation.gov.uk)

Practitioner playbook: what to do now

  • Confirm scope and timing: Obtain written confirmation from the IOPC about the investigation's scope (including AI prompts/outputs) and indicative timelines. This frames both FOIA strategy and any pre-action steps.
  • Preservation notices: Seek confirmation that prompts, system messages, chat logs, and audit trails are preserved. Include vendor-side logs if accessible via contract. Ask for retention under CPIA/MOPI where relevant.
  • Policy and process angle: Request applicable AI/use-of-data policies, DPIAs, Law Enforcement Processing assessments (DPA 2018 Part 3), and any approval records for Copilot's operational use. Even if prompts are withheld, governance artefacts may not be.
  • Narrow, staged FOI requests: While chat content may be withheld now, request metadata: dates, systems used, role-based access, and redacted audit entries. These can often be disclosed without prejudicing investigations.
  • Parallel routes: Consider subject access (for affected individuals), disclosure in criminal proceedings, or civil disclosure where claims are contemplated. FOIA isn't the only door.
  • After the IOPC outcome: Refile for the full prompts and outputs, citing the completed proceedings to recalibrate the Public Interest Test. Build a targeted public interest case around transparency, accountability, and learning for future policing.
  • Vendor evidence: If Microsoft indicates it cannot reproduce the described behaviour, seek any correspondence and testing artefacts held by WMP. Ask whether reproduction attempts, model/version IDs, or jailbreak/guardrail settings were recorded.

What to ask for (precisely)

  • Prompt materials: User prompts, system instructions, internal policies injected as context, and any few-shot examples provided to Copilot.
  • Model outputs: The exact text returned, including any confidence notes, citations, or links the system produced.
  • Operational metadata: Timestamps, user IDs/roles (redacted if necessary), model/version identifiers, content filters triggered, and audit logs.
  • Governance records: DPIA/LE processing assessments, procurement/approval documents, and internal guidance for AI-assisted intelligence work.

Risk and liability considerations

If an operational decision was materially influenced by AI output, the authority will need a clear line of reasoning showing independent assessment, policy compliance, and proportionality. Without the prompts and outputs, that chain is weaker. Expect challenges on procedural fairness, misdirection on material facts, and potential Equality Act issues depending on impact.

Where this likely lands

Right now, Section 31(1)(g) is a high hurdle. Once the IOPC process finishes, it becomes harder to argue that releasing historic prompts and outputs would prejudice law enforcement functions. That's the window to reopen FOIA with a strong public interest case and a tightly scoped request.

The takeaway for legal teams: secure preservation today, collect the governance paper trail, and calendar a refiling date post-investigation. The chat logs are not just tech artefacts; they are evidence, and they will shape how AI can be used in policing tomorrow.

Independent Office for Police Conduct

