Ensuring a Human Firewall: Ethical Standards for AI in Government and Police Communications
On July 2, 2025, the Westbrook (Maine) Police Department posted an AI-altered photo on Facebook following a drug seizure. An officer had used ChatGPT to add a department badge to the image, unaware that the tool would also distort other visual elements. The result included garbled text and missing objects and quickly drew public criticism. After initially denying that AI was involved, the department corrected the post with the original photo and pledged to stop using AI in its social media content.
This incident highlights the critical need for clear, enforceable AI policies in public safety agencies. As AI tools become more accessible, informal use without oversight can easily lead to errors that damage public trust.
The Stakes of AI Missteps in Public Safety Communication
Technology has long been a tool for improving government and policing efficiency. Today, generative AI is changing how agencies communicate and serve the public, and some are even exploring its use in emergency response. But with global AI investment expected to exceed $1 trillion by 2030, public agencies must weigh both the benefits and the risks carefully.
Police departments have started integrating media relations and social media policies into standard operating procedures (SOPs). Now, they must also develop governing documents for AI use—creating a human firewall to regulate and limit AI’s role before mistakes erode public confidence. Accreditation standards should include AI policy requirements to ensure responsible adoption.
At JGPR, a consultancy serving over 500 police departments across 17 states, the approach to AI is cautious and transparent. AI is never used to generate direct communication with the public. Instead, it assists internally by analyzing data, generating ideas, proofreading content, and suggesting story angles. The principle is clear: AI works with the public information officer (PIO), but the PIO always communicates directly with constituents and media.
A New Kind of Risk
Public trust is the cornerstone of police, fire, and government communication, and it breaks down when people feel they are interacting with machines rather than humans. The frustration is familiar from other sectors, where customers reach a chatbot instead of a real representative.
In government, that disconnect can undermine confidence in institutions, especially when people are seeking help with matters that affect their safety or well-being. AI’s potential to improve workflows is significant, but if taxpayers feel misled or kept in the dark about who—or what—is communicating with them, trust is lost.
Transparency is crucial. Using AI without clear disclosure risks lasting damage to an agency’s reputation.
Key Ethical Guidelines for AI Use
- AI must never be the sole author of constituent-facing communication.
- Human oversight is mandatory whenever AI interacts with the public.
- Constituents must always be notified when AI is involved in communications.
Drafts, Not Decisions
Rather than banning generative AI, agencies should treat AI tools like ChatGPT, Copilot, and Gemini as assistants that create first drafts. These drafts require thorough human review, editing, and fact-checking before release. AI should support, not replace, human judgment and direct communication in police, fire, EMS, or government settings.
To manage AI responsibly, divide policy into three categories: “Must,” “May,” and “Must Never.” Below is a proposed framework for police departments that can be adapted to fit specific needs.
We Must:
- Use AI transparently, clearly informing constituents when it’s employed via disclaimers, labels, or watermarks.
- Verify all claims, statistics, and facts generated by AI before sharing.
- Recognize that AI software is developed by private entities, often for-profit or foreign-owned, which shapes how data is handled and how responses are generated. Never input confidential information into AI systems.
- Understand that AI outputs are imperfect despite training through human feedback.
- Commit to thorough training on AI tools before live deployment to align them with agency priorities.
- Provide ongoing training for staff, as one-time sessions are insufficient.
- Assign clear human responsibility for any errors or omissions in AI-generated content.
We May:
- Use AI to draft initial versions of content, with the understanding these drafts require human editing and fact-checking.
- Employ AI to analyze large datasets like surveys or financial reports, while maintaining responsibility for verifying and interpreting the results.
We Must Never:
- Allow AI to publish communications directly to constituents without human review and approval.
- Use AI in critical, real-time public safety interactions that require empathy, urgency, and accountability.
- Communicate with constituents via AI without clearly disclosing its use.
- Replace human-written police, fire, or EMS incident reports with AI-generated content; any AI-assisted report must be reviewed and signed off by a human.
- Violate intellectual property rights by publishing AI-generated content that infringes on copyrights or trademarks.
This framework doesn’t block the use of AI but encourages careful, responsible integration. AI tools are marketed to streamline constituent interactions—from answering routine questions to managing service requests—but they require ongoing human oversight to ensure accuracy and maintain public trust.
In policing and government communications, the product is public safety and trust. These cannot be risked on untested AI systems in real-world environments.