Building Public Trust in AI: Ethical Standards for Police and Government Communication

An AI-edited photo posted by a Maine police department sparked public backlash, underscoring the need for clear AI policies in public safety. Human oversight and transparency are essential to maintaining trust.

Ensuring a Human Firewall: Ethical Standards for AI in Government and Police Communications

On July 2, 2025, the Westbrook (Maine) Police Department posted an AI-altered photo on Facebook following a drug seizure. An officer used ChatGPT to add a department badge but didn’t realize the tool would alter other parts of the image. The post showed distortions like garbled text and missing objects, which drew public criticism. After initially denying AI use, the department corrected the post and pledged to stop using AI in social media content.

This incident highlights the urgent need for clear, enforceable AI policies in public safety agencies. As AI tools become easier to access, informal, ungoverned use is no longer acceptable. Agencies must set ethical boundaries before a simple mistake becomes a major breach of trust.

The Stakes of AI Missteps in Public Safety Communication

Technology has long promised greater efficiency for government and policing, from spreadsheets to search engines. Now generative AI is changing how agencies communicate and interact with the public, and some are even considering it for emergency response. With global AI investment expected to exceed $1 trillion by 2030, adoption will only accelerate, and so will the stakes when these tools are used carelessly.

Police departments have recently integrated media relations and social media policies into their standard operating procedures. It’s time to add AI governance to that list, creating a "human firewall" that limits and controls AI’s role. Accreditation standards should include AI policy requirements to ensure responsible use.

One public relations consultancy working with over 500 police departments across 17 states takes a cautious and transparent approach. AI is never used for direct communications with the public but helps with tasks like data analysis, proofreading for style consistency, and generating content ideas. In short, AI assists public information officers (PIOs) but does not replace their direct communication with constituents and media.

A New Kind of Risk

Public trust is essential for police, fire, and government communication. That trust erodes when people feel they’re hearing from a machine instead of a person. Frustration with chatbots is common in customer service, but in government, the impact is greater because it affects confidence in public institutions.

AI can improve workflows, but if taxpayers feel misled or kept in the dark about who—or what—is communicating, trust suffers. Using AI without clear disclosure damages reputations and can cause long-term harm.

When drafting AI ethics policies, keep these non-negotiable points in mind:

  • AI must never be the sole author of constituent-facing communication.
  • AI must never communicate with the public without human oversight.
  • Constituents must always be informed when AI is part of the communication process.

Drafts, Not Decisions

AI shouldn’t be banned outright. Instead, large language models (LLMs) like ChatGPT, Copilot, or Gemini can be useful tools for generating first drafts. But every AI-generated draft must be reviewed, edited, and approved by a human. AI can support, but it can’t replace human judgment or communication in public safety and government.
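To make the draft-then-review workflow concrete, here is a minimal sketch in Python. It assumes the official OpenAI client library (`pip install openai`) with an API key in the environment; the model name, prompts, and traffic-stop example are illustrative, not any department's actual tooling. The point is the gate: the model only produces a draft, and a person edits, approves, and adds the disclosure before anything is published.

```python
# Minimal sketch of a "drafts, not decisions" workflow. Assumes the official
# OpenAI Python client with OPENAI_API_KEY set; all names and example text
# here are hypothetical. The review gate is the "human firewall": nothing
# reaches the public without explicit sign-off from a person.
from openai import OpenAI

client = OpenAI()


def draft_statement(incident_summary: str) -> str:
    """Generate a first draft only; the model's output is never published as-is."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft plain-language public statements for a police "
                    "department. Use only the facts provided; do not invent "
                    "names, numbers, or details."
                ),
            },
            {"role": "user", "content": incident_summary},
        ],
    )
    return response.choices[0].message.content


def human_review_gate(draft: str) -> str | None:
    """A human PIO must read, edit, and explicitly approve before release."""
    print("--- AI-GENERATED DRAFT (not for publication) ---")
    print(draft)
    edited = input("Paste edited text, or press Enter to keep the draft:\n")
    final = edited or draft
    approved = input("Publish this statement? (yes/no): ").strip().lower()
    return final if approved == "yes" else None


if __name__ == "__main__":
    draft = draft_statement(
        "Officers seized narcotics during a traffic stop on Main Street. "
        "No injuries were reported."
    )
    statement = human_review_gate(draft)
    if statement:
        # Disclosure per policy: label AI involvement in the published post.
        print(statement + "\n\n(Drafted with AI assistance; reviewed and "
              "approved by a public information officer.)")
```

Whatever model or vendor you choose, the control that matters is the approval step, not the draft generator.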

Consider framing your AI policies around what your agency must, may, and must never do with AI. Below is a suggested set of principles for police departments that you can adapt.

We Must:

  • Be transparent about AI use. Always declare when AI is involved using disclaimers, labels, or watermarks.
  • Verify all AI-provided facts, claims, and statistics. Human verification is crucial.
  • Recognize that AI tools are developed by private companies, sometimes foreign entities. Be cautious about data privacy and never input confidential information into AI systems.
  • Understand that AI outputs are not perfect despite ongoing training and human feedback.
  • Commit to learning how to use AI tools properly before deploying them live, ensuring alignment with your agency’s priorities.
  • Provide ongoing training for staff on AI technologies. One-time training is not enough.
  • Assign a named person who is accountable for AI errors. A human must answer for all AI-assisted content and decisions.

We May:

  • Use AI to draft initial content that is then reviewed and fact-checked by humans.
  • Leverage AI to analyze large datasets like surveys, financial reports, or ballot results, with human oversight on the analysis and conclusions.
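The second bullet can follow the same pattern in code. Below is a sketch assuming pandas and the OpenAI client, with a hypothetical survey.csv: the aggregates are computed locally, so raw records never leave the agency (in line with the data-privacy rule above), and the model only drafts a narrative for an analyst to verify against the source data.

```python
# Sketch of AI-assisted analysis under human oversight. Assumes pandas and
# the OpenAI client; "survey.csv" and its columns are hypothetical. Raw
# records stay local; only aggregate figures are sent to the model.
import pandas as pd
from openai import OpenAI

df = pd.read_csv("survey.csv")  # e.g., a community satisfaction survey

# Compute the numbers ourselves; the model narrates, it does not calculate.
summary = df.groupby("district")["satisfaction_score"].agg(["mean", "count"])

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize these aggregate survey results in plain language. "
                "Do not extrapolate beyond the figures given."
            ),
        },
        {"role": "user", "content": summary.to_string()},
    ],
)

# This narrative is a draft: an analyst verifies it against the raw data
# before it appears in any report or public release.
print(response.choices[0].message.content)
```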

We Must Never:

  • Allow AI to communicate directly with constituents without human review and approval.
  • Use AI in critical, real-time public safety interactions where urgency and empathy are required.
  • Communicate with constituents through AI without clear disclosure that they are interacting with software, not a person.
  • Replace human writing in police, fire, or EMS incident reports with AI-generated content. Humans must review and sign off on all reports, especially those used as evidence.
  • Violate intellectual property rights. Check AI-generated content for copyright or trademark infringement before publishing.

These guidelines aren’t meant to slow progress but to encourage responsible adoption of AI in government communications. AI tools can help streamline routine tasks, but they are not plug-and-play solutions. Without ongoing human oversight and clear ethical boundaries, AI risks compromising public trust.

Remember, police and public safety agencies don’t sell products—they provide security and safety. This is not an area for experimental AI use without strict controls.

For communications professionals looking to deepen their understanding of AI in their field, consider exploring relevant training courses at Complete AI Training.