Six-Fingered CT DEEP Post Exposes Connecticut's AI Growing Pains

Connecticut's six-fingered AI post shows how embedded AI features can trip up teams. Build live inventories, enterprise settings, human review, and clear data rules to protect trust.

Categorized in: AI News, Government
Published on: Sep 20, 2025

Six fingers, a lesson: What Connecticut's AI slip means for government teams

Connecticut's Department of Energy and Environmental Protection shared an AI-generated image that didn't pass the sniff test: six fingers on one hand, gibberish text on clothing, and a turkey in a blaze-orange vest. The post drew mockery and was pulled, then quickly replaced with a real photo - and a note that this isn't standard practice.

That small mistake signals a bigger truth. AI is now baked into tools your teams already use. Without clear guardrails, it will leak into public channels, operations, and decisions - whether you plan for it or not.

What happened - and what it signals

The state encourages staff to use generative AI with tight oversight to offload busywork and improve productivity. Monitoring that across 40+ agencies and 46,000 employees is hard. As CIO Mark Raymond put it, AI is "creeping in, in ways that are harder to detect and harder to raise awareness of."

The image came from Canva, a product that didn't start out with AI features - which is why it wasn't on last year's AI inventory. The takeaway: vendors keep adding AI features after procurement. Your inventory and risk reviews must be continuous, not one-and-done.

AI in use across Connecticut agencies

State law created a working group to oversee AI and requires an annual inventory of AI use. Agencies are already using AI to scan regulations, detect threats, and surface misinformation risks. Administrative Services is piloting ChatGPT and Microsoft Copilot in secure configurations, and Copilot chat is available to all CT.gov email users.

  • Regulatory review: Kira compares statutory language to industry filings for compliance checks.
  • Email threat defense: Abnormal Security flags and responds to phishing and other email attacks in real time.
  • Endpoint protection: CrowdStrike uses AI to visualize and stop cyber threats.
  • Election integrity: Pyrra spots potential misinformation about CT election laws on social platforms.
  • Workplace tools: Microsoft 365 apps, Teams, WordPress, and Zoom now ship with AI features, so they're on the inventory too.

The risk picture

AI features add new attack surfaces and failure modes. As Vahid Behzadan notes, AI systems are cyber systems - they bring their own vulnerabilities. Accuracy, bias, trustworthiness, and clear responsibility must be addressed, especially where services affect people's lives.

Data protection is non-negotiable. The state's framework asks: What data are we using? Where does it go? How is it protected? Some pilots guarantee data stays within Connecticut and out of model training - but that requires paid, enterprise configurations. As Raymond warned, if the tool is free, your data is the price. The state trains employees not to use consumer-grade tools for official work.

Guardrails that work for government

  • Live AI inventory: Maintain an evergreen list of tools with AI features, including those added post-procurement. Review quarterly (a minimal record sketch follows this list).
  • Vendor AI change tracking: Require vendors to notify you before enabling new AI features; default to opt-in, not opt-out.
  • Data minimization and isolation: Classify data; block high-risk data from generative tools; use tenant-isolated, no-training configurations.
  • Human oversight by default: Review outputs for facts, bias, and source integrity. Keep humans in the loop for public-facing content and decisions.
  • Public communications policy: No AI-generated people in official imagery without explicit review and disclosure. Establish visual QA checks (fingers, text artifacts, insignias).
  • Incident playbook: Define how to pull content, issue corrections, and audit root causes within hours.
  • Training and accountability: Train staff on prompt safety, data handling, and tool selection. Name owners for approvals and audits.
  • Risk framework alignment: Map use cases to a recognized standard like the NIST AI Risk Management Framework.
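
One way to keep that first guardrail - the live AI inventory - from going stale is to track it as structured records rather than a static spreadsheet. Below is a minimal sketch in Python; the AIToolRecord fields, the 90-day cadence, and the example entries are illustrative assumptions, not Connecticut's actual schema.

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    # Hypothetical record for one entry in a living AI inventory.
    # Field names are illustrative, not an official state schema.
    @dataclass
    class AIToolRecord:
        tool: str                                  # e.g., "Canva"
        owner: str                                 # accountable person or office
        ai_features: list[str] = field(default_factory=list)
        added_post_procurement: bool = False       # AI arrived after the contract was signed
        data_classes_allowed: list[str] = field(default_factory=list)
        last_reviewed: date = date(2025, 1, 1)

        def review_due(self, cadence_days: int = 90) -> bool:
            """Flag records that have gone a quarter without review."""
            return date.today() - self.last_reviewed > timedelta(days=cadence_days)

    inventory = [
        AIToolRecord("Canva", "Communications", ["image generation"],
                     added_post_procurement=True,
                     data_classes_allowed=["public"],
                     last_reviewed=date(2025, 3, 1)),
        AIToolRecord("Microsoft Copilot", "IT", ["chat", "drafting"],
                     data_classes_allowed=["public", "internal"],
                     last_reviewed=date(2025, 8, 15)),
    ]

    print("Review overdue:", [r.tool for r in inventory if r.review_due()])

A scheduled job that prints the overdue list each quarter is enough to keep the review cadence honest.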

Do this this week

  • Audit your team's toolset. List where AI features exist today (Office, Teams, design tools, CMS, email security) and who can enable them.
  • Lock in enterprise settings. Enable approved AI (e.g., Copilot chat) with tenant protections; block consumer chatbots on state devices. Review Microsoft's published privacy and data controls for Copilot before rollout.
  • Set a public-facing content rule: No AI imagery without editor review; require a pre-post checklist for artifacts and policy flags.
  • Define data rules in plain language: What can be put into AI tools, what can't, and which tools are approved for which data classes (see the sketch after this list).
  • Stand up a quick feedback loop: A single form or channel to report AI issues, with a named owner and 24-hour response SLA.
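
To make the data-rules item above unambiguous, the class-to-tool mapping can live somewhere both staff and auditors can read it. A minimal sketch in Python, assuming hypothetical data classes and tool labels rather than Connecticut's actual policy:

    # Illustrative mapping of data classes to approved AI tools.
    # Class names and tool labels are assumptions for this sketch, not official policy.
    APPROVED_TOOLS = {
        "public":       {"Copilot Chat (enterprise)", "Canva (reviewed imagery only)"},
        "internal":     {"Copilot Chat (enterprise)"},
        "confidential": set(),   # no generative AI tools approved
        "restricted":   set(),   # e.g., personal, health, or criminal justice data
    }

    def is_allowed(data_class: str, tool: str) -> bool:
        """Answer the plain-language question: can this data class go into this tool?"""
        return tool in APPROVED_TOOLS.get(data_class, set())

    print(is_allowed("public", "Copilot Chat (enterprise)"))        # True
    print(is_allowed("confidential", "Copilot Chat (enterprise)"))  # False

The same table can feed the pre-post checklist and staff training, so written policy and daily practice stay in sync.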

The upside - applied to resident outcomes

Used well, AI can scale translation, accessibility, and situational awareness during storms or emergencies. It can help residents find programs without bouncing across agencies. As Raymond framed it: "Dear Connecticut, I have a need right now. Tell me all the different things that I might qualify for."

That future depends on disciplined use today: secure configs, clear policy, constant inventory, and humans accountable for outcomes. Small lapses erode trust fast; small habits build it back.

Bottom line: Treat AI like any high-impact system - inventory it, constrain it, review it, and make someone responsible for every public output. That's how you keep trust while you ship useful work.