ChatGPT and Lessons for Government AI Use
Federal agencies see the upside of generative AI, but data-protection rules draw clear lines. Many programs still limit or ban public tools like ChatGPT even as staff look for faster ways to analyze, summarize, and draft.
A recent episode shows why those limits exist, and how oversight should work when they are tested.
What happened
According to Politico, a senior official at the Cybersecurity and Infrastructure Security Agency (CISA) received permission to try a public AI platform. During that use, existing security controls flagged sensitive but unclassified contracting details.
Routine auditing surfaced the issue. The agency launched an internal review, checked for impact, and reinforced safeguards and guidance.
Public LLMs and data hygiene
Dr. Jim Purtilo, associate professor of computer science at the University of Maryland, said public AI platforms raise real questions about data handling. His guidance is blunt: treat anything you enter into a public AI tool as non-private.
He also noted the good news: the agency's controls caught the material. That created a chance to fix the workflow and strengthen training.
Unclassified still means sensitive
Experts stress that "unclassified" doesn't equal "safe to share." Ensar Seker, CISO at SOCRadar, pointed out that "for official use only" (now CUI) contracting documents are sensitive by design. They can reveal vendors, pricing, internal processes, and operational dependencies.
Uploading those details to a public AI service creates an uncontrolled dissemination point. You can't fully verify retention, reuse, or downstream exposure, even if there's no malicious intent. For reference, see federal guidance on Controlled Unclassified Information (CUI).
DHS shows one path: keep data inside the fence
ChatGPT isn't approved for use by Department of Homeland Security (DHS) employees. Instead, they can use DHSChat, the agency's AI-powered chatbot. Most DHS AI tools are configured so document inputs don't leave federal networks.
This is the pattern many agencies are adopting: approved, internal AI access with logging, filtering, and data loss prevention at the edge.
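As a rough illustration of that pattern, the sketch below routes a staff prompt through an agency-controlled gateway that writes an audit log entry and forwards the request only to an internal model endpoint. The endpoint URL, payload fields, and log location are hypothetical placeholders, not DHS's actual configuration.

```python
import json
import logging
import urllib.request

# Hypothetical internal endpoint; a real agency gateway sits inside federal networks.
INTERNAL_MODEL_URL = "https://llm-gateway.agency.internal/v1/chat"

# Every request leaves an audit trail (illustrative log file path).
logging.basicConfig(filename="ai_gateway_audit.log", level=logging.INFO)

def submit_prompt(user_id: str, prompt: str) -> str:
    """Send a prompt to the internal model and keep an audit record.

    Nothing leaves the agency network; no public AI service is ever a hop.
    """
    logging.info("user=%s prompt_chars=%d", user_id, len(prompt))  # audit trail

    request = urllib.request.Request(
        INTERNAL_MODEL_URL,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["completion"]
```

The point is architectural: because the gateway only knows about the internal endpoint, staff using this path can't accidentally send the same material to a public service.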
The bigger lesson: culture plus controls
Lt. Gen. Ross Coffman (U.S. Army, Ret.), president of Forward Edge-AI, underscored the stakes: proper handling of classified and CUI/FOUO data protects the country's information from adversaries. Once placed in commercial large language models, it can become accessible to all.
He added that combining multiple CUI/FOUO sources can raise the required classification to Secret or higher. LLMs make that consolidation easy-even in an unclassified setting. Seker summed it up: cybersecurity maturity isn't defined by tools alone; it's defined by consistent behavior, especially from leadership.
What leaders should do now
- Default to approved environments: Offer an agency LLM gateway or vetted tools so staff have a safe option.
- Classify before you copy: If data is CUI/FOUO or contracting-sensitive, it does not belong in a public AI tool.
- Tighten exception processes: Require time-bound approvals, explicit risk reviews, logging, and quick revocation paths.
- Enforce technical guardrails: Apply DLP rules for AI sites, egress controls, audit trails, content filters, and model access policies (a minimal filter sketch follows this list).
- Train with real scenarios: Show exactly what not to paste (procurement docs, PII, operational details) and why.
- Update contracts and NDAs: Add AI-use clauses, retention limits, spill response, and third-party model restrictions.
- Red-team and test: Periodically probe your AI gateways for data leakage and prompt-based bypasses.
- Maintain incident playbooks: Define clear steps for detection, reporting, containment, and user communications when an AI-related event occurs.
- Require human review by default: Keep sensitive outputs and decisions under human oversight.
- Build governance that keeps pace: Maintain a living inventory of AI use cases, models, datasets, and approvals.
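To make the guardrails item concrete, here is a minimal, hypothetical DLP-style pre-submission check that rejects prompts containing CUI/FOUO markings, contract identifiers, or SSN-shaped strings. The patterns are illustrative only; a real deployment would rely on the agency's DLP tooling and its authoritative CUI categories.

```python
import re

# Illustrative markers only; real DLP policies are far broader and centrally managed.
BLOCKED_PATTERNS = [
    re.compile(r"\b(CUI|FOUO)\b", re.IGNORECASE),                        # CUI/FOUO markings
    re.compile(r"\bcontract\s*(no\.?|number)\s*[\w-]+", re.IGNORECASE),  # contract identifiers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                                # SSN-shaped strings (PII)
]

def allow_prompt(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(pattern.search(prompt) for pattern in BLOCKED_PATTERNS)

# Example: this prompt would be blocked before it reaches any model.
assert not allow_prompt("Summarize contract number W91-2024-0017 (FOUO)")
```

In practice a check like this belongs at the gateway or proxy layer, so it applies regardless of which client a user opens.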
Oversight that actually works
This case is a reminder that layered safeguards matter. Automated alerts, audits, and internal reviews did their job and surfaced the issue early.
That's the goal: catch the problem, correct behavior, refine policy, and improve training. If you're building or updating controls, the NIST AI Risk Management Framework is a solid reference.
Build skills without leaking data
Humans remain the first line of defense. Invest in practical training that teaches safe prompts, data scoping, and approved workflows, so people stop reaching for public tools out of convenience.
For role-based AI upskilling, see curated options at Complete AI Training.