Stop Governing by Chatbot: Build Public AI That Serves Democracy
Leaders tout consumer chatbots, but public work needs verified data, citations, and oversight. Build sovereign, purpose-built systems with audits, security, and human judgment.

When politicians mistake AI hype for strategy
When Sweden's prime minister said he uses ChatGPT for a "second opinion," it wasn't a tech flex. It was a warning. Leaders are flirting with consumer AI in places where the stakes are public, not personal.
Albania went further with a Ministry of AI and anti-corruption deployments. The question is no longer "should government use AI?" It's "how do we use AI that serves the public without ceding control or judgment?"
Why consumer chatbots don't cut it in government
Generic large language models produce confident text without a built-in duty to be right. They don't verify facts, cite sources, or separate likelihood from truth. As one Swedish columnist put it, chatbots tend to say what you want to hear, not what you need to hear.
There's also a transparency problem. Research from MIT's Data Provenance Initiative points to opaque training data and limited visibility into what sources a model draws on. That matters when policy questions touch national security, health, or finance.
Dependence is the bigger risk. If European leaders move sensitive discussions into foreign-owned systems, they trade policy independence for convenience. Corporate "guardrails" aren't a substitute for democratic oversight. Private terms shouldn't set public rules.
What public-fit AI should look like
Use the right tool for the job. In medicine, AI paired with radiologists detected more breast cancers than humans alone in a Swedish screening study, improving early detection and outcomes. The difference wasn't hype; it was fit-for-purpose design, validated against hard evidence.
The study, published in The Lancet Oncology, shows what happens when AI is built, tested, and governed for one clear mission. Public policy needs the same standard.
- Purpose-built systems for public work, not consumer chatbots
- Verified data sources with transparent lineage and audit trails
- Mandatory citations and explainability for high-stakes outputs (see the sketch after this list)
- Democratic oversight: public ownership or public-interest governance
- Security, privacy, and data residency baked into the architecture
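To make the verification, citation, and audit requirements concrete, here is a minimal sketch, in Python, of what an auditable answer record could look like: every answer carries citations to a verified corpus, every citation carries its lineage, and the whole record hashes into an append-only audit log. The field names and the hash-based digest are illustrative assumptions, not a description of any existing government system.

```python
# Minimal, illustrative sketch of an auditable answer record.
# Field names and the hash-based digest are assumptions for illustration,
# not a reference to any real government system.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class SourceCitation:
    document_id: str          # identifier in a verified, registered corpus
    title: str
    publisher: str            # e.g. a statistics agency or official register
    retrieved_at: str         # ISO 8601 timestamp of retrieval
    lineage: list[str] = field(default_factory=list)  # how the data reached the corpus

@dataclass
class AuditedAnswer:
    question: str
    answer: str
    citations: list[SourceCitation]
    model_version: str
    reviewed_by: str | None = None   # the human who signs off on the output
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def audit_digest(self) -> str:
        """Content hash that can be written to an append-only audit log."""
        payload = json.dumps(asdict(self), sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

# Usage: an answer is only releasable once it carries citations and a named reviewer.
answer = AuditedAnswer(
    question="Example policy question (placeholder)",
    answer="Example answer grounded in the cited register (placeholder).",
    citations=[SourceCitation(
        document_id="example-doc-001",
        title="Example annual report",
        publisher="National statistics office (placeholder)",
        retrieved_at="2025-01-15T09:00:00+00:00",
        lineage=["municipal filings", "national register", "verified corpus"],
    )],
    model_version="public-policy-model-0.3",
    reviewed_by="case officer, unit B",
)
print(answer.audit_digest())
```

The point of the sketch is the shape, not the code: a public-fit system records what it said, what it relied on, and who approved it, in a form an auditor can check later.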
This isn't theory. The OECD documents practical tools like the UK's AI Consultation Analyzer, which processes citizen input at scale without drowning staff. PolicyEngine models fiscal and distributional impacts. Foresight platforms spot legislative patterns and risks. These are specialized tools, not general-purpose chatbots.
OECD AI resources highlight the governance building blocks (transparency, accountability, and impact evaluation) that should anchor any public deployment.
What's missing and how to fix it
Europe has principles (the EU AI Act) but not enough infrastructure. EuroHPC supercomputers were built for science, not for training and deploying general-purpose models for government at scale. Meanwhile, the US is planning massive AI infrastructure investments.
Without compute, data pipelines, and secure platforms, governments default to foreign services. That's a structural disadvantage, not a staffing issue.
- Set a national "public AI" mandate: what AI can and cannot do in policy work
- Invest in sovereign infrastructure: compute, storage, and secure data exchange
- Build public-private consortia with clear oversight and civil society seats
- Adopt interoperability standards across agencies and member states
- Require independent audits for any AI used in public decisions
- Enforce data provenance, residency, and retention rules
- Stand up purpose-built tools: consultation analysis, policy modeling, foresight
- Publish model cards, testing protocols, and human-in-the-loop procedures (see the sketch after this list)
- Define red lines: no use of consumer chatbots for sensitive or classified work
- Upskill leaders and teams with structured, hands-on training and playbooks
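To make the model-card item above concrete, here is a minimal sketch, in Python, of the fields a published model card for a public-sector system could carry. The schema, field names, and the example system are assumptions for illustration; the EU AI Act and OECD guidance describe these obligations in prose rather than a fixed format.

```python
# Illustrative sketch of a minimal public-sector model card.
# The schema, field names, and example system are assumptions, not an official template.
import json

model_card = {
    "system_name": "consultation-analysis-pilot",       # hypothetical system
    "purpose": "Cluster and summarize citizen consultation responses.",
    "out_of_scope": ["individual case decisions", "classified material"],
    "training_data": {
        "sources": ["published consultation archives"],
        "residency": "EU",                               # where data is stored and processed
        "provenance_documented": True,
    },
    "evaluation": {
        "protocol": "held-out consultations scored against staff summaries",
        "last_audit": "2025-06-01",
        "auditor": "independent third party",
    },
    "human_in_the_loop": {
        "review_required": True,
        "decision_owner": "named official, not the system",
    },
}

# Publishing the card as machine-readable JSON keeps it auditable and
# comparable across agencies.
print(json.dumps(model_card, indent=2))
```

A machine-readable card like this makes the red lines checkable: an auditor can see at a glance what the system is for, where its data lives, when it was last tested, and who owns the decision.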
If your agency needs practical upskilling paths, explore focused learning tracks by role at Complete AI Training.
AI that strengthens, not replaces, democracy
AI should expand capacity, not outsource judgment. It can summarize citizen input, surface patterns in complex data, and test policy scenarios. Humans still decide, explain, and own the outcome.
As Margrethe Vestager said, "The way we deal with technology also shows what we expect of our democracy and of our societies." Expect more. Demand purpose-built systems, audited processes, and accountable partnerships. Citizens deserve nothing less.