Invisible shift: AI quietly saturates the federal administration
AI in the federal government is no longer a list of pilot projects. It's embedded in everyday tools: firewalls, word processors, office suites. The standout "Project X" has turned into a background utility. That's progress, but it also makes visibility, control, and accountability harder.
From pilots to platforms: MaKI and Kipitz
The federal approach has moved from scattered prototypes to shared infrastructure. The Marketplace of AI Possibilities (MaKI) acts as a transparency register and matching platform so authorities don't "reinvent the wheel." Since November 2024, federal states and municipalities have also been able to access it, which helps alignment across the patchwork of jurisdictions.
The operational backbone is the AI Platform for the Federal Administration (Kipitz), run by ITZBund. It offers generative AI models through a secure interface, built as a closed-source solution that runs open-source models internally. Planned funding for 2026: 1.7 million euros for the platform and roughly 40 million euros for hardware. The goal is simple: keep sensitive data off external vendor servers while giving staff trusted tools.
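To make the "data stays internal" idea concrete, here is a minimal sketch of an application calling a self-hosted model endpoint inside the agency network rather than an external vendor API. The URL, payload shape, and response field are hypothetical assumptions for illustration; this is not the actual Kipitz interface.

```python
# Minimal sketch: send prompts to a self-hosted model endpoint inside the
# agency network instead of an external vendor API. The URL, payload shape,
# and response field are hypothetical, not the real Kipitz interface.
import requests

INTERNAL_ENDPOINT = "https://llm.internal.example.gov/v1/generate"  # hypothetical

def ask_internal_model(prompt: str, timeout: int = 30) -> str:
    """Send a prompt to the internal endpoint; sensitive text never leaves the network."""
    response = requests.post(
        INTERNAL_ENDPOINT,
        json={"prompt": prompt, "max_tokens": 512},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["text"]  # response field name assumed for illustration

if __name__ == "__main__":
    print(ask_internal_model("Summarize the attached case file in three sentences."))
```

The same pattern also makes switching clauses easier to honor: only the endpoint and payload mapping change when a different internally hosted model is swapped in.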
Digital sovereignty: open models, switchability, and a gap
A recent Fraunhofer analysis points to viable open-source-based options beyond popular commercial services. In practice, that means many authorities are hosting non-European open models (Meta Llama, Google Gemma, and entrants like DeepSeek) on internal infrastructure. This improves the ability to switch models if needed.
But there's a strategic gap: Europe still lacks widely adopted, openly provided LLMs under its own governance. For long-term sovereignty, that needs attention at policy and funding levels.
Security services: tight-lipped for a reason
Intelligence and defense authorities provide no details. Disclosing AI methods together with data sources could expose capabilities or invite data poisoning attempts. That silence signals that AI is already part of "hard" security work.
The tension remains: the areas most sensitive to fundamental rights have the least transparency. Parliamentary oversight will need new mechanisms that protect capabilities without losing accountability.
Where AI is making a difference
- Forensics and humanitarian aid: BIKO-UA supports identification of war victims in Ukraine with image recognition.
- Migration analysis: BAMF applies models to assess movements and inform planning.
- Search and rescue: KIResQ evaluates thermal images to find missing persons faster; Silva uses AI-enabled drones/aircraft to spot forest fires early.
- Environmental monitoring: BfG detects plastic in rivers and oil at sea; the Transport Ministry leans heavily on AI for broader monitoring tasks.
- Weather and extremes: DWD is building an AI Center for improved forecasting and high-precision nowcasting to protect against extreme events.
- Disinformation defense: FACTSBot identifies and validates machine-generated content; Nebula focuses on fake news recognition; SpeechTrust+ targets AI-driven voice manipulation and fraud.
Risks you should plan for
- Bias and discrimination: Models can replicate skewed training data; women and people with a migration background can be disadvantaged if safeguards are weak.
- Energy and cost: Training and serving larger models consume serious electricity. CO₂ disclosures and energy efficiency should be procurement criteria.
- Data poisoning and leakage: Training with real data increases exposure. Without hygiene and monitoring, results can be manipulated or sensitive data can leak.
- Vendor lock-in: Even with open models, contracts, APIs, and embeddings can trap you. Build for portability.
Policy and governance signals to track
- EU AI Act: risk-based controls, documentation requirements, and enforcement timelines.
- NIST AI Risk Management Framework: practical controls, evaluation, and continuous monitoring.
What leaders in government can implement this quarter
- Stand up an AI inventory: Register use cases in MaKI and maintain an internal catalog with owner, purpose, data sources, retention, evaluation metrics, and human-in-the-loop steps (a sample catalog entry is sketched after this list).
- Adopt a baseline AI policy: Approval workflow for new use cases, human review for impactful decisions, privacy constraints, logging, and incident response for model failures.
- Procurement guardrails: Require model cards, security claims, bias testing results, and CO₂/energy disclosures. Add switching clauses, data portability, and API export. Favor open-weight models where feasible.
- Risk and quality assurance: Run algorithmic impact assessments, DPIAs, red-teaming, and bias/accuracy benchmarks before production. Monitor drift and error rates continuously (see the monitoring sketch after this list).
- Security by default: Keep sensitive data inside Kipitz or equivalent internal endpoints. Prohibit unmanaged uploads to external tools. Add poisoning detection, content filtering, and access controls.
- Capability building: Train staff on prompts, reviews, and safeguards.
- Measure outcomes: Track cycle-time reduction, accuracy, citizen satisfaction, complaints, and energy per task. Review quarterly and retire low-value use cases.
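For the inventory item above, here is a minimal sketch of what an internal catalog entry could look like in code. The class and field names are illustrative assumptions drawn from the checklist, not an official MaKI or Kipitz schema.

```python
# Illustrative sketch of an internal AI use-case catalog entry; field names are
# assumptions based on the checklist above, not an official MaKI schema.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    owner: str                              # accountable unit or role
    purpose: str
    data_sources: list[str]
    retention_days: int
    evaluation_metrics: dict[str, float]    # e.g. {"routing_accuracy": 0.91}
    human_in_the_loop: str                  # where a person reviews or signs off
    registered_in_maki: bool = False
    notes: list[str] = field(default_factory=list)

catalog = [
    AIUseCase(
        name="Incoming-mail triage",                     # hypothetical use case
        owner="Unit Z 3",
        purpose="Route citizen letters to the responsible unit",
        data_sources=["scanned correspondence"],
        retention_days=90,
        evaluation_metrics={"routing_accuracy": 0.91},
        human_in_the_loop="Clerk confirms routing before dispatch",
    ),
]
```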
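And for the risk and quality assurance item, a minimal sketch of rolling error-rate monitoring against an acceptance-testing baseline. The baseline, window size, and tolerance are illustrative assumptions, not prescribed values.

```python
# Minimal sketch of continuous error-rate monitoring with a fixed alert
# threshold; baseline, window, and tolerance values are illustrative only.
from collections import deque

class ErrorRateMonitor:
    """Track a rolling error rate and flag when it drifts above the baseline."""

    def __init__(self, baseline: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline        # error rate measured at acceptance testing
        self.tolerance = tolerance      # allowed degradation before alerting
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.outcomes.append(0 if correct else 1)

    @property
    def error_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def drifted(self) -> bool:
        # Only alert once the rolling window is full, to avoid noisy early alarms.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.error_rate > self.baseline + self.tolerance)

monitor = ErrorRateMonitor(baseline=0.08)
monitor.record(correct=True)
if monitor.drifted():
    print("Error rate above baseline: trigger review of the deployed model")
```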
Bottom line
AI has blended into the administrative bloodstream. The priority now isn't more pilots; it's shared platforms, clear rules, measurable outcomes, and practical safeguards. With MaKI and Kipitz, the infrastructure is taking shape. The next step is disciplined governance and skills so every deployment holds up under scrutiny.