Why proper AI governance will be vital for public-sector workplaces in 2026
AI use at work has surged. In Ireland, reported usage jumped from 19% in August 2024 to 40% by July 2025. That shift has pushed AI from pilots into day-to-day operations, where risk, accountability, and public trust live.
Barry Haycock, senior manager of data analytics and AI at BearingPoint, says the move is clear: from "experimentation to operational use." Copilots and agents are standard, but the real lift now includes contract review, compliance checks, bulk document processing, and enterprise search.
Where government is headed: from pilots to "AI factories"
Large bodies are building repeatable AI pipelines ("AI factories") to process records, classify cases, and surface insights for policy teams. Augmented analytics lets non-technical staff query data without a queue of analysts. It saves time. It also raises the bar for governance.
Haycock is blunt: "Sustainable value" depends on governance, data maturity, and workforce capability. "Without governance and measurable outcomes, pilots stall."
Accessory, not autonomous
AI should accelerate workflows, not run them. Rosie Bowser of BearingPoint warns against tool-first thinking: "Starting with the tool is not unlike painting over a structural crack." Define the job to be done, then pick the tool.
Both experts see work being reshaped, not erased. "The real risk is failing to reskill and adapt," says Haycock. Bowser adds that without upskilling, staff may struggle to operate "safely and confidently within AI-enabled processes."
Her line that should sit on every steering group deck: treat AI as a workflow accelerator "rather than an autonomous decision-maker." Keep humans in charge. Own decisions. Log them.
Governance in advance: proof over pilots
2026 raises the compliance floor. With the EU AI Act in force and Ireland's policy direction set by the National AI Strategy, public bodies must show documentation, transparency, and auditability, not promises.
Haycock's guidance fits the public mandate: align AI to clear use cases, assign risk ownership, and secure executive sponsorship. Oversight should be proportionate to risk and embedded in operations. Scalable governance is the differentiator.
A practical public-sector AI governance playbook
- Start with the problem: Define the workflow, outcome, and success metrics before picking a tool. Kill use cases that lack measurable value.
- Risk-tier every use case: Classify by impact on rights, safety, and service outcomes. Require stricter controls for higher risk.
- Keep an AI register: Track systems, models, versions, data sources, owners, purposes, and legal bases.
- Impact assessments before go-live: Complete privacy and fundamental-rights assessments; document intended purpose, foreseeable misuse, and mitigations.
- Human oversight by design: Define final decision authority, escalation paths, appeal routes, and non-AI fallbacks. No orphaned decisions.
- Data governance that is real: Lineage, quality checks, retention limits, minimisation, and clear data owners. No ambiguous datasets.
- Explainability and traceability: Require model cards, feature transparency where feasible, and logs for prompts, outputs, and decisions.
- Security from day one: Isolation for sensitive data, secrets management, red-teaming, and vendor security reviews.
- Procurement with teeth: Contract for testing rights, bias reports, uptime, incident response, data return/deletion, and audit access.
- Accessibility and fairness: Test outputs for bias and accessibility. Keep a clear route to human help for vulnerable users.
- Records and FOI readiness: Decide what to keep, for how long, and how to retrieve it. AI outputs are records if they inform decisions.
- Role-based training: Give operators, approvers, and auditors the specific skills they need; publish do/don't examples.
- Measure outcomes and risk: Track accuracy, turnaround time, error rates, user satisfaction, and incidents. Review quarterly.
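The register and risk-tiering items above can be sketched as a minimal data model. This is an illustrative sketch only: the field names, class names, and tiering rule are assumptions for demonstration, not a prescribed schema or an official classification method.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIRegisterEntry:
    """One row in a department's AI register (illustrative fields)."""
    system_name: str
    model: str
    version: str
    owner: str
    purpose: str
    legal_basis: str
    data_sources: list = field(default_factory=list)
    affects_rights: bool = False
    affects_safety: bool = False
    affects_service_outcomes: bool = False

    def risk_tier(self) -> RiskTier:
        # Illustrative tiering: any impact on rights or safety is high risk;
        # impact on service outcomes alone is medium; otherwise low.
        if self.affects_rights or self.affects_safety:
            return RiskTier.HIGH
        if self.affects_service_outcomes:
            return RiskTier.MEDIUM
        return RiskTier.LOW
```

A register built this way makes the "stricter controls for higher risk" rule enforceable: reviewers can filter entries by `risk_tier()` and require impact assessments before any high-tier system goes live.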
Stop "shadow AI" before it starts
Bowser flags a common gap: policy exists, but staff don't know where it is. That gap breeds shadow practices and risk. Make the safe path the easy path.
- Publish approved tools, red-lines (no personal/sensitive data), and simple checklists in one place.
- Offer fast lanes: pre-vetted prompts, templates, and datasets for common tasks like drafting, summarising, and triage.
- Require AI use to be declared in case notes when it influences a decision. Build this into forms, not memory.
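Building the declaration into forms rather than memory could look like a structured field on the case record itself. A minimal sketch, assuming a dictionary-based case note; the function name and field names are hypothetical and would need to match the actual case-management schema.

```python
import datetime

def declare_ai_use(case_note: dict, tool: str, purpose: str,
                   human_reviewer: str) -> dict:
    """Attach a structured AI-use declaration to a case note.

    All field names here are illustrative, not a mandated format.
    """
    declaration = {
        "tool": tool,                     # approved tool used
        "purpose": purpose,               # what the AI contributed
        "human_reviewer": human_reviewer, # person accountable for the decision
        "declared_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Append rather than overwrite, so multiple uses on one case are all recorded.
    case_note.setdefault("ai_declarations", []).append(declaration)
    return case_note
```

Because the declaration is a structured list on the record, audits and FOI retrieval can query it directly instead of searching free text.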
What "good" looks like in 2026
Haycock: "Data governance and model explainability are being understood as enablers more and more." Security and regulatory exposure must be addressed early, not at the tail end of a project.
Bowser's test for practical governance: clear data rules, audit trails, sensible fallback steps, and knowing what the model is actually doing-without friction for frontline teams.
Upskill the public workforce
Reskilling beats replacement. Equip policy, legal, procurement, and service teams with the basics of AI risk, prompts, review, and oversight. Then go deeper for operators and approvers.
For structured development, see the AI Learning Path for Policy Makers. For sector-specific insights, explore AI for Government.
90-day action plan for departments
- Days 0-30: Inventory all AI usage. Stand up an AI register. Freeze high-risk shadow uses.
- Days 31-60: Risk-tier use cases. Run impact assessments on medium/high risk. Define human-oversight points and fallbacks.
- Days 61-90: Implement logging, access controls, and model documentation. Train operators and approvers. Set quarterly review dates and metrics.
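The Days 61-90 metrics can be captured as a simple quarterly snapshot with an explicit escalation rule. A sketch under stated assumptions: the metric names follow the playbook above, but the thresholds are placeholders a department would set itself.

```python
from dataclasses import dataclass

@dataclass
class QuarterlyAIMetrics:
    """One quarterly review snapshot for a single AI use case (illustrative)."""
    use_case: str
    accuracy: float            # fraction of sampled outputs judged correct
    avg_turnaround_hours: float
    error_rate: float          # fraction of outputs needing correction
    user_satisfaction: float   # e.g. 0-5 survey average
    incidents: int             # logged security or service incidents

    def needs_escalation(self, max_error_rate: float = 0.05,
                         max_incidents: int = 0) -> bool:
        # Illustrative review rule: escalate to the steering group if the
        # error rate or incident count exceeds the agreed limits.
        return self.error_rate > max_error_rate or self.incidents > max_incidents
```

Recording one snapshot per use case per quarter gives the review meeting a concrete agenda: anything where `needs_escalation()` is true gets discussed first.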
The bottom line
AI can speed government work and improve service quality. It will also magnify weak data, unclear ownership, and thin controls. Build governance that people can use, prove it in audits, and keep humans accountable for final decisions. That's how you get value without losing trust.