Beyond Checklists: Everyday Ethics in Public-Sector AI
AI can discriminate and err; ethics must be built into daily decisions, not treated as a checkbox. Figueras shows how teams navigate trade-offs and why they need support, accountability, and transparency.

AI isn't always fair - and the fix is everyday ethical work
Machines don't think like people. Handing them high-stakes decisions without guardrails is risky. AI can discriminate, hallucinate, and make mistakes - and it won't feel bad about it. Clàudia Figueras's research shows a practical way forward: treat ethics as daily work, not a checkbox.
Ethics is built into every AI decision
AI is often presented as neutral and efficient. It isn't. Every system encodes choices: who gets access, which data counts as "normal," how errors are handled, and what happens when there's no clear right answer. Those choices have social consequences.
"Think about an AI system that allocates financial support. People with atypical life situations can be penalized because their data doesn't fit the training patterns. That's not just a bug - it impacts rights and well-being," says Figueras.
When ethics fail, people pay the price
Look at the UK's A-level grading algorithm in 2020. Exams were cancelled; an algorithm assigned grades. The goal was to prevent grade inflation. The result: many high-performing students from lower-income schools were downgraded, amplifying inequality and triggering national backlash. The lesson is clear: stakes are high when public-sector AI gets ethics wrong.
BBC coverage of the A-level algorithm controversy
What Figueras found in Swedish public organisations
Ethics isn't an add-on at the end. It's embedded in daily workflows. Practitioners constantly weigh trade-offs - efficiency vs. fairness, performance vs. transparency, speed vs. due process. This ethical labour is often invisible, but essential.
Ethical dilemmas aren't just "problems to fix." They spark needed reflection: when a system promises efficiency but erodes care or fairness, teams must pause and renegotiate priorities. Responsibility also shifts over time - between individuals, teams, and institutions - and is enacted in practice, not just on org charts.
Implications for leaders, teams, and policy
Checklists and high-level frameworks aren't enough. Organisations need time, space, and support for people to question assumptions, surface value conflicts, and disagree productively. Leadership must value ethics alongside technical performance and resource it: training, forums, documentation, and ongoing review.
Accountability should be a shared, continuous process across the AI lifecycle - not a hunt for a single person to blame. This approach can inform smarter regulation and build public trust.
A practical playbook for responsible AI in the public sector
- Make ethics operational: add explicit ethics checkpoints to sprints, procurement, and deployment. Budget time for them.
- Document trade-offs: record why decisions were made (e.g., fairness vs. accuracy) and who was involved. Update as systems evolve.
- Build multidisciplinary forums: include legal, social science, domain experts, and citizen perspectives - not just engineers.
- Govern data quality: audit representation, measure disparate impact, and fix skew through sampling, reweighting, or policy changes (a minimal disparate-impact check is sketched after this list).
- Design for errors: define appeal routes, human-in-the-loop overrides, and redress mechanisms before deployment (see the routing sketch after this list).
- Explain decisions: publish accessible model and data cards, assumptions, and known limits. Avoid overclaiming.
- Map responsibility: clarify decision rights and escalation paths across teams and vendors. Revisit after each release.
- Invest in capability: upskill practitioners and leaders on ethics, policy, and socio-technical risk - not just model tuning.
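The "measure disparate impact" step can start small. Below is a minimal Python sketch, not taken from Figueras's thesis; the column names `group` and `approved` and the toy data are hypothetical. It computes each group's approval rate relative to the most-favoured group and flags anything below the commonly cited four-fifths (0.8) threshold.

```python
import pandas as pd

# Hypothetical decision log: one row per case, with the protected
# attribute ("group") and the automated outcome ("approved").
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   1],
})

def disparate_impact(df, group_col="group", outcome_col="approved", threshold=0.8):
    """Return approval rates, ratios vs. the most-favoured group, and flagged groups."""
    rates = df.groupby(group_col)[outcome_col].mean()   # approval rate per group
    ratios = rates / rates.max()                        # relative to best-treated group
    flagged = ratios[ratios < threshold]                # groups below the threshold
    return rates, ratios, flagged

rates, ratios, flagged = disparate_impact(decisions)
print("Approval rates:\n", rates)
print("Ratio vs. most-favoured group:\n", ratios)
print("Below the 0.8 threshold:\n", flagged)
```

A ratio well below 1.0 is a prompt for investigation, not an automatic verdict; which fairness metric is appropriate depends on the decision, the data, and the legal context.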
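"Design for errors" can also be made concrete before deployment. The sketch below is illustrative only; the `Case` fields and the confidence threshold are assumptions, not part of any system Figueras studied. It routes appealed or low-confidence cases to a human reviewer instead of letting the model decide alone.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    model_decision: str   # e.g. "approve" / "deny"
    confidence: float     # model confidence in [0, 1]
    appealed: bool = False

CONFIDENCE_THRESHOLD = 0.9  # assumed policy value; set per domain and risk level

def route(case: Case) -> str:
    """Decide whether a case can be auto-decided or must go to a human reviewer."""
    if case.appealed:
        return "human_review"   # every appeal gets a human look
    if case.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"   # low confidence: do not automate
    return "auto_decide"        # high confidence and no appeal

print(route(Case("2024-001", "deny", confidence=0.62)))     # -> human_review
print(route(Case("2024-002", "approve", confidence=0.97)))  # -> auto_decide
```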
"Awareness is growing, and many practitioners act responsibly," says Figueras. "But pressure to adopt AI fast can eclipse deeper questions. As one interviewee asked: 'Should we even be doing this at all?' The challenge is turning awareness into durable practices that outlast the hype."
Why this work matters to Figueras
Before her PhD, Figueras worked in data annotation. The labour was invisible compared to engineering work, yet critical. During testing she noticed the system failed on darker skin tones - the model had been trained mostly on light-skinned faces. That moment exposed how fragile AI can be, and how deeply it is shaped by human choices and biased data.
Research and advocacy by scholars such as Joy Buolamwini and Timnit Gebru reinforced the urgency. For practitioners, the takeaway is simple: hidden work determines public outcomes. Make it visible. Resource it.
What's next
Figueras is preparing to defend her PhD and aims to keep working between research and practice - helping organisations and policymakers turn ethical principles into action, whether in academia or applied roles.
Research details
- PhD defence: October 9, 2025, Department of Computer and Systems Sciences (DSV), Stockholm University
- Thesis: "Ethical Tensions in AI-Based Systems" (available via DiVA)
- External reviewer: Christopher Frauenberger, Interdisciplinary Transformation University, Austria
- Main supervisor: Chiara Rossitto (DSV); Supervisor: Teresa Cerratto-Pargman (DSV)
For teams building capability
If you're formalising responsible AI skills across roles, explore practical learning paths by job function: Courses by job.