Holding AI to Account: Inside Ireland's AIAL and a PhD Student's Fight to Put People Before Big Tech

AIAL is giving AI accountability teeth with victim-centered audits and real penalties. Their work moves policy from PR to evidence, jurisdiction, and consequences.

Categorized in: AI News, Legal
Published on: Mar 09, 2026

AI accountability with teeth: inside AIAL's push for enforceable justice

The Artificial Intelligence Accountability Lab (AIAL) launched out of Trinity College Dublin's ADAPT Research Ireland Centre in November 2024. Founded and led by Professor Abeba Birhane, the group focuses on preventing psychological harms from AI, especially for marginalized communities. Their goal is clear: detect destructive design patterns, inform policy, and make accountability stick.

Funding that signals intent

In June 2025, AIAL received €199,978 from the European AI & Society Fund to build a justice-oriented audit framework; the lab was one of fifteen grantees selected from 325 applicants. The award sits within the Fund's "Making Regulations Work" programme, a €4 million effort backing organizations that advance AI accountability and social justice in the implementation of the European AI Act. For context on the fund's mission, see the European AI & Society Fund, and for the policy backdrop, the EU's AI Act portal.

More recently, AIAL also secured support from the UK government's AI Security Institute (AISI) within the Department for Science, Innovation and Technology. This project examines how AI companions may harm mental health through interface choices, emotional dependency loops, and questionable data practices. The output aims to provide policymakers with evidence strong enough to guide concrete guardrails.

What "justice-oriented" means in practice

PhD researcher Nana Nwachukwu's work centers on algorithmic governance, justice-oriented evaluation of socio-technical systems, and AI ecosystem audits. She's blunt about the core problem: "One of the challenges in designing a justice-oriented framework is defining what constitutes justice, because it is highly subjective. More often than not, we have left justice to be defined by people who make the system, and that's not okay." Her position: civil society should define what is safe and just.

When safety rails fail: nudification at scale

While researching media capture, Nwachukwu documented dozens of cases in which Grok, the AI chatbot developed by Elon Musk's xAI, enabled the creation of nearly-nude images of real people. To force attention on the harm, she published a dataset of over five hundred nudification instances. The result: regulators in the UK, Malaysia, Indonesia, and Australia acknowledged the need for intervention. "I didn't expect for it to catch on fire," she said, "but I am glad to see regulatory responses."

She has also argued publicly that current safeguards and law fall short. Referencing a Guardian piece about her findings, she pointed to gaps in measures meant to police generative AI. "There is a huge gap in the law intended to regulate generative AI, even in the EU. The Digital Services Act is supposed to police this type of harm, but it falls short of properly capturing the damage experienced by victims."

Accountability needs consequences; otherwise it's theater

Policy without enforcement is posture. Nwachukwu notes that accountability depends on consequences when procedures are ignored, pointing to the voluntary nature of the EU's General-Purpose AI (GPAI) Code of Practice. Voluntary commitments don't move repeat offenders. Measurable penalties do.

How to evaluate if the law actually works

Nwachukwu's checklist for legal effectiveness is refreshingly concrete:

  • How well did existing laws regulate platform providers in practice?
  • Were penalties commensurate with harms and institutional capacity?
  • Were victims compensated, and how quickly were payments made?
  • Did fines reduce repeat violations, or did offenders treat them as a cost of doing business?

She adds a crucial piece that's often ignored: victim feedback. "What form of justice did they actually receive as a result? Do they believe the fines were sufficient retribution? If their data leaked, for instance, did they feel safer after Google was fined for a privacy breach?" Without this, we risk assuming justice is done, a default that benefits Big Tech.
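
To see what operationalizing that checklist could look like, here is a minimal sketch of an effectiveness audit over a hypothetical enforcement dataset. Every name in it (EnforcementRecord, the offender and payout_days fields, the two functions) is invented for illustration; this is not an AIAL tool or an official methodology.

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class EnforcementRecord:
    offender: str
    violation_date: date
    penalty_date: date | None  # None if no penalty was ever imposed
    payout_days: int | None    # days until victim compensation; None if unpaid

def repeat_violations_after_penalty(records: list[EnforcementRecord]) -> dict[str, int]:
    """Count violations each offender committed after its first penalty.

    A persistently non-zero count suggests fines are absorbed as a cost of
    doing business rather than acting as a deterrent.
    """
    first_penalty: dict[str, date] = {}
    for r in records:
        if r.penalty_date is not None:
            prev = first_penalty.get(r.offender)
            if prev is None or r.penalty_date < prev:
                first_penalty[r.offender] = r.penalty_date
    return {
        offender: sum(
            1 for r in records
            if r.offender == offender and r.violation_date > penalized_on
        )
        for offender, penalized_on in first_penalty.items()
    }

def median_days_to_compensation(records: list[EnforcementRecord]) -> float | None:
    """Median days from violation to victim payout, ignoring unpaid cases."""
    paid = [r.payout_days for r in records if r.payout_days is not None]
    return float(median(paid)) if paid else None
```

Fed the kind of longitudinal dataset Nwachukwu calls for, numbers like these turn "did fines deter?" and "were victims paid promptly?" from rhetoric into measurable claims.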

Evidence costs money; public interest work needs public funding

Collecting evidence of harm is expensive and slow. "Public interest researchers are often financially limited to short-term projects of up to five years, but this type of work requires long-term documentation, knowledge management and the identification of victims, incidents, witnesses," Nwachukwu says. The solution she proposes: government-backed, collaborative research with the scope to curate, verify, and maintain longitudinal datasets.

Ireland's leverage point

Why Ireland matters: "Unlike other EU countries, Ireland hosts the largest number of tech hubs in the EU. This means that Ireland can directly administer accountability to these companies through its national laws, even beyond EU-level regulation, because these corporations are physically headquartered here." For practitioners, that's a venue and jurisdiction strategy hiding in plain sight.

Make power visible: the authority awareness framework

Nwachukwu proposes an "authority awareness framework" so anyone on a platform can see how power is distributed-who sets the rules, who profits, who is liable. That visibility enables users to negotiate their participation.

Her example is simple: terms and conditions. "Generally, users must accept all policies to access digital tools. But why can't we accept only certain provisions? Why can't we negotiate how or for how long we engage?" Even partial modularity would shift leverage back to the user and surface what platforms don't want to negotiate.
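
As a thought experiment, partial modularity could be modeled in a few lines. This is a hypothetical sketch, not any platform's real consent API; the provision names and fields are assumptions made for illustration.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Provision:
    name: str                    # e.g. "core_service", "ad_profiling"
    required: bool               # what the platform refuses to unbundle
    accepted: bool = False
    expires: date | None = None  # time-bound consent instead of "forever"

@dataclass
class ConsentBundle:
    provisions: list[Provision] = field(default_factory=list)

    def accept(self, name: str, days: int | None = None) -> None:
        """Accept one provision, optionally for a limited period only."""
        for p in self.provisions:
            if p.name == name:
                p.accepted = True
                p.expires = date.today() + timedelta(days=days) if days else None

    def can_use_service(self) -> bool:
        """Access hinges only on required provisions; optional ones stay declinable."""
        today = date.today()
        return all(
            p.accepted and (p.expires is None or p.expires >= today)
            for p in self.provisions
            if p.required
        )

# Hypothetical usage: core access is required; ad profiling stays declined.
bundle = ConsentBundle([
    Provision("core_service", required=True),
    Provision("ad_profiling", required=False),
])
bundle.accept("core_service", days=365)  # negotiated duration, not "forever"
assert bundle.can_use_service()          # True, with ad_profiling still declined
```

Note how the required flag makes visible exactly what the platform refuses to negotiate, which is Nwachukwu's point about surfacing power.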

Children and AI: draw a hard line

Nwachukwu is clear: "I do not believe kids should encounter AI systems in any form before the age of thirteen, and the interaction should occur only under proper supervision until they are sixteen." She doesn't dismiss AI outright; in her view, design and governance determine outcomes. As a model, she points to Swiss AI initiatives that are public-focused and community-built.

Action guide for legal teams

  • Demand verifiable audits: push for justice-oriented audits that include lived-experience testers, not just vendor self-assessments.
  • Evidence pipelines: preserve session logs, prompts, outputs, and UI versions to establish causality between design choices and harm (a sketch follows this list).
  • Victim-centered remedies: include restitution plans, takedown SLAs, and post-incident support in settlement terms.
  • Enforcement first: prioritize cases where penalties deter repeat offenses; condition reduced fines on verified remediation and re-audit.
  • Jurisdictional leverage: use Ireland's locus for major platforms to pursue faster supervisory action where applicable.
  • Contract modularity: advocate for granular consent and time-bound data use, with sensitive inferences off by default and filters for synthetic sexual content.
  • Children's safeguards: enforce age gates, supervised modes, and zero data profiling for minors; treat violations as aggravated.
  • Public funding advocacy: support budget lines for independent monitoring, longitudinal harm databases, and cross-border casework.
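
On the evidence-pipeline item above, here is a minimal sketch of tamper-evident logging. The record fields are hypothetical, and the hash chain is a generic integrity technique, not a prescribed legal standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_evidence(log: list[dict], prompt: str, output: str, ui_version: str) -> dict:
    """Append a hash-chained record tying a prompt and output to a UI version.

    Each record commits to the previous record's hash, so any later edit to
    the log is detectable, which matters when establishing causality between
    design choices and harm.
    """
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "ui_version": ui_version,  # ties the harm to a specific interface design
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record
```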

For training that aligns legal practice with AI audits, accountability, and compliance workflows, see AI for Legal.

The bottom line

Accountability isn't a press release. It's the mix of measurable audits, enforceable penalties, and remedies victims recognize as real. AIAL's work and Nwachukwu's stance push the conversation where it belongs: into evidence, jurisdiction, and consequences. As counsel, your leverage is strongest when you pair proof of harm with forums that can actually bite.

Her closing reminder is hard to ignore: "Negotiating our existence on the Internet is something we should do. If we don't, then we are conceding authority completely to Big Tech - and this means losing any meaningful ability of regulating it."

