Government response under scrutiny as Grok AI accused of enabling abusive images on X
Ministers are facing sharp criticism over a muted and confusing response to reports that Grok, the AI tool embedded in X, has been used to generate abusive images - including child sexual abuse material. The scale is alarming: one cited study suggests up to 6,700 sexually suggestive or "nudified" images per hour, with hundreds of thousands already circulating.
For officials, this isn't a tech story. It's a criminal risk, a regulatory test, and a victim protection challenge happening in real time.
What happened
Social Democrats leader Holly Cairns pressed the Taoiseach during Leaders' Questions, highlighting that Grok's image-editing features made it "easy to undress people in photographs," with no age restrictions. "This isn't just horrifying and shocking - it is illegal under the Child Trafficking and Pornography Act," she said, asking why the State's response has been "muted and confusing" and where criminal investigations stand.
She argued that the platform facilitated the production and distribution of abusive content and questioned why it was being treated with "kid gloves."
Government stance so far
The Taoiseach called the situation "outrageous, shocking" and said the Government will act. He said the Attorney General is advising the Government, relevant ministers will meet, and existing law - together with the EU AI Act, which may not cover every aspect - will be examined so that any gaps can be identified and addressed quickly.
He added he was not aware of a specific Garda investigation at this time, while noting ministers are not briefed on every operational matter.
Minister James Lawless described Grok as "part of a publication tool" rather than the primary tool, calling the issue's emergence "unexpected" and stressing that legislators worldwide are struggling with similar problems. He said the EU AI Act presents an "opportunity" and that lawmakers need to "get on top of this very quickly."
Legal tools you can use now
- Criminal law: Production, possession, and distribution of child sexual abuse material are illegal under the Child Trafficking and Pornography Act. See the statute text for scope and offences: Irish Statute Book.
- EU Digital Services Act (DSA): X is designated a Very Large Online Platform with obligations on risk assessment, mitigation, and systemic abuse prevention. Non-compliance can trigger investigations and penalties by the European Commission. Overview: European Commission - DSA.
- EU AI Act (incoming): The Act will set requirements for AI systems and certain high-risk uses. Officials should assess where generative image-editing features fall and what duties will apply as its implementation timelines take effect.
Immediate actions for departments and agencies
- Coordinate enforcement between Justice, Children/Equality, and Communications portfolios; align with the Attorney General on charging pathways for production and distribution of illegal images.
- Request data preservation from X on prompts, outputs, and account identifiers linked to suspected offences to support potential prosecutions.
- Engage Gardaí on referral thresholds, victim identification workflows, and cross-border evidence handling with Europol counterparts.
- Escalate under the DSA where systemic risks are suspected (e.g., inadequate age gating, inadequate detection, or lax moderation of synthetic CSAM).
- Direct platform engagement with clear questions and deadlines (see below), and document responses for regulatory follow-up.
- Victim support protocols: ensure channels for reporting, rapid takedown, evidence capture, and survivor services are clearly signposted and resourced.
Policy and regulatory work over the next 60-90 days
- Close statutory gaps on synthetic sexual abuse content and deepfake-enabled offences if current law doesn't explicitly cover AI-edited outputs.
- Online safety codes: align with Coimisiún na Meán on interim expectations for image-editing features, age assurance, and abuse-prevention defaults.
- DSA coordination: share evidence with the European Commission on systemic risk findings; consider joint actions with other Member States.
- Procurement and funding guardrails so State-backed tools cannot be used to create or spread abusive imagery.
- Clear penalties and timelines for non-compliance communicated publicly to increase deterrence.
Questions X and Grok must answer, in writing
- What age checks and image-editing restrictions are in place, and when were they introduced?
- What filters detect and block sexualised edits and CSAM (including synthetic), and how effective are they?
- Are prompts/outputs logged for abuse detection, and what are the retention and access controls?
- How many moderators and specialised analysts review flagged content tied to Grok image edits?
- What is the incident response process, escalation timeline, and reporting pathway to law enforcement and child protection bodies?
- What proactive measures will be shipped within 14 and 30 days to reduce abuse creation and spread?
What to watch next
- Attorney General's advice and any legislative amendments proposed.
- Whether Gardaí open a specific investigation tied to Grok-generated images.
- Any DSA-related inquiries or actions by the European Commission regarding X's systemic risk controls.
- How the EU AI Act's rollout intersects with generative image-editing and deepfake safeguards.
Bottom line for public officials
This is a live safety and enforcement issue. Use existing criminal law and the DSA now, tighten rules where needed, and force clear commitments from platforms on prevention, detection, and cooperation with law enforcement.