Ireland urged to use EU presidency to ban AI tools for non-consensual intimate images as X limits Grok

Ireland's AI advisory council is urging the government to use its 2026 EU Council presidency to pursue an EU-wide ban on AI that generates non-consensual intimate images and CSAM, alongside clear guidance for victims and stronger platform rules.

Published on: Jan 17, 2026

Ireland urged to use EU presidency to push ban on AI tools that generate intimate images

Ireland's State Artificial Intelligence Advisory Council has called on the government to use its upcoming EU Council presidency (July 1-Dec. 31, 2026) to pursue an EU-wide ban on AI systems that generate intimate images and child sexual abuse material.

The council warns that high-velocity, automated abuse tied to models such as X's Grok is set to become more common. It also wants immediate guidance for victims on reporting incidents and preserving evidence.

Why this matters for government leaders

Non-consensual, manipulated images spread fast, cross borders, and overwhelm current response systems. Without clear rules and strong enforcement, victims are left with slow takedowns and patchy remedies.

Public trust in AI hinges on how quickly states can deter abuse, set obligations for platforms and model providers, and give law enforcement workable tools.

What the advisory council is asking for

  • EU-wide prohibition on AI practices that generate intimate images and child sexual abuse material.
  • Victim guidance explaining how to report suspected crimes and preserve evidence for investigations.

Context: scrutiny of Grok and platform controls

Governments have stepped up scrutiny of xAI's Grok after tests showed it could remove clothing from images or produce sexualized content without consent.

On Thursday, X said Grok will no longer allow users to manipulate photos of people to appear in revealing clothing in places where such actions are illegal. This is a step, but it is geofenced and narrow; broader safeguards and legal clarity are still needed.

Policy levers available during the EU presidency

  • Legislative pathway: Table Council conclusions calling for a prohibition on generative AI practices that produce intimate images and child sexual abuse material, with clear definitions and scope. Explore amendments or complementary instruments aligned with the EU AI framework.
  • DSA enforcement: Use the Digital Services Act to press very large platforms and search providers to assess systemic risks, implement preventive guardrails, and speed up removals for image-based sexual abuse.
  • Codes and standards: Convene industry and standards bodies to advance watermarking, provenance (e.g., C2PA), and default filters that block prompts for sexualized image manipulation.
  • Cross-border operations: Strengthen channels with Europol and national cyber units for rapid preservation orders and evidence transfer.
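The "default filters" idea in the codes-and-standards lever above can be sketched as a pre-generation check that refuses known abuse prompts before they reach the model. This is a minimal illustration; the pattern list and function names are assumptions for this sketch, and a production filter would rely on trained classifiers and human review rather than keyword matching alone.

```python
import re

# Illustrative patterns only (assumption for this sketch); real systems
# would combine classifiers, image analysis, and human escalation.
BLOCKED_PATTERNS = [
    r"\b(undress|remove (her|his|their) cloth\w*)\b",
    r"\bnudif\w*\b",
]

def is_blocked_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known abuse pattern."""
    text = prompt.lower()
    return any(re.search(p, text) for p in BLOCKED_PATTERNS)

def handle_prompt(prompt: str) -> str:
    """Refuse matching prompts; otherwise pass through to generation."""
    if is_blocked_prompt(prompt):
        # In a real deployment, the attempt would also be logged for
        # the transparency reporting the council calls for.
        return "refused"
    return "forwarded_to_model"
```

The design point is that the check runs before any image is generated, so refusals can be logged and reported without ever producing abusive content.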

Immediate actions for departments (before July 2026)

  • Victim pathway: Publish a simple, public guide on reporting, evidence capture (original files, metadata, hashes), and support services. Host it on an official government domain (gov.ie) with 24/7 contact points.
  • Single reporting intake: Stand up a one-stop form that routes to police, data protection authority, and platform notices simultaneously.
  • Procurement guardrails: Require model and tool vendors to block sexualized manipulation of real persons by default, log attempted abuses, and provide trusted-flagger interfaces.
  • Law enforcement capability: Fund training on open-source AI forensics, hashing, and legal process for swift takedowns.
  • Schools and public sector: Issue policy templates covering staff use of generative AI, with zero tolerance for image-based abuse and clear disciplinary routes.
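The evidence-capture step in the victim pathway above (original files, metadata, hashes) can be sketched as a small hashing routine. The function name and record fields are assumptions for illustration; real evidence handling must follow police chain-of-custody procedures.

```python
import hashlib
import os
from datetime import datetime, timezone

def capture_evidence(path: str) -> dict:
    """Record a file's SHA-256 hash and filesystem metadata so the
    original can later be verified as unaltered. Sketch only; names
    and fields are illustrative, not an official schema."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large media files do not load into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    st = os.stat(path)
    return {
        "file": os.path.basename(path),
        "sha256": h.hexdigest(),
        "size_bytes": st.st_size,
        "modified_utc": datetime.fromtimestamp(
            st.st_mtime, tz=timezone.utc
        ).isoformat(),
        "captured_utc": datetime.now(timezone.utc).isoformat(),
    }
```

Capturing the hash at first report lets investigators later prove that a preserved file is byte-identical to what the victim originally submitted.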

Drafting the prohibition: practical elements

  • Scope: Cover generation or manipulation of images depicting real persons into intimate/sexualized content without consent, and any child-related content regardless of consent.
  • Duties: Model providers and platforms must prevent such outputs, detect and block known abuse patterns, and share indicators with competent authorities.
  • Transparency: Require clear user notices, audit logs, and reporting on blocked attempts and response times.
  • Due process: Provide appeal channels and independent oversight to minimize over-blocking and protect lawful expression.
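The transparency duty above implies a machine-readable audit record for each blocked attempt. A minimal sketch follows; the field names are illustrative assumptions, not a mandated schema, and the record deliberately excludes prompt text and user identifiers to respect the data-minimisation point under "Risks to anticipate".

```python
import json
import uuid
from datetime import datetime, timezone

def blocked_attempt_record(platform: str, reason_code: str) -> dict:
    """Build a minimal, privacy-preserving audit entry for a blocked
    generation attempt. No prompt text or user identifiers are stored;
    aggregate counts of these records feed transparency reports."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "platform": platform,
        "reason_code": reason_code,  # e.g. "intimate_image_no_consent"
        "action": "blocked",
    }

record = blocked_attempt_record("example-platform", "intimate_image_no_consent")
print(json.dumps(record, indent=2))
```

A consistent record shape across providers is what makes the council's asks auditable: regulators can compare blocked-attempt volumes and response times without accessing any personal data.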

Risks to anticipate

  • Over-blocking: Mitigate with precise definitions, consent handling, and auditable filters.
  • Workarounds and model leaks: Pair legal bans with distribution controls, watermarking, and takedown cooperation across hosting services.
  • Privacy and data access: Limit data sharing to what is necessary, with warrants or statutory bases, and strong safeguards.

What success by Dec. 31, 2026 looks like

  • Council conclusions adopted and a concrete legislative text in motion.
  • Platforms deploy default blocks on intimate-image generation/manipulation and publish quarterly risk reports.
  • Member states run a unified reporting portal and meet response-time targets for takedowns and victim support.

