Can the EU take on Musk over Grok's AI "undressing" feature?
Europe is testing how far its toughest tech laws can go. After X's AI chatbot Grok generated roughly three million sexualized images in under two weeks - including tens of thousands that appear to depict children - the European Commission opened a formal probe under the Digital Services Act (DSA).
This isn't just about one feature gone wrong. It's a live test of whether Brussels can force a major US platform to rein in AI-fueled harm at scale.
What happened
Grok's "undressing" function let users digitally strip women and children of clothing. Swedish Deputy Prime Minister Ebba Busch publicly said she was targeted, underscoring how fast these tools can cross from novelty to abuse.
The velocity is the headline: millions of images in days, spread across a platform with global reach. That volume creates systemic risk - exactly the kind the DSA was built to address.
What the DSA allows
The DSA treats large platforms more like regulated products than neutral pipes. If a service creates systemic risks - for example, by enabling mass violations of privacy or child safety - the Commission can force changes, issue fines, or, for repeated, severe failures, suspend service in the EU.
Penalties can reach 6% of global turnover. X has already faced a DSA fine of €120 million for transparency failures. In extreme cases, the Commission can move to block access in the EU - a nuclear option, but it's on the table.
Another advantage for Brussels: market leverage. Access to hundreds of millions of consumers gives regulators real influence over platform behavior.
The politics behind enforcement
Experts welcome the probe but question follow-through. Some point to reported delays tied to broader EU-US trade tensions, a reminder that tech enforcement often collides with geopolitics.
Members of the European Parliament have pressed for stronger action and higher fines, arguing that current levels may not sway a company led by one of the wealthiest individuals on the planet.
What to watch next
There's no firm timeline. Expect months, not weeks. A significant fine looks more likely than an immediate block, but that could change if violations continue or risk mitigation remains weak.
The key test is whether the Commission compels tangible fixes: feature rollbacks, stronger child-safety controls, stricter user protections, and transparent reporting - not just a penalty payment.
Why this matters for government, IT, and development teams
AI features ship quickly. Compliance, safety, and abuse prevention often lag. This case spotlights what needs to be in place before launch, not after outrage.
Action checklist
- Risk assessments before launch: Document abuse scenarios (e.g., sexualized deepfakes, child exploitation), test at scale, and gate risky features by region and age.
- Safety by default: Block synthetic nudity and sexualized outputs, especially involving minors. Use multilayer filters (prompt, model, and output layers) and human review for edge cases - see the filtering sketch after this list.
- Geo-aware rollouts: If compliance is uncertain, disable high-risk features in the EU until DSA obligations are met. Build a kill switch for immediate shutdown - see the feature-flag sketch below.
- Detection and labeling: Add visible signals for AI-generated media, hash known harmful content, and partner with child-safety databases where lawful - see the hash-matching sketch below.
- User controls and redress: Make reporting one click away, triage at speed, and notify victims with removal and escalation options.
- Governance and audit: Maintain detailed logs, publish systemic risk reports, and enable third-party audits. Align legal, trust & safety, and engineering.
- Vendor clauses: Require suppliers to meet DSA-grade safeguards, disclose model behaviors, and support rapid takedown APIs.
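To make the multilayer-filter point concrete, here is a minimal Python sketch of a layered safety gate: a prompt-level check before generation, an output-level check after it, and a human-review queue for borderline results. The blocked terms, classifier functions, and thresholds are placeholders for illustration, not references to any real model or vendor API.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    reason: str
    needs_review: bool = False

# Illustrative only; real systems use trained intent classifiers, not term lists.
BLOCKED_TERMS = {"undress", "nudify", "remove clothing"}

def classify_nudity(image_bytes: bytes) -> float:
    """Placeholder for an output-image nudity classifier (0.0-1.0)."""
    return 0.0  # stub: replace with a real model call

def classify_minor(image_bytes: bytes) -> float:
    """Placeholder for a minor-presence classifier (0.0-1.0)."""
    return 0.0  # stub: replace with a real model call

def enqueue_for_review(prompt: str, image_bytes: bytes, reason: str) -> None:
    """Placeholder for a human-review queue."""
    print(f"queued for review: {reason}")

def check_prompt(prompt: str) -> Decision:
    """Layer 1: refuse clearly abusive requests before any generation happens."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return Decision(False, "prompt matched a blocked intent")
    return Decision(True, "prompt passed")

def check_output(image_bytes: bytes) -> Decision:
    """Layer 2: score the generated image; thresholds here are illustrative."""
    nudity = classify_nudity(image_bytes)
    minor = classify_minor(image_bytes)
    if minor > 0.2 and nudity > 0.2:
        return Decision(False, "possible minor in sexualized output", needs_review=True)
    if nudity > 0.8:
        return Decision(False, "sexualized output blocked")
    return Decision(True, "output passed", needs_review=nudity > 0.5)

def moderate(prompt: str, generate) -> Decision:
    """Run both layers; only call the generator if the prompt layer passes."""
    verdict = check_prompt(prompt)
    if not verdict.allowed:
        return verdict
    image = generate(prompt)
    verdict = check_output(image)
    if verdict.needs_review:
        enqueue_for_review(prompt, image, verdict.reason)
    return verdict
```

The design point is that refusal happens at more than one layer, so a request that slips past the prompt check can still be caught before the output ships.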
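The geo-aware rollout and kill switch can be as simple as a server-side feature flag consulted before any generation work. This sketch keeps the flags in an in-memory dict purely for illustration; a real deployment would read them from a config service so the switch takes effect without a redeploy, and the blocked-region set would cover all EU/EEA states rather than the subset shown.

```python
FLAGS = {
    "image_edit.enabled": True,  # global kill switch for the feature
    # Subset shown for illustration; a real gate would cover all EU/EEA states.
    "image_edit.blocked_regions": {"AT", "BE", "DE", "FR", "IE", "SE"},
}

def feature_enabled(feature: str, user_country: str) -> bool:
    """Gate a feature on both the kill switch and the caller's region."""
    if not FLAGS.get(f"{feature}.enabled", False):
        return False  # kill switch pulled: off everywhere
    if user_country.upper() in FLAGS.get(f"{feature}.blocked_regions", set()):
        return False  # region gated until compliance is demonstrated
    return True

def kill_switch(feature: str) -> None:
    """Disable the feature everywhere, immediately (e.g., during an incident)."""
    FLAGS[f"{feature}.enabled"] = False

# Usage: check the gate before doing any generation work.
if feature_enabled("image_edit", user_country="SE"):
    pass  # run the generation pipeline
else:
    pass  # return a "feature unavailable in your region" response
```

Because the check runs server-side on every request, flipping the flag stops the feature immediately, with no client update required.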
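For detection and labeling, the sketch below shows the shape of the control: hash every output, compare it against a vetted blocklist before delivery, and attach a visible AI-generated label to anything that ships. Exact SHA-256 matching is used only to keep the example self-contained; production systems match perceptual hashes supplied by child-safety partners, which this sketch does not implement.

```python
import hashlib

# Populated from a vetted hash list under the relevant legal agreements.
KNOWN_HARMFUL_HASHES: set[str] = set()

def content_hash(image_bytes: bytes) -> str:
    """Exact-match hash; stands in for a perceptual hash in this sketch."""
    return hashlib.sha256(image_bytes).hexdigest()

def release_image(image_bytes: bytes, metadata: dict) -> dict:
    """Block known-bad content and label everything else as AI-generated."""
    if content_hash(image_bytes) in KNOWN_HARMFUL_HASHES:
        # In production this would also trigger reporting and escalation.
        raise PermissionError("matched known harmful content; delivery blocked")
    return {**metadata, "ai_generated": True, "label": "AI-generated image"}
```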
For public bodies and regulated sectors
Update your acceptable-use policies to address AI image tools explicitly. Require platforms and vendors to demonstrate DSA risk mitigation for any generative feature that touches user content.
Consider network-level blocks on specific high-risk endpoints until providers can demonstrate compliant controls. Train staff on synthetic media risks and reporting pathways.
The bigger precedent
If the Commission forces concrete fixes, every large platform will take note - from AI image tools to multimodal assistants. If the case ends with a modest fine and little structural change, expect copycat features to proliferate.
Either way, the message is clear: AI features with high abuse potential will be treated like high-liability products in the EU.
Build internal capability
If you're standing up AI governance, safety reviews, or compliance training for product teams, make it practical and role-specific. Start with a short program, then scale with playbooks and audits.