UK Law Criminalises AI-Generated Intimate Images Without Consent
The UK has criminalised both creating and requesting fake intimate images generated with AI. From 6 February 2026, Section 138 of the Data (Use and Access) Act 2025 makes it illegal to generate or solicit non-consensual deepfake nudes.
The law amends the Sexual Offences Act 2003 to include AI-generated and digitally manipulated intimate images. The offence captures both the act of creation and the request to create, even if the image is never produced.
This matters to legal professionals because the legislation signals a shift in how the law treats technology-enabled harm: the harm is recognised at the point of creation or request, before any distribution occurs.
Four-Pillar Enforcement Framework
The government is building enforcement around prevention, detection, enforcement and takedown.
- Prevention: Measures are being explored to criminalise "nudification" apps that remove clothing from images.
- Detection: Technology companies must proactively identify non-consensual intimate images. The government is consulting on whether AI-generated content must carry visible labels to distinguish deepfakes from authentic images.
- Enforcement: Creating or sharing non-consensual intimate images becomes a "priority offence", treated with the same seriousness as child abuse or terrorism. Ofcom can fine technology companies up to 10 per cent of global turnover for failing to detect these images or mitigate algorithmic harms. Senior managers face criminal liability for failure to remove illegal content.
- Takedown: A proposed amendment to the Crime and Policing Bill requires technology companies to remove content within 48 hours of notification. Digital fingerprinting technology must prevent re-uploading.
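The "digital fingerprinting" in the takedown pillar generally means hashing removed content and blocking any upload that matches. As a minimal illustrative sketch only (the statute does not prescribe an implementation, and all names here are hypothetical), an exact-match version can be built on cryptographic hashes; production systems instead use perceptual hashes such as PhotoDNA or PDQ so that re-encoded or lightly edited copies still match.

```python
import hashlib


class FingerprintRegistry:
    """Toy registry of fingerprints for content removed after a takedown notice.

    SHA-256 only catches byte-for-byte identical re-uploads; real platforms
    use perceptual hashing so altered copies are still detected. This class
    is a hypothetical illustration, not any platform's actual mechanism.
    """

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Derive a stable hex fingerprint from the raw bytes.
        return hashlib.sha256(content).hexdigest()

    def register_removal(self, content: bytes) -> None:
        # Record the fingerprint of content removed under a notice.
        self._blocked.add(self.fingerprint(content))

    def is_blocked(self, content: bytes) -> bool:
        # Check an incoming upload against the block list.
        return self.fingerprint(content) in self._blocked
```

On upload, the platform would call `is_blocked` before publishing; a match means the item is a re-upload of previously removed material and is rejected automatically.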
Platform Obligations Expand
The definition of "platform" is broad. It covers user-to-user services including social media, messaging apps, video-sharing platforms and online gaming.
Companies must remove flagged content quickly. The government says this means individuals report once and receive protection across platforms, rather than chasing takedowns separately.
What This Means for Your Practice
Legal professionals should understand the scope of these obligations. The consultation on online safety covers content moderation, algorithmic risk and labelling requirements for synthetic media.
Senior managers at technology companies now face personal criminal liability. Compliance failures can trigger substantial fines. This creates new exposure for in-house counsel and compliance teams.
The 48-hour takedown requirement is a hard deadline with enforcement teeth. Companies must have systems in place to meet it.
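A compliance system must track the statutory clock for every notification. As a hypothetical sketch (function names and the choice to measure from the notification timestamp are assumptions for illustration, not a reading of the Bill), the deadline arithmetic is straightforward:

```python
from datetime import datetime, timedelta, timezone

# Assumed 48-hour statutory window, measured from receipt of the notification.
TAKEDOWN_WINDOW = timedelta(hours=48)


def takedown_deadline(notified_at: datetime) -> datetime:
    """Latest permissible removal time for a notified item."""
    return notified_at + TAKEDOWN_WINDOW


def is_overdue(notified_at: datetime, now: datetime) -> bool:
    """True once the statutory window has elapsed without removal."""
    return now > takedown_deadline(notified_at)
```

Using timezone-aware timestamps (as above, with `timezone.utc`) avoids off-by-one-hour errors around daylight-saving transitions when notifications and removals are logged in different systems.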
For further context on how AI regulation is reshaping legal practice, see AI for Legal and Generative AI and LLM.