Government rules out ban on generative AI tools after public petition
Ministers have confirmed there are no plans to ban AI tools that generate images, video or audio, despite a public petition calling for a full prohibition across social media and television.
As of Monday, February 16, the petition had 11,941 signatures. It argues AI-generated content fuels misinformation, enables harassment and threatens jobs in creative sectors.
What the government said
The Department for Science, Innovation and Technology responded that the priority is safe use, not prohibition. "The Government does not have any plans to legislate to ban the AI generation of images, videos and audio," the statement said, citing the benefits to daily life, public services and the economy.
Officials acknowledged real risks: misinformation, harmful content and disruption to the labour market. They stressed a willingness to legislate where existing laws fall short, noting new offences covering the creation of non-consensual intimate images and an intention to ban AI "nudification" tools through the Crime and Policing Bill.
The statement also recognised concerns from the UK's creative industries and pledged to protect human creativity while enabling innovation. On jobs, the government said AI should "drive opportunity, not insecurity," and confirmed a specialist unit has been set up to monitor AI's impact on employment and shape policy responses.
Why this matters for public bodies
A ban is off the table, which means demand for AI-enabled services will continue to grow. The pressure shifts to strong governance, targeted enforcement, and practical support for teams on the front line.
For those in policy, operations, communications, legal, HR and procurement, the message is clear: use is permitted, risk must be managed, and harmful use will face tighter rules.
Immediate actions for government teams
- Policy and guidance: Update internal AI use policies to reflect "allow with safeguards." Define approved tools, required approvals, red lines (e.g., synthetic nudity, impersonation), and escalation paths.
- Risk management: Add AI-specific entries to risk registers covering misinformation, sensitive data exposure, deepfakes, and procurement risks. Assign clear owners and review cycles (a minimal register-entry sketch follows this list).
- Legal and enforcement: Map current offences and pending measures (e.g., non-consensual intimate images, planned nudification ban) to your incident response processes and training.
- Communications: Prepare protocols for suspected deepfakes affecting your department or ministers. Include rapid verification, media holding lines, and takedown/escalation routes with platforms.
- Procurement and assurance: Require vendors to detail AI features, safety controls, audit logs, content provenance signals, and compliance with UK law. Build these into contract clauses and SLAs.
- Data protection: Run DPIAs for AI uses that touch personal data or generate public-facing content. Document lawful bases, minimisation, retention and redress mechanisms.
- Workforce planning: Engage HR and unions early. Track roles most exposed to change, plan retraining pathways, and set expectations on augmentation vs. replacement.
- Incident handling: Stand up a cross-functional triage flow for harmful synthetic content (reporting, evidence capture, legal review, platform escalation, comms response).
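To make the risk-register action concrete, here is a minimal sketch of what an AI-specific register entry might capture, written in Python. The schema, field names, risk categories and scoring are illustrative assumptions for this article, not an official or prescribed government format.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative categories drawn from the actions above; not an official taxonomy.
AI_RISK_CATEGORIES = {"misinformation", "data_exposure", "deepfake", "procurement"}

@dataclass
class AIRiskEntry:
    """One AI-specific line in a departmental risk register (hypothetical schema)."""
    title: str
    category: str            # one of AI_RISK_CATEGORIES
    owner: str               # a named individual, not a team inbox
    likelihood: int          # 1 (rare) to 5 (almost certain)
    impact: int              # 1 (minor) to 5 (severe)
    review_every_days: int = 90
    last_reviewed: date = field(default_factory=date.today)

    def __post_init__(self) -> None:
        if self.category not in AI_RISK_CATEGORIES:
            raise ValueError(f"Unknown category: {self.category}")

    @property
    def score(self) -> int:
        # Simple likelihood x impact score, common in public-sector registers.
        return self.likelihood * self.impact

    def review_due(self, today: date | None = None) -> bool:
        # True once the review cycle has elapsed since the last review.
        today = today or date.today()
        return today >= self.last_reviewed + timedelta(days=self.review_every_days)

# Example: a deepfake risk owned by the head of communications.
entry = AIRiskEntry(
    title="Synthetic video impersonating a minister",
    category="deepfake",
    owner="Head of Communications",
    likelihood=3,
    impact=5,
)
print(entry.score, entry.review_due())
```

The structure is the point: every entry carries a named owner and a review cycle that can be checked automatically, rather than a register that goes stale between annual audits.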
Policy signals to watch
- Targeted offences: Non-consensual intimate imagery offences are in place; a ban on AI nudification tools is planned through the Crime and Policing Bill. Expect more narrowly focused rules rather than blanket bans.
- Safety standards: Departments should anticipate guidance on provenance, watermarking signals and auditability for generative outputs used in public services.
- Labour market oversight: The specialist unit monitoring jobs will inform future interventions. Keep workforce data and impact assessments current.
Practical next steps (90-day plan)
- 30 days: Inventory AI use across teams; classify risk; freeze high-risk unapproved use. Create a single, short AI policy addendum for staff.
- 60 days: Pilot detection/provenance checks in comms workflows (a toy heuristic appears after this plan); embed AI risk questions into procurement templates; train incident managers on deepfake scenarios.
- 90 days: Run a red-team exercise on misinformation and synthetic media; brief senior leaders; publish a staff-facing FAQ with do/don't examples.
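As a toy illustration of the 60-day provenance pilot, the sketch below scans a media file for byte markers associated with C2PA/JUMBF content credentials. The marker list and the idea that their presence signals embedded provenance data are assumptions for illustration; this is a crude presence check, not verification, and no substitute for a proper C2PA validation tool.

```python
from pathlib import Path

# Byte patterns associated with embedded content credentials: "jumb" marks a
# JUMBF box (the container C2PA uses) and "c2pa" appears in C2PA manifests.
# Presence is a hint, not proof; absence proves nothing, since platforms
# often strip provenance metadata on upload.
PROVENANCE_MARKERS = (b"jumb", b"c2pa")

def has_provenance_hint(path: str | Path, max_bytes: int = 4 * 1024 * 1024) -> bool:
    """Return True if the file contains byte patterns suggesting embedded
    provenance metadata. Heuristic only; verify with a real C2PA validator."""
    data = Path(path).read_bytes()[:max_bytes]
    low = data.lower()
    return any(marker in low for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    import sys
    for name in sys.argv[1:]:
        verdict = "possible provenance data" if has_provenance_hint(name) else "no marker found"
        print(f"{name}: {verdict}")
```

In a real comms workflow this kind of check would only triage inbound media for a human reviewer; decisions about authenticity should rest on full credential validation and editorial judgement.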
For background on the government's approach to AI regulation and safety, see the AI regulation white paper on GOV.UK.
Upskilling your team
If your department is planning structured training for analysts, comms, policy or operations staff, curated course paths by role can accelerate adoption while reducing risk. Explore options here: Complete AI Training - courses by job.
The takeaway: AI-generated content will remain legal to create and use, but the guardrails are tightening. Align your policies, contracts and workforce plans now, so you can use the tech responsibly and respond fast when things go wrong.