MPs demand swift Ofcom action over exploitative Grok AI images of women and children

MPs warn Grok AI is being misused to create sexualised images, urging Ofcom to act. Schools should tighten safeguards now: update policies, reporting routes, and staff training.

MPs raise alarm over Grok AI images: what educators need to know

On 13 January 2026, MPs voiced deep concern over reports that the Grok AI chatbot is being used to generate undressed and sexualised images of women and children. The Education Committee Chair has written to Ofcom Chief Executive Melanie Dawes, warning that continuing to offer the service on a paid basis, rather than suspending it, still enables material that could amount to intimate image abuse and the creation of illegal content.

The Committee called the reports unacceptable and backed Ofcom's investigation, stressing the serious harm caused by the sexualisation and exploitation of women and children online. They urged regulators to act decisively and ensure platforms operating in the UK put strong, proactive safeguards in place.

Why this matters for schools and colleges

What happens online spills into classrooms, corridors, and communities. Deepfake tools and image-generation models can be used to harass, coerce, or shame students and staff, increasing safeguarding workload and risk.

The threat profile includes non-consensual imagery, sextortion, peer-on-peer abuse, and the viral spread of harmful content. Your policies and responses need to explicitly cover AI-generated imagery, not just traditional media.

Immediate steps for education leaders

  • Update safeguarding and child protection policies to cover AI-generated imagery, deepfakes, and image-based abuse. Align with KCSIE and your local procedures.
  • Set clear reporting routes for image-based abuse: Designated Safeguarding Lead (DSL), police where appropriate, CEOP for child protection, and platform takedown requests. Preserve evidence securely; a minimal record-keeping sketch follows this list.
  • Audit devices, filters, and monitoring. Block known risky services and review BYOD rules to reduce exposure on personal devices used on site.
  • Brief all staff on indicators of image-based abuse, how to respond without victim-blaming, and when to escalate to external agencies.
  • Deliver student education on consent, digital footprints, and what to do if they're targeted or pressured to share images.
  • Communicate with parents and carers. Share signs to watch for, how to talk about consent and coercion, and where to report concerns.
  • Agree expectations with any platforms your community uses: default safety settings, age checks, takedown speed, and data handling.
  • Ensure access to trauma-informed pastoral and mental health support for affected students.
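
A note on the evidence step above: the sketch below (Python) shows one way a DSL or IT lead might keep a tamper-evident record of preserved material, pairing a SHA-256 fingerprint with a UTC timestamp so any later copy can be verified as unaltered. The file names, case ID format, and log layout are illustrative assumptions, not a prescribed standard. Staff should never view, copy, or move suspected illegal imagery themselves; follow local procedures and police/CEOP guidance on what may be retained and by whom.

```python
# Illustrative only: log a tamper-evident record for already-secured material.
# Paths, the case ID format, and the log layout are hypothetical, not a standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_evidence(file_path: str, case_id: str,
                    log_path: str = "safeguarding_log.jsonl") -> dict:
    """Append a timestamped SHA-256 fingerprint for an already-secured file."""
    digest = hashlib.sha256(Path(file_path).read_bytes()).hexdigest()
    entry = {
        "case_id": case_id,              # e.g. your internal DSL reference
        "file": file_path,
        "sha256": digest,                # lets a later copy be verified as unaltered
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Usage (hypothetical filenames):
# record_evidence("secured/report_0042.png", case_id="DSL-0042")
```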

Questions to put to any platform used by your community

  • How do you prevent minors from accessing harmful features? What age assurance is in place?
  • Are safety guardrails on by default, and do they block sexualised image generation?
  • What are your response times for illegal imagery? How do you escalate to law enforcement?
  • Do schools get admin visibility, audit logs, and a clear takedown process?
  • What data is stored, for how long, and how quickly can it be deleted on request?
  • Are protections identical across free and paid tiers?
  • How are third-party AI models vetted and monitored?

How to talk to students about AI image abuse

  • Keep definitions clear and age-appropriate. Avoid graphic detail; focus on consent, legality, and harm.
  • Reinforce: don't reshare, report immediately, and seek help from a trusted adult.
  • Normalise help-seeking and protect the target from blame. Address bystander responsibility.
  • Document incidents, safeguard first, and follow your referral pathways.

What to watch next

Ofcom's investigation is under way, and the Committee has asked for regular updates. Expect firmer requirements for platforms serving UK users, including safety by default and faster takedowns.

In the meantime, document your school's actions, review policies termly, and keep governors and trustees informed. This is a moving target: tight feedback loops and clear accountability help you stay ahead.

Staff development: build AI literacy and safety capability

If your team needs structured training on AI use, risk, and classroom practice, explore role-specific options. Start with a short, practical course plan and build from there.

Browse AI courses by job role

