Yew Tree Primary May Quit X Over Grok Deepfake Fears

A Sandwell primary school may pause or quit X after reports that the platform's AI tool, Grok, was used to create sexualised images of real people, including children; Ofcom is investigating. School leaders are weighing parent communications against safety.

Categorized in: AI News, Education
Published on: Jan 15, 2026

Primary school weighs leaving X over AI image abuse fears

A primary school in Sandwell is considering pausing or closing its account on X after reports that the platform's AI tool, Grok, has been used to manipulate photos to create sexualised images of real people, including children. The tool is currently being investigated by Ofcom for online safety concerns.

Headteacher Jamie Barry said: "We want to use social media to celebrate our school and our community, but it has to be on a platform that does not put our children or our staff at risk."

Why this matters for schools

Yew Tree Primary set up its X account in 2019 to rebuild trust after a tough Ofsted period. Since then, it has become a key channel for parents, carers, and prospective families to see the school's values and learning in action.

The immediate concern isn't just the AI tool. It's how the platform's leadership has responded to the allegations. Barry said: "If an organisation has a safety flaw, you would expect a quick and efficient response. Elon Musk's suggestion that this is about the UK censoring free speech is extremely concerning."

School leaders at Yew Tree are now debating whether to pause or permanently close the account. "It's such an established platform for us," Barry said. "Parents now expect updates there, so we don't want to disband it unnecessarily. But we're seeing reputable organisations pause or leave X altogether, and that leaves us with a real dilemma."

A decision is expected soon. "At this stage, we're leaning towards leaving - certainly pausing it for the time being," Barry added.

Practical next steps for education leaders

  • Run a fresh risk assessment on all social channels. Document how you'll mitigate image misuse, impersonation, and AI-generated manipulation.
  • Tighten image and consent policies. Prioritise group shots, reduce resolution, avoid close-ups of individual children, and add clear watermarks or overlays (see the sketch after this list).
  • Shift critical updates to owned channels first (school website, email, MIS/parent apps). Treat social as secondary distribution.
  • Limit features that increase exposure. Review DMs, tagging, location data, and who can reply or mention the school.
  • Prepare parent communications for any pause or platform change. Offer clear alternatives and timelines.
  • Brief staff on posting guidelines and escalation routes. Keep a single point of approval for images and captions.
  • Trial safer alternatives for community updates (e.g., closed parent portals or newsletters) and review uptake before exiting any platform.
  • Report concerns to regulators and your local authority. See Ofcom's online safety duties for context.
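The image guidance above (lower resolution, visible watermarks) can be automated so every photo is processed before it reaches any social platform. The snippet below is a minimal sketch assuming a Python workflow with the Pillow library; the file names, size cap, and watermark text are illustrative, not a prescribed policy.

```python
# Minimal sketch: downscale and watermark a photo before it is posted to social media.
# Assumes the Pillow library (pip install Pillow); file names and settings are illustrative.
from PIL import Image, ImageDraw

MAX_SIZE = (1280, 1280)            # cap the longest edge to limit reuse of the image
WATERMARK = "Yew Tree Primary"     # illustrative overlay text

def prepare_for_social(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")   # normalise mode so JPEG export works
    img.thumbnail(MAX_SIZE)                     # in-place resize, preserves aspect ratio
    draw = ImageDraw.Draw(img)
    # Place the watermark near the bottom-left corner, with a simple shadow for legibility.
    x, y = 10, img.height - 30
    draw.text((x + 1, y + 1), WATERMARK, fill=(0, 0, 0))
    draw.text((x, y), WATERMARK, fill=(255, 255, 255))
    img.save(dst_path, quality=85)              # lightly compressed JPEG output

if __name__ == "__main__":
    prepare_for_social("sports_day_group.jpg", "sports_day_group_social.jpg")
```

If a school adopts something like this, running it as the single approval step before upload keeps the image policy enforceable by one named member of staff rather than relying on everyone remembering the rules.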

What to watch

Follow the outcome of Ofcom's investigation and any platform policy changes that address AI misuse. If you choose to remain on X, set a review date and clear safety thresholds that would trigger a pause or exit.

Helpful resources

Reviewing your school's AI and social media approach? This curated catalogue can help you upskill relevant staff and governors: AI courses by job.

