Mind launches year-long inquiry into AI and mental health after Guardian reveals dangerous Google advice

Mind will run a year-long inquiry into AI and mental health after reports of dangerous advice from Google's AI Overviews. It seeks evidence, safeguards and stronger oversight.

Published on: Feb 21, 2026

Mind launches year-long inquiry into AI and mental health after safety concerns

Mind, the mental health charity for England and Wales, is launching a year-long commission into artificial intelligence and mental health after an investigation exposed "very dangerous" advice in Google's AI Overviews. The inquiry, described as the first of its kind globally, will examine risks, safeguards and standards as AI tools increasingly influence health decisions.

The commission will bring together clinicians, researchers, people with lived experience, health providers, policymakers and tech companies. The aim is clear: build a safer digital mental health ecosystem with strong regulation, evidence-based standards and practical guardrails.

Why this matters for healthcare

AI Overviews are shown to billions of users each month and sit above traditional search results. Investigations found inaccuracies across cancer, liver disease, women's health and mental health, including advice on psychosis and eating disorders that experts called "very dangerous."

Dr Sarah Hughes, Mind's CEO, warned that "dangerously incorrect" guidance can stop people seeking treatment, reinforce stigma and, in the worst cases, put lives at risk. The clinical impact is real: delayed care, harmful self-management and erosion of trust in legitimate services.

Mind's perspective

Hughes said AI has "enormous potential" to widen access and strengthen services, but only if it's developed and deployed responsibly with safeguards proportionate to the risk. "People deserve information that is safe, accurate and grounded in evidence, not untested technology presented with a veneer of confidence."

Rosie Weatherley, Mind's information content manager, noted that while pre-AI search "wasn't perfect," users often reached credible sites with nuance, lived experience, case studies and clear routes to support. AI Overviews, by contrast, can offer a clinical-sounding summary that "gives an illusion of definitiveness" while removing essential context and trust signals.

What Google says

Google says AI Overviews are "helpful" and "reliable" and that it invests significantly in their quality, especially for health. It also says it displays relevant, local crisis hotlines when queries suggest someone might be in distress, and that it cannot comment on specific examples without reviewing them.

After reports surfaced, Google removed AI Overviews for some medical searches - but not all.

What the commission will do

  • Gather evidence on where AI helps or harms people with mental health problems.
  • Create an open space for lived experience to be "seen, recorded and understood."
  • Develop proposals for regulation, standards and safeguards across products and services.
  • Convene clinicians, providers, policymakers and tech leaders to align on safe deployment.

Actions healthcare teams can take now

  • Guide patients to trusted, evidence-based sources and make those links highly visible in discharge notes, portals and clinic communications. Use frameworks such as the NHS Digital Technology Assessment Criteria (DTAC) to vet tools you recommend.
  • Update clinical safety cases, SOPs and risk registers to include AI-generated information encountered by patients (e.g., search summaries, chatbots). Define escalation paths for content that could discourage treatment or increase self-harm risk.
  • Educate staff and patients on the limits of AI Overviews: they may omit context, cite weak sources or be flat-out wrong. Reinforce that mental health advice should be grounded in clinical guidance and local pathways.
  • Require transparent sourcing and human review for any AI features embedded in your services. For high-risk topics (suicide, psychosis, eating disorders), mandate human-in-the-loop review and crisis routing; a minimal sketch follows this list.
  • Establish incident reporting for harmful AI advice patients encounter outside your service (e.g., via search). Share patterns with local integrated care systems (ICSs), professional bodies and, where relevant, platform safety teams.
  • For policy and governance teams, align risk management with international guidance such as the WHO's Ethics and Governance of Artificial Intelligence for Health.
  • Build literacy across your organisation so clinicians, digital teams and comms staff speak the same language on model limits, bias, safety and evaluation. See AI for Healthcare for practical training resources.
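
As a purely illustrative sketch of the human-in-the-loop and crisis-routing bullet above, the Python snippet below shows one way a digital team might gate AI-generated answers on high-risk mental health topics before they reach a patient. Everything in it, including the HIGH_RISK_TOPICS list, CRISIS_MESSAGE text and route_ai_response function, is a hypothetical placeholder and not part of Mind's commission, NHS DTAC or any published standard.

    # Minimal, hypothetical sketch: gate AI-generated text on high-risk topics
    # before it reaches a patient. The topic keywords, message text and function
    # names are illustrative placeholders, not clinical or regulatory guidance.
    from dataclasses import dataclass

    HIGH_RISK_TOPICS = {
        "suicide": ["suicide", "kill myself", "end my life"],
        "psychosis": ["psychosis", "hearing voices", "hallucinating"],
        "eating_disorders": ["anorexia", "bulimia", "purging"],
    }

    CRISIS_MESSAGE = "If you are in crisis, contact your local crisis line or emergency services."

    @dataclass
    class RoutingDecision:
        topic: str | None          # matched high-risk topic, if any
        needs_human_review: bool   # hold the AI answer for clinician sign-off
        show_crisis_routing: bool  # surface crisis contacts alongside any reply

    def route_ai_response(user_query: str, ai_answer: str) -> RoutingDecision:
        """Decide whether an AI-generated answer can be shown without review."""
        text = f"{user_query} {ai_answer}".lower()
        for topic, keywords in HIGH_RISK_TOPICS.items():
            if any(keyword in text for keyword in keywords):
                # High-risk match: require human-in-the-loop and crisis routing.
                return RoutingDecision(topic, True, True)
        # No high-risk match: the answer can pass, but should still be logged.
        return RoutingDecision(None, False, False)

    decision = route_ai_response("I've been hearing voices at night", "example AI answer")
    if decision.needs_human_review:
        print("Held for clinician review.", CRISIS_MESSAGE)

Keyword matching alone is far too crude for real clinical use; the point is simply that crisis routing and human review can be made explicit, auditable steps rather than implicit model behaviour.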

What to watch next

Over the next year, expect Mind's commission to publish findings and recommendations for safer design, deployment and oversight of AI in mental health. There will likely be opportunities for clinicians, providers and people with lived experience to contribute evidence and case studies.

The goal isn't to slow useful innovation. It's to ensure the tools people meet first - often in a search box - do not cause harm and can be trusted when it matters most.

