Government consultation to probe AI chatbot risks for children - and move fast
The Government will open a consultation next week on strengthening online safety for children, with AI chatbots set to be a central focus. Departments will seek views from experts, parents, young people, teachers, and industry to identify practical measures that can be implemented quickly.
Officials are paying close attention to the risk of young people forming "emotionally dependent relationships" with conversational AI. The concern: increasingly human-like bots may encourage vulnerable users to share intimate thoughts and ascribe empathy where none exists.
Why chatbots are on the agenda
"As conversational AI systems have become more and more life-like, increasing numbers of young people are using chatbots as companions," a DSIT spokesperson said. Early research warns that some children may become entangled in relationships with chatbots that feel caring or responsive, even when the system is only simulating those traits.
The full effects are not yet known. The consultation will test whether features that mimic human relationships - voice, avatars, tone, memory, and long-form dialogue - increase risks for children and how those features should be limited or redesigned.
Beyond chatbots: DMs, pairing tools, and livestreaming
The exercise will also review risks from direct messaging, stranger-pairing tools, and livestreaming. Ofcom data indicates that 57% of UK children aged 3-17 have used livestreaming apps or sites, rising to around 80% among those aged 13-15.
See Ofcom's ongoing research on children's media use for baseline evidence and trend data: Children and parents: media use and attitudes.
Timelines and legal context
DSIT signalled it intends to act "within months, not years" after the consultation closes. Under the Online Safety Act 2023, now in force, platforms must enforce age limits consistently and protect child users, with Ofcom setting and enforcing codes.
For implementation status and guidance, see Ofcom's Online Safety hub: ofcom.org.uk/online-safety.
Policy options on the table
The consultation may explore mandatory overnight curfews to support sleep, along with age thresholds and exceptions. It will also consider safer-by-default product designs and stronger controls on high-risk features.
Positions from ministers and stakeholders
Technology Secretary Liz Kendall said: "We're launching the most ambitious consultation of its kind, looking at a sweep of measures to make every part of children's online lives safer." She emphasised the goal of giving young people "the childhood they deserve" while preparing them for a future with fast-moving technology.
Andy Burrows, chief executive of the Molly Rose Foundation, called the consultation "a crucial opportunity to decisively strengthen online safety laws and stand up for children and families," urging solutions that follow the evidence rather than quick fixes.
Shadow science secretary Julia Lopez argued for a complete social media ban for under-16s and removing phones from schools. She framed the consultation as an alternative to immediate legislative action.
What government teams can action now
- Risk-rate chatbot features: identity/persona realism, memory persistence, romantic/therapeutic framing, voice cloning, and long-session dialogue. Prioritise restrictions on high-risk combinations.
- Mandate transparency cues: persistent AI labels, synthetic voice disclosures, and "this is a bot" reminders after set intervals or emotionally charged exchanges.
- Constrain anthropomorphism: ban romantic/sexualised companion modes for minors; default to neutral tone and limited affect; prohibit claiming empathy, care, or confidentiality.
- Session and content guards: cooling-off timers, sentiment-spike detection, escalation to human help resources, and safe-exit prompts when distress indicators appear.
- Data minimisation by design: block collection of sensitive data from minors; strip long-term memory; require child-specific retention and deletion rules.
- Age assurance standards: align with Ofcom/ICO guidance; verify before enabling higher-risk features like private DMs or livestreaming.
- Auditability: require immutable logs for safety incidents, feature usage, and child-risk metrics; set thresholds for incident reporting to Ofcom.
- Procurement levers: include child-safety-by-design criteria in contracts; require third-party safety evaluations and red-teaming focused on grooming and dependence risks.
- Default discovery limits: make stranger-pairing opt-in rather than on by default; restrict recommendations that steer children toward parasocial-attachment content.
- Crisis pathways: integrate signposting to trusted helplines and moderation escalation for self-harm, abuse, or grooming signals.
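To make the first action above concrete, the following sketch scores a chatbot configuration against child-risk features and maps the score to a restriction tier. The feature names, weights, and thresholds are invented for illustration; they are not drawn from any official code or guidance, and real thresholds would need to be set through the consultation evidence.

```python
# Illustrative sketch only: hypothetical feature weights for rating
# chatbot configurations. None of these values are official.
HIGH_RISK_WEIGHTS = {
    "persona_realism": 3,       # human-like identity or avatar
    "memory_persistence": 3,    # recalls personal details across sessions
    "romantic_framing": 5,      # companion or romantic modes
    "therapeutic_framing": 4,   # therapy-like claims
    "voice_cloning": 4,
    "long_session_dialogue": 2,
}

def risk_score(enabled_features: set[str]) -> int:
    """Sum weights for enabled features; combinations compound risk."""
    return sum(HIGH_RISK_WEIGHTS.get(f, 0) for f in enabled_features)

def restriction_tier(enabled_features: set[str]) -> str:
    """Map a score to an action tier, mirroring the idea of
    prioritising restrictions on high-risk combinations.
    The thresholds below are placeholders."""
    score = risk_score(enabled_features)
    if score >= 8:
        return "disable-for-minors"
    if score >= 4:
        return "restrict-and-label"
    return "monitor"

# A realistic persona with persistent memory and romantic framing
# lands in the strictest tier under these placeholder weights.
print(restriction_tier({"persona_realism", "memory_persistence",
                        "romantic_framing"}))  # disable-for-minors
```

The design point is that risk compounds: each feature alone may be tolerable, but an additive (or stricter) scoring rule lets a regulator target the combinations that most plausibly drive emotional dependence.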
Key questions to test in consultation
- Which chatbot features most strongly increase emotional dependence among minors, and at what thresholds should they be restricted or disabled?
- What evidence-based cooling-off or time-bound measures reduce harm without removing beneficial use (homework help, literacy, creative play)?
- How should platforms prove effective age assurance for enabling DMs, pairing, or livestreaming?
- What metrics should Ofcom require (e.g., disclosures of intimate thoughts, session length, sentiment variance, repeat-companionship sessions)?
- Where should legal liability sit for "companion" marketing, therapy-like claims, or failure to disable risky modes for minors?
- What enforcement cadence enables changes "within months" while giving industry clear technical standards?
Evidence and research needs
Government will need robust studies on attachment formation with AI, dose-response effects of long-session usage, and the impact of design cues on perceived empathy. Priority data includes session length distribution, user age bands, and rates of intimate self-disclosure to bots.
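The priority metrics above can be aggregated from platform session logs. The sketch below, under the assumption of a simple per-session record format (the field names `age`, `minutes`, and `intimate_disclosure` are hypothetical), shows one way to compute a session-length distribution, age-band breakdown, and self-disclosure rate.

```python
# Illustrative sketch: aggregating priority child-safety metrics from
# hypothetical session records. Field names are assumptions, not a
# real platform schema.
from statistics import median

sessions = [  # hypothetical log records
    {"age": 12, "minutes": 95, "intimate_disclosure": True},
    {"age": 14, "minutes": 20, "intimate_disclosure": False},
    {"age": 16, "minutes": 45, "intimate_disclosure": True},
]

def age_band(age: int) -> str:
    """Bucket ages into illustrative reporting bands."""
    if age < 13:
        return "under-13"
    return "13-15" if age <= 15 else "16-17"

minutes = sorted(s["minutes"] for s in sessions)
disclosure_rate = (
    sum(s["intimate_disclosure"] for s in sessions) / len(sessions)
)
minutes_by_band: dict[str, list[int]] = {}
for s in sessions:
    minutes_by_band.setdefault(age_band(s["age"]), []).append(s["minutes"])

print(f"median session: {median(minutes)} min")
print(f"intimate-disclosure rate: {disclosure_rate:.0%}")
print(f"bands observed: {sorted(minutes_by_band)}")
```

Even a small aggregation like this illustrates why consistent log schemas matter: dose-response analysis of long sessions is only possible if every platform reports length and age band the same way.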
Independent evaluations and youth panels should validate design choices before rollout. Public reporting will help maintain trust and reduce the incentive to over-humanise bots for engagement.
How departments can prepare submissions
- Map current and near-term features used by UK minors across major platforms; identify quick wins that can be mandated via codes or terms enforcement.
- Draft a minimum standard for "child-safe conversational AI" covering identity cues, memory limits, and emotion-related responses.
- Propose measurable guardrails and incident thresholds suitable for Ofcom oversight within existing Act powers.
- Engage schools, CAMHS, and safeguarding leads early to align on crisis referral and messaging.
For teams developing policy capability around AI governance and safety assessments, see the AI Learning Path for Policy Makers.
The consultation opens next week. Departments, agencies, and local authorities should coordinate evidence now, with a focus on concrete, testable standards that reduce risk without blocking useful services for young people.