AI in Schools Can Help Learning, but Kids Need Protection From Misinformation
Researchers at Children's Hospital of Philadelphia found that artificial intelligence can support child development through better learning and creativity, but the technology poses real risks without proper safeguards. The findings, published this month in the journal Pediatrics, come as nearly two-thirds of teens have already used an AI chatbot.
The study compiled early research on AI's effects across different age groups. Dr. Robert Grundmeier, a primary care pediatrician at CHOP, led the review with colleagues who have expertise in pediatrics, psychology and AI.
"Right now, many of us are participating in this natural experiment of figuring out how these tools might be useful and in what ways they might be harmful," Grundmeier said. "What we really need is some organized and rigorous research to really help to answer those questions."
Young Children: Language Gains, but Confusion About Reality
For children 5 and younger, interactive AI storytelling programs can support vocabulary development and family engagement. Grundmeier described a practical example: a tired parent using an AI tool to generate a personalized bedtime story rather than creating one from scratch.
The risk emerges when young children cannot distinguish between AI and genuine human interaction. "Although it can appear to be empathic, it can in many ways pretend to be human, it fundamentally is not human," Grundmeier said. "It is just a lot of mathematics happening behind the scene."
Older Children: Overreliance and Lost Skills
As children use AI in school or at home, the technology can tailor education to individual needs and address gaps in reading and math. But educators worry about "de-skilling," in which students lose abilities through overreliance on AI.
In early childhood education, the concern extends to "never-skilling," where children never learn a task in the first place because they ask AI to do it rather than using the tool to help them learn.
Teens: Misinformation and Mental Health Risks
Teens face a different challenge: difficulty identifying when AI produces false information. This risk intensifies when young people consult AI chatbots about mental health or suicidal thoughts.
"There's research that shows that some of these AI tools when discussing mental health care topics, they can provide really very bad advice," Grundmeier said.
On the positive side, researchers documented teens using AI constructively, such as practicing for difficult conversations or learning about subjects when no other resources were available.
What Parents and Schools Need
Many families want guidance on how AI works and how to ensure safe use. Parents frequently tell Grundmeier: "I don't really understand this, it scares me. My child is getting exposed to it, but I don't know how to guide them."
Pennsylvania is establishing AI literacy programs, safety standards and reporting tools for harmful AI affecting children and vulnerable populations. Educators should understand these tools to help students evaluate information critically.