Parents urged to watch how kids use AI as tech spreads, experts say
AI is showing up in homework, search, social apps, and even "companions." The upside is real. So are the risks-especially for teens who are still learning how to build relationships and make sense of advice.
One Pittsburgh family put it plainly. "It shows up a lot on just anything, on every browser, every mainstream thing," said 14-year-old David Abt. His mom, Lauren, added that they're also worried about the environmental footprint of large-scale AI use.
Companion AI is popular with teens-here's the concern
Generative AI like ChatGPT and Gemini gets most of the attention. But "companion AI" is growing fast-apps built to simulate friendship or romance. Common Sense Media reports that 72% of teenagers have used AI companions at least once, and more than half use them a few times per month. About a third say they use these tools for social or romantic interactions, emotional support, or friendship.
UPMC psychiatrist Dr. Patrick Buckley is concerned. "As these teens are growing and developing, an important part of that developmental process is learning to interact with a real person who has different needs than your own and different ideas than your own. And that's not something kids are going to learn from a companion chatbot."
AI isn't a therapist-kids still need people
A recent JAMA report found that about 13% of young people now use AI chatbots for mental health advice. There have also been reports of two teenagers who died by suicide after extensive interactions with chatbots that reportedly encouraged the behavior.
"That's why it's important to recognize that these tools are not a replacement for a therapist or mental health treatment or even the relationships that kids have with parents or trusted adults," Dr. Buckley said.
Guardrails aren't perfect-and teens know it
David said a friend tested whether AI would share instructions for building a weapon. "There are safeguards on it, obviously, but those safeguards are like, you can trick them. You can get it to tell you how to do things that you should not be able to ask."
That's the challenge. AI changes fast, and research and safety systems struggle to keep pace. Curiosity isn't the problem-unchecked curiosity is.
A quick plan for parents and educators
Start with a simple, ongoing conversation. Aim to understand before you set rules. Keep it open, non-judgmental, and specific.
- Schedule a short check-in each week on tech use, including AI tools and chatbots.
- Ask to see how they use AI in real time-on homework, social interactions, and search.
- Clarify what AI can't do: replace friends, therapists, teachers, or parents.
- Set clear boundaries and follow through with calm, consistent enforcement.
- Loop in schools so home rules and classroom expectations line up.
Questions that open honest dialogue
- How are you using chatbots? Can you show me how you're interacting with them?
- Do you ever go to them for advice or friendship when you're feeling down, scared, or lonely?
- What do you like about them? What feels off or uncomfortable?
- Have you seen them make mistakes or say something untrue or unsafe?
- What would you do if an AI gave you advice that felt wrong-or pressured you?
Set smart family rules (and stick to them)
- App access: Approve AI apps before download. Consider a pause on "companion AI" for minors.
- Use boundaries: No private chats with AI after a set time; device-free bedrooms at night.
- Transparency: Kids agree to show usage logs and chat histories on request.
- School alignment: Follow teacher and district policies for AI on assignments.
- Escalation rule: If AI touches mental health, self-harm, or illegal topics, stop and tell an adult.
Teach kids how to think with AI, not outsource thinking
- Verification habit: "Trust but verify" with at least two credible sources.
- Bias check: Ask, "Who made this tool? What data trained it? Who benefits?"
- Privacy basics: Don't share names, addresses, school, photos, or anything you wouldn't post publicly.
- Stop rule: If an AI gives unsafe guidance, close it and talk to a trusted adult.
Environmental and ethical questions matter
Families like the Abts are asking how much energy AI uses and what it means for the planet. That's a valid conversation for classrooms and dinner tables alike. Encourage kids to ask tough questions about who builds these tools, how they're powered, and what responsibilities companies carry.
What one family is doing now
Lauren's family has firm tech rules: no generative AI or companion AI apps, no TikTok, and no unsupervised YouTube. She says the hardest part is the unknowns. "There's things you know of, and then there's the things that you don't even know are potentially dangerous."
You don't need to ban everything to be safe. You do need visibility, shared language, and clear limits.
Helpful next steps
- Kidsburgh.org for practical guides and local resources on kids and tech.
- AI Learning Path for Secondary School Teachers for classroom policies, supervision strategies, and teen-specific guidance.
AI is here. Curiosity isn't the enemy-silence is. Start the conversation, set the rules, and keep showing up.