One in five high schoolers report AI romance in their circles as classroom use rises amid growing deepfake and data breach concerns

AI 'relationships' are rising in schools, bringing more privacy risks, more deepfakes, and weaker teacher-student connection. Districts should cut data exposure, set guardrails, and train staff.

Categorized in: AI News, Education
Published on: Oct 09, 2025

AI "relationships" are showing up in high school. Here's what educators need to do now

New survey data from the Center for Democracy and Technology (CDT) points to a clear trend: romantic and companion-style use of AI is already part of student life. Nearly 1 in 5 high schoolers say they or someone they know has had a romantic relationship with AI. And 42% say they or someone they know has used AI for companionship.

AI use is now routine for school communities. In the last school year, 86% of students, 85% of educators, and 75% of parents used AI in some form. The big takeaway: the more AI shows up in school systems and workflows, the more risk signals appear.

Key numbers at a glance

  • "High AI use" schools correlate with more reports of students who view AI as a friend or romantic partner.
  • Data risk grows with adoption: 28% of teachers who use AI for many tasks say their school had a large-scale data breach, vs. 18% among low-use peers.
  • 31% of students who chat with AI for personal reasons did so on school-provided devices or software.
  • Only 11% of teachers received training on how to respond if a student's AI use is harming their wellbeing.

Why higher AI use often brings higher risk

More AI in school operations means more data in motion. More inputs, more outputs, more vendors, more exposure. CDT's data suggests that as schools expand AI use, they see more breaches, more product failures in class, and more strain on community trust.

AI monitoring on school-issued devices can also trigger false positives. Those false alarms have, in some cases, escalated to serious consequences for students. Meanwhile, students with personal devices can keep more of their lives off the school radar, creating a fairness gap.

Deepfakes are now a tool for harassment. Manipulated images and videos are showing up as a new vector for bullying and sexual misconduct. This isn't a future risk; it's already here.

Student wellbeing and connection

In schools with heavier AI use, students report more personal AI chats for mental health support, companionship, escape from reality, and romantic interaction. Many of those conversations are happening on school hardware and software, which adds privacy and duty-of-care concerns.

There's a classroom climate cost, too. Educators who use AI frequently report instructional benefits (time saved, better differentiation), yet students in these settings are more likely to say they feel less connected to their teachers. That's a signal to recalibrate how AI shows up in daily instruction.

Action plan for districts and school leaders

  • Map your AI footprint: list every AI-involved tool, feature, and workflow (instruction, grading, monitoring, comms, security). Cut what isn't essential.
  • Minimize data: default to the least data necessary. Turn off logging where possible. Avoid feeding student PII into cloud models (a minimal redaction sketch follows this list).
  • Vendor due diligence: require clear data-use terms, model training disclosures, retention limits, and rapid breach notification.
  • Create an AI incident playbook: who to contact, how to preserve evidence, how to support targets of deepfakes or sextortion, and how to communicate with families.
  • Rethink device monitoring: narrow the scope, review alert thresholds, and add human review before escalation. Track false positive rates.
  • Equity check: if monitoring is required, provide pathways for privacy-respecting work (e.g., local offline apps, on-device processing) to reduce surveillance disparities.
  • Classroom guardrails: define acceptable AI use, what must be human-led, and when students should opt out of AI features.
  • Wellbeing protocols: train staff to spot AI-related distress (isolation, obsessive chatting, risky disclosures) and how to refer to counselors.
  • Teach AI literacy beyond "how to prompt": consent, manipulation, parasocial bonds, deepfake detection, data trails, and reporting harm.
  • Protect teacher-student connection: commit to daily human touchpoints (check-ins, feedback, conferences) that AI will not replace.
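
To make the data-minimization item concrete, here is a minimal sketch of scrubbing obvious student identifiers before a prompt leaves district systems. The patterns and field formats below are illustrative assumptions, not a complete redaction solution; real deployments need broader coverage (names, addresses, free-text details) and review by your privacy team.

```python
import re

# Illustrative patterns only; real redaction needs far broader coverage
# (names, addresses, free-text identifiers) plus human review.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "STUDENT_ID": re.compile(r"\bID[- ]?\d{5,}\b", re.IGNORECASE),
}

def redact_student_pii(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text
    is sent to a cloud model or other outside service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Hypothetical prompt a staff member might otherwise paste into a chatbot.
prompt = (
    "Summarize the IEP notes for Jordan (ID-4482917), "
    "parent contact j.rivera@example.com, 555-014-2276."
)
print(redact_student_pii(prompt))
# Summarize the IEP notes for Jordan ([STUDENT_ID REDACTED]),
# parent contact [EMAIL REDACTED], [PHONE REDACTED].
```

Note that the student's first name still slips through, which is exactly why pattern-based scrubbing is a floor, not a substitute for keeping sensitive records out of third-party tools altogether.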

How to talk to students about AI "companions"

Be direct: AI is a tool that mimics care; it doesn't have needs, boundaries, or accountability. It predicts words; it doesn't build trust. That difference matters when conversations turn intimate.

Cover three points in advisory or health class: consent (what it means with non-humans), privacy (logs and data persistence), and manipulation (models optimize for engagement, not wellbeing). Include what to do if an AI interaction feels exploitative or unsafe, and where to get human help.

Minimum safeguards checklist

  • Parental and student notices for any AI that touches student data.
  • Age-appropriate defaults and content filters on all AI tools.
  • Clear prohibition on uploading sensitive media of peers; fast takedown process for deepfakes.
  • Staff training on AI-related harm (annual), with midyear refreshers.
  • Privacy reviews before piloting new AI features; sunset dates for pilots without review.
  • Regular audits of monitoring alerts, false positives, and outcomes (see the tracking sketch after this checklist).
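
To make the audit item concrete, here is a minimal sketch of tracking false-positive rates for monitoring alerts by category. The record fields ("category", "confirmed") are hypothetical stand-ins; substitute whatever your monitoring vendor's export actually provides, and have reviewers log whether each alert reflected a real concern.

```python
from collections import defaultdict

# Hypothetical alert records; real data would come from the monitoring
# vendor's export plus a human reviewer's confirmed/not-confirmed call.
alerts = [
    {"category": "self-harm", "confirmed": False},
    {"category": "self-harm", "confirmed": True},
    {"category": "violence", "confirmed": False},
    {"category": "violence", "confirmed": False},
    {"category": "explicit-content", "confirmed": True},
]

def false_positive_rates(records):
    """Share of alerts per category that reviewers did NOT confirm as real."""
    totals, false_positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["category"]] += 1
        if not r["confirmed"]:
            false_positives[r["category"]] += 1
    return {c: false_positives[c] / totals[c] for c in totals}

for category, rate in false_positive_rates(alerts).items():
    print(f"{category}: {rate:.0%} false positives")
# self-harm: 50% false positives
# violence: 100% false positives
# explicit-content: 0% false positives
```

Reviewing these rates each semester gives leaders a factual basis for narrowing alert scope and adding human review before any escalation.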

Professional development and guidance

Start with CDT's source research and relevant federal guidance.

If you need practical training for staff on safe, effective classroom use, see curated options by role: AI courses by job.

The bottom line

AI is already part of student relationships and school operations. Treat it like any high-impact technology: reduce data exposure, set clear guardrails, invest in staff training, and protect the human connection that keeps students safe and engaged.