AI Use in Schools Soars as Data Breaches, Bullying and Deepfakes Rise

AI is widespread in schools (86% of students and 85% of teachers), with higher use tied to breaches, bullying, and deepfakes. Districts are urged to add guardrails and training.

Categorized in: AI News, Education
Published on: Oct 11, 2025

AI Use Is Up in Schools. So Are the Risks

New survey data from the Center for Democracy & Technology shows AI usage has become mainstream in secondary education. During the 2024-25 school year, 86% of students and 85% of teachers used AI. Higher usage correlated with greater student risk, including data breaches, bullying and harassment, tool failures, and troubling student-AI interactions.

CDT and partner groups urged the U.S. Department of Education to apply its July guidance on responsible AI as it awards grants and steers research. Their Oct. 7 letter to Education Secretary Linda McMahon highlights the gap between AI's growth and school readiness. Concern is also rising over how the department will execute AI work following the Office of Educational Technology's closure earlier this year.

What the Data Says

Half of students said AI in class makes them feel less connected to their teacher. Another 38% said it's easier to talk to AI than to their parents. Only 11% of teachers reported being trained on how to respond if a student's AI use could harm their well-being.

AI companions are becoming a fixture for some teens. During 2024-25, 42% of students said they or friends used AI for mental health support, as a friend or companion, or to escape real life. Nineteen percent reported using AI for a romantic relationship. Youth mental health groups warn these tools can intensify risks for students with depression, anxiety, ADHD or bipolar disorder.

Cybersecurity risk also grows with teacher reliance on AI for school work. Schools with more AI use were more likely to report large-scale data breaches and ransomware attacks. Deepfakes are showing up, too: 36% of students reported a deepfake incident at school last year.

Despite that, fewer than a quarter of teachers said their schools have policies addressing deepfakes, including sexualized content. In May, President Donald Trump signed the Take It Down Act, which criminalizes publishing nonconsensual intimate images, including AI-generated deepfakes, and requires platforms to remove them. Schools need policies and response plans that reflect the law and state requirements.

CDT ran online surveys from June to August 2025 with 1,030 students (grades 9-12), 806 teachers (grades 6-12), and 1,018 parents (grades 6-12). The findings point to a simple truth: more AI without clear guardrails increases harm.

What Districts Should Do Now

  • Adopt clear guardrails. Update acceptable use, student code of conduct, and staff policies to cover AI, including impersonation, harassment, cheating and misuse.
  • Align with federal guidance. Use the U.S. Department of Education's AI guidance to shape procurement, classroom use and oversight.
  • Address AI companions. Restrict or block AI companion apps on school networks and devices for minors. Teach students that these tools are not human and encourage conversations with a trusted adult.
  • Train staff. Provide annual professional development (PD) on student well-being, AI red flags, and escalation steps. Include counselors, nurses and school resource officers in response protocols.
  • Teach digital literacy. Add modules on deepfakes, parasocial relationships with AI, and safe help-seeking. Practice critical evaluation of outputs in core classes.
  • Strengthen data protection. Minimize data shared with AI tools, disable training on student data, and require contractual privacy protections aligned with FERPA and COPPA. Run vendor risk assessments and log audits.
  • Build incident response. Create playbooks for deepfakes, sexualized content, and breaches. Include reporting channels, parent communications and law enforcement contacts.
  • Set age-appropriate access. Gate advanced tools by grade and purpose. Default to teacher-mediated use for younger students.
  • Engage families. Offer guidance on safe AI use at home and provide opt-in/opt-out choices when tools process student data.
  • Measure and iterate. Track incidents, survey students and staff, and review policies each semester.
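For districts that run their own DNS resolver, the companion-app restriction above can be sketched as a simple blocklist. This is a minimal dnsmasq example, assuming a dnsmasq-based filter; the domain names are illustrative placeholders, not a vetted list of AI companion services, and most districts will instead use category controls in their existing web filter.

```
# Hypothetical dnsmasq blocklist entries (domains are placeholders,
# not real AI companion services).
# address=/domain/0.0.0.0 answers every lookup under that domain
# with 0.0.0.0, so the app cannot connect on managed networks.
address=/companion-app.example/0.0.0.0
address=/ai-chat.example/0.0.0.0

# Apply by placing this file in /etc/dnsmasq.d/ and restarting dnsmasq.
```

DNS blocking is easy to bypass on unmanaged devices, so pair it with device management policies and the student conversations described above rather than relying on it alone.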

Procurement Checklist for AI Tools

  • Written privacy commitments; no training on student data; data deletion timelines.
  • Age gating, content filters and controls that block sexualized outputs and companion features.
  • Bias and safety evaluations; documented hallucination rates and mitigation.
  • Human-in-the-loop controls; clear audit logs; admin dashboards.
  • Exportability of data and prompts; easy "off switch" for incidents.
  • Service-level guarantees, security attestations, and breach notification timelines.

Deepfake Policy Essentials

  • Define prohibited conduct (impersonation, sexualized content, election interference, threats).
  • Set reporting channels for students and staff; protect reporters from retaliation.
  • Outline investigation steps, evidence preservation and escalation to law enforcement.
  • Educate students on legal consequences, including the Take It Down Act.

30-60-90 Day Plan

  • Next 30 days: Name an AI safety lead, freeze high-risk tools, publish interim guidance, and launch a staff briefing.
  • Next 60 days: Run vendor privacy reviews, add deepfake response steps to your crisis plan, and start student lessons on AI literacy.
  • Next 90 days: Finalize policies, sign data protection addenda, conduct a tabletop exercise, and publish a family-facing AI FAQ.

Where to Invest in Training

Given that only 11% of teachers received training on responding to high-risk student AI use, targeted professional development is overdue. Focus on student safety, privacy-by-design, and classroom practices that keep a human at the center of learning. If you need structured options, review professional AI course paths by role.

