Deepfakes and Phishing Hit Classrooms as 41% of Schools Report AI-Linked Incidents

AI use is rising in schools while policy trails behind: 41% faced AI-linked cyber incidents and nearly 30% saw harmful AI content, including student-made deepfakes. Close the gap with zero trust, formal policies, staff training, and vetted tools.

Published on: Oct 03, 2025

41% of Schools Report AI-Related Cyber Incidents. Policy Is Behind.

A survey of 1,400+ education leaders in the US and UK shows a clear gap: AI is being used across classrooms and faculty work, but formal policy and protections lag behind. 41% of schools reported AI-related cyber incidents like phishing and misinformation. Nearly 30% saw harmful AI content, including student-made deepfakes.

Most institutions allow AI use (86% for students, 91% for faculty), yet many rely on informal guidelines. 90% of leaders are concerned about AI-driven threats, and only one in four feel very confident identifying deepfakes or AI-enabled phishing.

"AI is redefining the future of education, creating extraordinary opportunities for innovation and efficiency," says Darren Guccione, CEO and co-founder of Keeper Security. "But opportunity without security is unsustainable. Schools must adopt a zero-trust, zero-knowledge approach to ensure that sensitive information is safeguarded and that trust in digital learning environments endures."

What this means for your institution

The risk is no longer theoretical. Generative tools can speed up phishing, impersonation, and misinformation at a scale schools have never faced. Student-created deepfakes raise academic integrity and wellbeing concerns.

Policy isn't paperwork. It's how you reduce exposure: what tools are approved, what data can be shared, how AI use is disclosed, and who is accountable when things go wrong.

A clear way forward

Anne Cutler, cybersecurity evangelist at Keeper Security, puts it plainly:

"Artificial Intelligence (AI) is already part of the classroom, but our recent research shows that most schools are relying on informal guidelines rather than formal policies. That leaves both students and faculty uncertain about how AI can safely be used to enhance learning and where it could create unintended risks. What we found is that the absence of policy is less about reluctance and more about being in catch-up mode. Schools are embracing AI use, but governance hasn't kept pace. Policies provide a necessary framework that balances innovation with accountability. That means setting expectations for how AI can support learning, ensuring sensitive information such as student records or intellectual property cannot be shared with external platforms and mandating transparency about when and how AI is used in coursework or research. Taken together, these steps preserve academic integrity and protect sensitive data."

Action checklist for school leaders

  • Publish a formal AI use policy for students, faculty, and staff. Include approved tools, disclosure rules, assessment integrity, and consequences.
  • Protect data: prohibit sharing PII, student records, or IP with external AI platforms; require data-minimization and anonymization.
  • Adopt zero-trust: enforce MFA, least-privilege access, device compliance, and session timeouts. Use password managers with end-to-end encryption and zero-knowledge architecture. (A minimal sketch of these checks follows this list.)
  • Build detection skills: train staff to identify AI-driven phishing, deepfakes, and synthetic media. Run simulated phishing and deepfake awareness drills.
  • Secure the classroom: set expectations for AI assistance on assignments and research. Require students to disclose AI use and cite outputs.
  • Vendor due diligence: review AI tool data handling, model training practices, storage regions, and deletion rights. Update data protection impact assessments (DPIAs) and contracts.
  • Content integrity: use provenance checks where possible, maintain audit trails, and review tools that offer content authenticity signals.
  • Incident response: create playbooks for phishing, account compromise, and malicious deepfakes targeting staff or students. Define reporting channels.
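
To make the zero-trust item above concrete, here is a minimal sketch in Python of how a school portal might evaluate every request against MFA status, device compliance, a session timeout, and role-based least privilege. The `Session` model, role names, resource labels, and the 15-minute timeout are illustrative assumptions for this sketch, not part of the survey or any specific product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative zero-trust checks for a hypothetical school portal.
# Roles, resources, and the timeout value are assumptions for this sketch.

SESSION_TIMEOUT = timedelta(minutes=15)

# Least-privilege map: each role can reach only what it needs.
ROLE_PERMISSIONS = {
    "student": {"assignments", "grades_own"},
    "faculty": {"assignments", "grades_class", "course_materials"},
    "registrar": {"student_records"},
}

@dataclass
class Session:
    user_id: str
    role: str
    mfa_verified: bool
    device_compliant: bool
    last_activity: datetime

def authorize(session: Session, resource: str, now: datetime | None = None) -> bool:
    """Allow access only if MFA, device posture, timeout, and role all check out."""
    now = now or datetime.now(timezone.utc)

    if not session.mfa_verified:
        return False  # MFA is required on every session, not just at login
    if not session.device_compliant:
        return False  # unmanaged or non-compliant device
    if now - session.last_activity > SESSION_TIMEOUT:
        return False  # stale session: force re-authentication
    if resource not in ROLE_PERMISSIONS.get(session.role, set()):
        return False  # least privilege: this role has no claim on the resource

    session.last_activity = now  # refresh the activity timestamp
    return True

if __name__ == "__main__":
    s = Session("staff-042", "faculty", mfa_verified=True,
                device_compliant=True,
                last_activity=datetime.now(timezone.utc))
    print(authorize(s, "grades_class"))     # True
    print(authorize(s, "student_records"))  # False: not permitted for faculty
```

In practice these checks live in your identity provider and device management platform rather than in application code; the point is that every request is evaluated and nothing is trusted by default.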

30-60-90 day plan

  • First 30 days: Form a cross-functional AI governance group (IT, curriculum, safeguarding, legal). Freeze unvetted tools. Issue interim guidance on disclosure and data sharing.
  • Days 31-60: Approve a baseline AI toolset. Roll out MFA and password manager adoption. Launch faculty and staff training on AI threats and safe use.
  • Days 61-90: Finalize your AI policy. Redesign assessments so they cannot be completed by pasting the prompt into an AI tool. Test incident response with tabletop exercises.

Key stats at a glance

  • 41% of schools have experienced AI-related cyber incidents.
  • ~30% report harmful AI content, including student-created deepfakes.
  • 86% allow student AI use; 91% allow faculty use.
  • 90% express concern about AI threats; only 25% feel very confident spotting them.

Helpful resources

Upskill your faculty

If your staff needs fast, practical training on safe and effective AI use, explore curated options by role.

Browse AI courses by job at Complete AI Training

Bottom line: AI is already in your classrooms. Close the policy gap, raise staff capability, and lock down identity and data now, before the next incident forces your hand.