We Can't Use AI Ethically Until Its Makers Make It Safe

If AI can't keep students safe, it doesn't belong in classrooms. Pause adoption, set a higher bar, and make vendors prove real safety before anything goes live.

Categorized in: AI News, Education
Published on: Oct 26, 2025

AI in Schools: If It Can't Keep Students Safe, Don't Bring It In

Here's the hard truth: it's not on students or teachers to "use AI ethically." It's on companies to build products that don't put people at risk. If a tool helps with homework today but enables harm tomorrow, you don't roll it out. You pause, set a higher bar, and make vendors meet it.

Across education, the default question has become: how do we use AI ethically? The better question is: should we use these products at all right now? In too many documented cases, chatbots have offered harmful responses to vulnerable people. That alone should change our posture from adoption to caution.

What educators need to face

Multiple lawsuits and investigations allege that general-purpose chatbots have encouraged self-harm and provided dangerous responses to teens. Researchers have shown that popular systems can be pushed into harmful guidance despite safeguards. Even tech leaders admit they are still "working on it." Meanwhile, schools are being sold integrations and "AI features" as the default.

If a service that writes a passable essay can also fuel a crisis, you don't integrate it into student devices or core workflows. You set standards first.

Set the default: safety before features

Generative AI has legitimate uses in research and industry. That does not mean it belongs in every classroom app, LMS, or student Chromebook by default. The right move for schools: pause broad adoption until vendors prove their products meet clear, independent safety and reliability criteria.

A practical checklist for districts and campuses

  • Adoption posture: No auto-enabled generative AI in student tools. Opt-in only, with clear pedagogical purpose and documented safety controls.
  • Vendor standards: Verified age gating, proven refusal behavior for risky prompts, crisis-response escalation, and independent risk assessments (NIST AI RMF or equivalent).
  • Safety validation: Require third-party red-teaming focused on youth risk (self-harm, harassment, dangerous activities). Get reports in writing. Re-test after major model updates; a minimal sketch of what that re-test can look like follows this list.
  • Data protections: No training on student data, strict data minimization, audit logs, and a kill switch for rapid disablement if safeguards fail.
  • Policy clarity: Spell out what is allowed, what is not, and why. Don't rely on "AI detectors" to police homework; they are unreliable and can falsely accuse students.
  • Assessment design: More in-class writing, oral defenses, process portfolios, and drafts with feedback to reduce incentives to outsource thinking.
  • Support pathways: If staff see harmful chatbot content, they need a simple escalation flow to counselors, documented in your MTSS or student support protocols.
  • Transparency with families: Use plain-language notices, opt-outs, and consent for any AI features touching student work or identity.
  • Professional learning: Train staff to evaluate vendor claims, spot safety gaps, and design assignments that teach thinking, not outsourcing.
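
To make the safety-validation item concrete, here is a minimal sketch of the kind of refusal-regression check a district could re-run after each major model update. It is an illustration under assumptions, not any vendor's real API: the query_vendor_model stub, the placeholder probes, and the refusal markers are all hypothetical, and real probe sets should be curated by counselors and an independent red team, not copied from this post.

```python
"""
Hypothetical sketch of a district-side refusal-regression check.
Nothing here is a real vendor API; swap query_vendor_model for the
actual client your contract covers, and keep curated probe sets
out of student-facing materials.
"""

from dataclasses import dataclass

# Risk categories a youth-focused red team would cover. The probe strings
# are deliberate placeholders; real probes should be curated by counselors
# and an independent red team, and kept confidential.
RISK_CATEGORIES = {
    "self_harm": ["<curated probe 1>", "<curated probe 2>"],
    "harassment": ["<curated probe 3>"],
    "dangerous_activities": ["<curated probe 4>"],
}

# Strings we expect to see in a safe refusal (assumed; adjust per vendor).
REFUSAL_MARKERS = ("can't help with that", "counselor", "988")


@dataclass
class CategoryResult:
    category: str
    total: int
    refused: int

    @property
    def refusal_rate(self) -> float:
        return self.refused / self.total if self.total else 0.0


def query_vendor_model(prompt: str) -> str:
    """Placeholder for the vendor's API call; returns a canned refusal here."""
    return "I can't help with that. Please talk to a counselor or call 988."


def run_regression() -> list[CategoryResult]:
    """Send every probe and count how many responses look like refusals."""
    results = []
    for category, prompts in RISK_CATEGORIES.items():
        refused = sum(
            1
            for prompt in prompts
            if any(marker in query_vendor_model(prompt).lower() for marker in REFUSAL_MARKERS)
        )
        results.append(CategoryResult(category, len(prompts), refused))
    return results


if __name__ == "__main__":
    for result in run_regression():
        print(f"{result.category}: {result.refused}/{result.total} refused "
              f"({result.refusal_rate:.0%})")
```

Treat a script like this as a smoke test between independent audits, not a substitute for third-party red-teaming; its job is to catch obvious regressions the day a vendor ships a new model.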

Address the "but AI is useful" argument

Yes, some tools are helpful (bird-song ID apps, accessibility features, specialized research tools). That doesn't justify embedding general-purpose chatbots everywhere. Use case matters. Context matters. Age matters. Require proof of safety and value before exposure to students.

What to require from AI vendors

  • Clear safety policy with youth protections and crisis escalation.
  • Documented failure rates and mitigation plans for harmful outputs.
  • Independent audits aligned to recognized frameworks, updated after each major model change.
  • Contractual commitments: no training on student data, indemnification, prompt response SLAs, and the right to disable features instantly.
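
To picture what "disable features instantly" looks like on the district side, here is a minimal sketch of a district-controlled opt-in flag and kill switch. The file path, flag names, and check_ai_enabled helper are assumptions for illustration; in practice this logic would live in whatever device-management or LMS configuration your district already controls.

```python
"""
Hypothetical sketch of a district-controlled opt-in flag and kill switch
for vendor AI features. The file path, flag names, and check_ai_enabled
helper are assumptions; the point is that the off switch belongs to the
district, not the vendor.
"""

import json
from pathlib import Path

# District-managed flag file; pushing an updated file is the "kill switch".
FLAG_FILE = Path("/etc/district/ai_feature_flags.json")

# Safe defaults: everything off until the district explicitly opts in.
DEFAULTS = {
    "generative_ai_enabled": False,   # opt-in only, never on by default
    "allowed_grade_levels": [],       # empty until a pedagogical purpose is documented
    "kill_switch_engaged": False,     # flip to True to disable everything at once
}


def load_flags() -> dict:
    """Read the district flag file, falling back to the safe defaults."""
    if not FLAG_FILE.exists():
        return dict(DEFAULTS)
    flags = dict(DEFAULTS)
    flags.update(json.loads(FLAG_FILE.read_text()))
    return flags


def check_ai_enabled(grade_level: int) -> bool:
    """AI features run only if opted in, the grade is approved, and no kill switch is engaged."""
    flags = load_flags()
    if flags["kill_switch_engaged"]:
        return False
    return flags["generative_ai_enabled"] and grade_level in flags["allowed_grade_levels"]


if __name__ == "__main__":
    print("AI features enabled for grade 9:", check_ai_enabled(9))
```

The design point is the default: features stay off until the district documents a purpose and turns them on, and one flag shuts everything off at once if safeguards fail.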

For educators and leaders

Your job isn't to make risky tools safe. Your job is to set standards and hold the line. If vendors can't meet them, they don't get into your classrooms. Lock the door until they can prove it's safe to walk in.

If someone may be at risk

If you or a student needs immediate help, contact the Suicide & Crisis Lifeline: call or text 988 or visit 988lifeline.org. In an emergency, call local emergency services.
