Virginia Advances Bill to Require AI, Scam and Misinformation Lessons in Schools

Virginia's HB 171 would update school internet safety lessons to cover scams, misinformation, and AI. It passed the House with bipartisan support and heads to the Senate.

February 13, 2026

House Bill 171 has cleared the Virginia House of Delegates with bipartisan support and is now in the Senate. The proposal updates the state's internet safety curriculum to directly cover online scams, misinformation, and artificial intelligence, areas students encounter daily.

Del. Alex Askew said the measure builds on prior efforts and addresses newer topics such as addictive social feeds, privacy settings, data sharing, and how to report harassment or threats. "The current rules cover basics, but our children encounter new dangers, like addictive algorithms, cyberbullying, AI-generated harms," he told the House Education Committee on Jan. 12.

What HB 171 adds

  • Explicit instruction on online scams, misinformation, and artificial intelligence.
  • Practical guidance on privacy controls, data sharing, and digital footprints.
  • Awareness of algorithmic feeds and how design choices shape attention and behavior.
  • Clear steps for reporting harassment, threats, or suspicious activity.

Parent and educator groups backed the bill. "I know it's hard to stay ahead of kids, but I think it's important that we try," said Virginia PTA president-elect Lauren Klute, adding that parents would benefit too. Meg Gruber of the Virginia Education Association noted students will carry these skills home: "They're going to teach their elders."

Why this is timely

AI tools are common among teens, and many use chatbots daily. Researchers have flagged potential upsides (better study support, reduced loneliness) as well as risks (inaccurate information, weak empathy, reinforcement of harmful behavior), while noting that the evidence base is still developing.

Recent lawsuits filed by families against Google and Character.AI, settled in January, kept student mental health and AI safeguards in the spotlight. The message for schools is clear: teach discernment, set boundaries, and give students safe ways to ask for help.

What this means for schools

  • Refresh digital citizenship units to include scams, AI-generated media, and content credibility checks.
  • Integrate across subjects: ELA for source evaluation, science for data ethics, social studies for media literacy, technology for AI use cases and limits.
  • Stand up clear reporting channels for bullying, threats, and impersonation, and teach students how to use them.
  • Provide teacher PD on AI basics, student privacy, and age-appropriate use.
  • Engage families with short guides and workshops so home habits support school goals.

Practical lesson ideas you can run this term

  • Scam spotting clinic: Students analyze real phishing emails, deepfake voice samples, and fake giveaways; build a class checklist for "trust signals" (a code sketch for this activity follows the list).
  • Misinformation lab: Compare headlines, reverse image search, and trace original sources; have students rate confidence and explain their reasoning.
  • AI chatbot critique: Use a school-approved chatbot to answer a research prompt; students fact-check outputs and label errors, omissions, and bias.
  • Privacy settings workshop: Students audit settings on a mock profile; discuss data brokers, location settings, and default opt-ins.
  • Algorithm awareness: Map how "watch next" or "for you" feeds work (see the toy simulation after the list); design personal guardrails (time limits, topic filters, unsubscribe habits).
  • Report-and-support drill: Walk through how to document harassment, save evidence, report, and seek adult help.
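
For a technology class that wants to make the "trust signals" checklist concrete, the sketch below is a minimal, hypothetical Python demo: it scans an email's text for a few red flags students might put on their list (pressure language, credential requests, generic greetings, mismatched links). The keyword lists and the markdown-style link format are assumptions invented for the exercise, not a real phishing detector.

```python
import re

# Hypothetical classroom demo: flag common phishing red flags in an email's
# raw text. The keyword lists and rules below are illustrative only; the goal
# is to turn the class checklist into something students can run and extend.

URGENCY_WORDS = {"urgent", "immediately", "act now", "suspended", "verify now"}
CREDENTIAL_ASKS = {"password", "ssn", "social security", "bank account"}

def flag_red_flags(email_text: str) -> list[str]:
    """Return a list of human-readable red flags found in the email."""
    text = email_text.lower()
    flags = []
    if any(phrase in text for phrase in URGENCY_WORDS):
        flags.append("pressure language (urgency or threats)")
    if any(phrase in text for phrase in CREDENTIAL_ASKS):
        flags.append("asks for credentials or personal data")
    if re.search(r"dear (customer|user|member)", text):
        flags.append("generic greeting instead of your name")
    # Link text that claims one domain while the target points somewhere else.
    for display, target in re.findall(r"\[(\S+)\]\((https?://\S+)\)", email_text):
        if display.split("/")[0].lower() not in target.lower():
            flags.append(f"link text '{display}' does not match target {target}")
    return flags

sample = (
    "Dear customer, your account will be suspended immediately. "
    "Verify now at [yourbank.com](https://yourbank.security-check.example) "
    "and confirm your password."
)
for flag in flag_red_flags(sample):
    print("RED FLAG:", flag)
```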
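
For the algorithm awareness activity, a toy simulation can show rather than tell how a feed narrows. The sketch below assumes a deliberately simple engagement-driven design: topic weights start equal, each click doubles a topic's weight, and recommendations are sampled in proportion to the weights. The topic list and doubling rule are made up for the demo; real feeds are far more complex, but the narrowing effect is the point.

```python
import random

# Toy "for you" feed: clicks boost a topic's weight, and recommendations are
# sampled proportional to the weights. A few clicks quickly skew the feed.

random.seed(42)  # reproducible classroom run
topics = ["sports", "music", "science", "gaming", "news"]
weights = {t: 1.0 for t in topics}  # start with no preference

def recommend(n: int = 5) -> list[str]:
    """Sample n recommendations proportional to current topic weights."""
    return random.choices(topics, weights=[weights[t] for t in topics], k=n)

def click(topic: str) -> None:
    """Simulate engagement: clicking a topic doubles its future weight."""
    weights[topic] *= 2.0

print("before any clicks:", recommend())
for _ in range(4):  # the student clicks gaming four times
    click("gaming")
print("after 4 gaming clicks:", recommend())
print("weights:", weights)
```

Students can then invert the exercise: which guardrails (time limits, deliberately clicking varied topics, unsubscribing) push the weights back toward even?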

Implementation tips for leaders

  • Form a cross-grade team (tech, ELA, counseling, library) to co-own content and pacing.
  • Vet AI tools for data collection, age gates, and export controls; prefer school-managed accounts.
  • Adopt "private by default" settings; teach students to opt out of data sharing where possible.
  • Publish short, plain-language guidance for teachers and families on approved tools and dos/don'ts.
  • Schedule two checkpoints per year to update examples and reflect on incidents or near-misses.

How to measure progress

  • Pre/post checks on scam detection and source evaluation skills (a worked gain calculation follows this list).
  • Trends in digital conduct reports, resolution time, and student follow-through.
  • Family feedback on confidence with privacy settings and spotting AI-altered media.
  • Student portfolios: claim-evidence reasoning, chatbot critiques, and privacy audits.
  • Teacher PD completion and classroom adoption rates of core practices.
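
For the pre/post checks, one common summary statistic is Hake's normalized gain, g = (post - pre) / (max - pre), which measures how much of each student's available headroom was actually gained. The sketch below walks through the arithmetic; the quiz and all scores are invented for illustration.

```python
# Worked example: Hake's normalized gain g = (post - pre) / (max - pre).
# The scores below are invented for illustration.

MAX_SCORE = 100
pre_scores  = [40, 55, 60, 70, 35]   # scam-detection quiz, start of term
post_scores = [70, 75, 80, 85, 60]   # same quiz, end of term

gains = [
    (post - pre) / (MAX_SCORE - pre)
    for pre, post in zip(pre_scores, post_scores)
]
for pre, post, g in zip(pre_scores, post_scores, gains):
    print(f"pre={pre:3d} post={post:3d} normalized gain={g:.2f}")
print(f"class average normalized gain: {sum(gains) / len(gains):.2f}")
```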

What educators are saying

Supporters see a multiplier effect. "As children learn this, they're going to teach their families," Gruber said. "You can't tell if a photo is real or not; this is critical for our children to learn, and for their parents and grandparents at home."

Bottom line: HB 171 pushes schools to teach the internet as it is, not as it was. Start with concrete skills, keep examples current, and bring families along. That's how safety education sticks.

