CMS unveils draft AI classroom policy with guardrails, audits, and human oversight
CMS drafts an AI policy with vetting, training, privacy safeguards, and student citation rules. Humans keep final say on placements, discipline, and evals; vote in October.

CMS proposes AI policy: enhance learning, keep humans in charge
Charlotte-Mecklenburg Schools introduced a draft policy to bring artificial intelligence into classrooms with clear guardrails. The board held a first reading and will take public comment before an October vote. The draft recommends an annual review to keep the policy current.
"The policy highlights the importance of the responsible and ethical use of AI within our district as our students and staff learn to work with this emerging technology," Board Vice Chair Dee Rankin said. It sets expectations for privacy, accessibility, and bias mitigation across all tools and use cases.
What's in the draft
- Central review before use: Every AI tool, whether commercial or built by CMS, must be vetted by a new AI Review Committee formed by Superintendent Crystal Hill or her designee.
- Clear evaluation criteria: Approval depends on curriculum fit, data privacy, security and ethics, system compatibility, and bias risks.
- Ongoing oversight: Approved tools face regular audits and performance tracking. The draft does not specify audit frequency.
- Training and transparency: Mandatory training for students, teachers, and staff. CMS will maintain a central repository of all AI systems, user guides, and training materials, and provide resources to help teachers integrate AI. Parents will be notified about AI use and its impact.
- Student use rules: AI may be used only for authorized educational purposes. Students must cite AI when used. Violations can lead to loss of AI access and discipline per the Student Code of Conduct.
- Human judgment required: No final decisions may be made by AI alone, including academic placement, special education eligibility, discipline, employee evaluations, or employment.
- Data protection: No uploading private information to public generative AI without CTO approval. Do not place confidential data in CMS AI systems unless those systems are approved for that purpose. In the event of a privacy incident, the district will promptly notify affected individuals and authorities.
Why this matters for educators
The district is signaling a green light for AI, paired with rules that protect teaching quality, student rights, and staff accountability. This gives schools room to innovate while keeping people (teachers, counselors, and administrators) responsible for final calls.
Expect new approval workflows, stronger data practices, and explicit guidance on student use and citation. Getting ahead now will make adoption smoother once the policy is approved.
What to do now
- Audit your tools: List every AI product in use (or planned). Pause unapproved tools and prepare submissions for the AI Review Committee.
- Map to curriculum: Document how each tool supports standards, instructional goals, and assessment. Note accessibility features and potential bias risks.
- Lock down data: Work with IT to block uploads of private data to public AI services. Use only CMS-approved systems for any student or staff information.
- Set class rules: Write clear guidelines on when AI is allowed, how to cite it, and what counts as misuse. Align these with the Student Code of Conduct.
- Plan PD: Identify staff champions, build short training modules, and schedule ongoing support. Consider role-based learning paths for teachers, specialists, and support staff. For structured options, see curated AI courses by job at Complete AI Training.
- Prepare parent comms: Draft notification templates that explain where AI is used, data practices, and benefits for learning. Include how families can ask questions.
- Define audits: Choose simple metrics such as learning outcomes, student engagement, time saved, and equity checks. Set a review cadence even if the district timeline is still pending.
- Reinforce human oversight: Create sign-off steps for placement, IEP decisions, discipline, and staff evaluations to ensure people make the final decision.
Helpful references
- FERPA guidance from the U.S. Department of Education for privacy and parental rights.
- NIST AI Risk Management Framework for evaluating risk, bias, and oversight practices.