Tsinghua University Issues First Campus-Wide Guidelines on AI Use in Education
Tsinghua University has released its first comprehensive framework for using AI in teaching and academic research. It sets a clear bar for how AI fits into courses, supervision, and scholarly work across campus.
The document takes a layered approach to AI in education and is organized into three parts: General Provisions, Teaching and Learning, and Theses, Dissertations, and Practical Achievements. It focuses on practical rules that educators and students can apply right away.
What's in the framework
The guidelines establish multi-level norms for faculty, students, and researchers as AI becomes part of learning and idea generation. AI is treated as an aid, not a substitute for teaching, thinking, or original work.
General Provisions: Five Core Principles
- Responsibility: AI supports learning; teachers and students lead the process.
- Compliance and integrity: Disclose AI use. Misuse is prohibited.
- Data security: Protect personal, research, and institutional data.
- Prudence and critical thinking: Verify outputs with multiple sources to avoid AI-induced errors and complacency.
- Fairness and inclusiveness: Address algorithmic bias and avoid widening digital divides.
Teaching and Learning: What Changes
- Instructors define how AI may be used in each course and explain the rules at the start of the semester.
- Faculty are responsible for any AI-generated materials they incorporate.
- Students may use AI as a learning aid, but copying or mechanically rephrasing AI output for assignments is banned.
Theses, Dissertations, and Practical Achievements
- AI cannot replace academic training, independent thinking, or original research.
- Ghostwriting, plagiarism, fabrication, and related misconduct are explicitly prohibited.
- Advisors must guide appropriate AI use and maintain oversight to ensure originality and academic integrity.
How educators can put this to work now
- Publish an AI policy in every syllabus: allowed tools, permitted tasks, disclosure format, and consequences.
- Add a disclosure line to assignments and thesis chapters (e.g., tools used, prompts, where and how outputs were applied).
- Design assessments that require process evidence: drafts, notes, citations, and reflections on AI's role.
- Use multi-source fact checks for references, code, and data summaries produced with AI.
- Set data boundaries: do not upload unpublished research, student records, or confidential materials to external tools.
- Provide alternatives and guidance to reduce access gaps and monitor for bias in AI-supported activities.
Why this matters
Generative tools are now common in classrooms and labs. This framework shows a practical path: keep humans in charge, disclose use, protect data, and hold the line on originality and integrity.
For institutional context, see Tsinghua University.
If your team needs structured upskilling on safe, effective classroom use of AI, explore role-based options at Complete AI Training.