Tsinghua University sets a clear framework for AI in education
Tsinghua University has released a comprehensive framework that treats AI as an assistant, not a substitute, in academic work. It draws a firm line against ghostwriting, plagiarism, fabrication, and submitting AI output as one's own - especially for graduate students, who are reminded that AI cannot replace the intellectual labor required for their training.
The document assigns responsibility where it matters: instructors set boundaries, students disclose and verify, and supervisors provide oversight. The goal is practical: integrate AI to improve teaching and learning without compromising integrity.
What's allowed - and what's off-limits
- Red lines: No ghostwriting, plagiarism, or fabrication. No submitting AI-generated text, code, or outputs as original work. No use of sensitive or unauthorized data with AI tools.
- Transparency: All academic use of AI must be properly disclosed.
- Verification: Treat AI outputs as drafts that require multisource verification and critical review. Watch for "hallucinations."
- Green lights: Use AI as an auxiliary tool within course-defined boundaries - for ideation, feedback, explanations, drafting, or structured practice - with human judgment in the loop.
Roles and responsibilities
Instructors: Determine where AI fits based on course goals. Set clear rules, explain why they exist, and supervise any AI-generated teaching materials.
- Publish a course-specific AI policy (what's encouraged, restricted, and prohibited); a minimal sketch follows this list.
- Require disclosure statements on assignments that describe tools used and how outputs were verified.
- Model critical AI use: demonstrate checks for bias, errors, and sources.
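One way to make the course-specific policy concrete is to keep it as structured data and render it into a syllabus section. The sketch below is a hypothetical illustration, not a template from Tsinghua's framework; every field name and example entry is an assumption.

```python
# Hypothetical one-page course AI policy as structured data.
# Field names and example entries are illustrative only,
# not prescribed by Tsinghua's framework.
course_ai_policy = {
    "course": "Example Course 101",
    "encouraged": [
        "Brainstorming and outlining with AI, with disclosure",
        "Asking AI for explanations of lecture concepts",
    ],
    "restricted": [
        "AI-assisted drafting, allowed only with a disclosure statement",
    ],
    "prohibited": [
        "Submitting AI-generated text or code as original work",
        "Entering personal, sensitive, or unauthorized data into AI tools",
    ],
}

def render_policy(policy: dict) -> str:
    """Render the policy dict as a plain-text syllabus section."""
    lines = [f"AI policy for {policy['course']}"]
    for section in ("encouraged", "restricted", "prohibited"):
        lines.append(f"\n{section.capitalize()}:")
        lines.extend(f"  - {item}" for item in policy[section])
    return "\n".join(lines)

print(render_policy(course_ai_policy))
```

Keeping the policy in one structured source makes it easy to publish the same rules in the syllabus, the LMS, and assignment handouts without the versions drifting apart.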
Students: Treat AI as an aid, not a shortcut. You are responsible for the accuracy, originality, and integrity of your work.
- Do not submit AI output as your own. Use it to think better, not to avoid thinking.
- Disclose tools, prompts, and where AI shaped your process. Verify with multiple sources.
- Avoid feeding personal, sensitive, or restricted data into AI tools.
Supervisors: Provide clear guidance on acceptable use and maintain oversight to protect originality. Review disclosure practices and data-handling decisions.
How the guideline was built
Led by faculty at Tsinghua's School of Education, the framework draws on a global review of 70 AI-in-education documents from 25 universities and interviews with more than 100 students and instructors. It builds on campus-wide experience integrating AI across 390+ courses in 10 areas, including AI learning companions and teaching assistants.
Leaders describe the framework as a "living system" that sets firm red lines while highlighting green lights for responsible experimentation. Innovation is encouraged - with structure, transparency, and accountability.
Apply this framework on your campus
- Create a one-page AI policy for every course: purpose, allowed uses, disclosure format, verification steps, red lines, and consequences.
- Add a mandatory "AI use" section to assignment templates: tools used, prompts, what was accepted or edited, and how it was verified (see the sketch after this list).
- Adopt a data-safety checklist: no sensitive or unauthorized data; review tool privacy policies; prefer institution-approved tools.
- Build a verification habit: cite sources, cross-check facts, and use multiple references for claims and code.
- Teach "hallucination" handling: require students to annotate AI outputs with confidence notes and source links.
- Supervise AI-generated teaching materials: run spot checks, keep revision logs, and confirm rights for any datasets used.
- Recognize exemplary use: showcase assignments where AI support is disclosed, verified, and meaningfully integrated.
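The mandatory "AI use" section can also be checked mechanically at submission time. The following is a minimal sketch, assuming a plain-text submission and illustrative field names that are not prescribed by Tsinghua's framework.

```python
# Hypothetical check that a submission contains the required "AI use"
# disclosure fields. Field names are illustrative assumptions.
REQUIRED_FIELDS = [
    "Tools used",          # name of each AI tool consulted
    "Prompts",             # prompts, or a summary of how the tool was queried
    "Accepted or edited",  # what output was kept, changed, or discarded
    "Verification",        # how facts, sources, or code were cross-checked
]

def missing_disclosure_fields(submission_text: str) -> list[str]:
    """Return the disclosure fields absent from a submission."""
    lowered = submission_text.lower()
    return [f for f in REQUIRED_FIELDS if f.lower() not in lowered]

sample = """AI use
Tools used: a chat assistant, for outlining only
Prompts: asked for an essay outline
Accepted or edited: kept two outline points, rewrote all prose myself
"""
print(missing_disclosure_fields(sample))  # -> ['Verification']
```

A check like this only confirms that the fields exist; reviewing what students actually wrote in them remains the instructor's job.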
Why this matters for educators
This approach respects the craft of learning and research. It gives educators a workable structure to improve productivity and feedback while keeping the core work - thinking, analysis, originality - firmly human.