AI in Singapore Universities: Few Cheating Cases but Heightened Vigilance and Calls for Smarter Assessments
Singapore universities report few cases of AI cheating but remain vigilant after recent incidents. They encourage responsible AI use while adapting assessments to uphold academic integrity.

Is AI Cheating on the Rise in Singapore Universities?
Universities in Singapore report that cases of students using generative AI tools to cheat remain low. After a recent incident at NTU in which three students received zero marks for AI-generated assignments, academic staff have become more alert. While some use of AI is generally permitted, educators emphasize the importance of maintaining academic integrity and adapting assessments accordingly.
Current AI Policies Across Universities
All six public universities in Singapore permit the use of generative AI tools, with rules varying by module and coursework. Students must declare when and how they use AI to ensure transparency. Over the past three years, Singapore Management University (SMU) and Singapore University of Technology and Design (SUTD) have recorded only a handful of AI-related misconduct cases, mostly involving plagiarism. Singapore University of Social Sciences (SUSS) has recorded a slight increase, likely owing to heightened detection efforts.
Other institutions, including NUS, SIT, and NTU, have not publicly shared detailed data on AI misuse but acknowledge AI’s permanence in education. They are exploring ways to integrate AI tools meaningfully while preserving academic standards.
How AI Use Is Managed in Coursework
Faculty have flexibility in deciding AI usage within university guidelines. For example:
- NUS: AI is allowed for take-home assignments if properly cited; assessments are designed to minimize AI overdependence.
- SMU: Instructors specify permitted AI tools and guide students on appropriate usage, mainly for brainstorming and research.
- SIT: Encourages AI use in advanced courses such as coding but restricts it in foundational modules.
- SUTD: Integrates AI into design thinking to promote critical use as a tool or partner rather than a crutch.
The focus across universities is on ensuring students produce original and credible work, even when AI assists them.
The Student Perspective on AI Usage
Students acknowledge widespread AI use but often distinguish between appropriate and inappropriate applications. Many use AI for brainstorming, research, or grammar checks rather than outright content generation. Some view AI as a “smart study buddy” that clarifies concepts, while others rely on it for efficiency in less critical modules.
However, several students express caution, emphasizing self-discipline and learning from mistakes. They recognize that over-reliance on AI could undermine their academic growth and take pride in producing authentic work.
Calls for More Creative and Transparent Assessments
Educators suggest rethinking assessment methods to address AI challenges. For instance, SMU’s Associate Professor Seshan Ramaswami encourages students to disclose AI use and critically evaluate AI outputs. He also employs AI to generate quizzes and chatbots for student queries while warning against blind trust in AI-generated content.
Possible assessment innovations include:
- Assignments based on local contexts
- Oral exams to test depth of understanding
- In-class discussions with limited device use
These approaches aim to assess original thinking and comprehension beyond AI’s current capabilities.
Research fellow Dr. Thijs Willems highlights the importance of originality, sophistication in AI prompting, and human judgment. He values tools like reflective journals, prompt logs, and peer feedback to showcase critical engagement with AI.
SUSS’s Associate Professor Wang Yue sees AI as a way to shift focus from basic skills to higher-order thinking, preparing students for real-world challenges.
Why Critical Thinking Matters More Than Ever
The quick answers AI provides can be tempting but risky. Dr. Willems warns that treating AI as a “one-click answer engine” can lead to mediocre work and weaker understanding. The key is for learners to critically engage with both AI and their subject matter.
Dr. Jean Liu from NUS emphasizes clear boundaries between acceptable AI use and academic dishonesty. AI should support learning as a tutor or mentor, not replace student effort.
Similarly, Dr. Jason Tan from the National Institute of Education points out that AI tests student integrity and responsibility. Over-reliance may erode critical thinking, so students must decide what they want from their education.
Conclusion
Singapore’s universities are adapting to the growing presence of generative AI by balancing its benefits with the need for academic honesty. The focus is on guiding students to use AI responsibly, redesigning assessments to measure genuine understanding, and fostering critical thinking skills that AI cannot replace.
For educators looking to deepen their understanding of AI’s role in learning and assessment strategies, exploring specialized courses on AI tools and ethics can be valuable. Resources such as Complete AI Training’s latest AI courses offer practical insights tailored for education professionals.