73% of Canadian students use AI for schoolwork. Many worry it erases who they are.
College students across Canada are adopting generative AI at scale, but a growing concern cuts deeper than academic integrity policies: they fear the tool is making their work unrecognizable as their own.
A KPMG Canada survey of 684 students found that 73% use generative AI for schoolwork, with nearly half saying it is their "first instinct." Yet many report unease about the practice. The tension isn't primarily about cheating; it's about identity.
Students observe that AI-generated drafts read well and often earn better grades. But they don't sound like them. One student described it plainly: "It's better writing, yeah, it sounds good and helps get a better grade. But it's kinda generic. Like anyone could've written it, not just me."
Writing is how students position themselves as professionals
In STEM fields especially, writing does more than convey information. It's where students build narratives about who they are and where they belong in their chosen field. Through assignments, students undertake what researchers call "identity work": they explore how to present themselves as emerging professionals and test whether they fit within disciplinary communities.
STEM programs operate with implicit rules about what counts as credible and legitimate. Voice and tone in writing signal that a student has internalized those rules and can speak as an insider. When AI smooths that voice into something generic, it can feel like self-erasure.
This matters more for some students than others. Those already uncertain about belonging, including many women and students from underrepresented backgrounds in STEM, may question whether their success is authentic if a tool did the writing.
Institutions are still figuring out policy
Canadian post-secondary institutions are developing AI policies that balance flexibility with oversight. Most allow limited AI use while requiring disclosure and addressing risks like fabricated citations and bias. But enforcement remains inconsistent, leaving students to guess what's permitted and what counts as their own work.
UNESCO has warned that AI systems can shape how knowledge is produced and expressed, raising questions about human agency. Canadian policy discussions echo the same concern: AI may assist writing, but it also changes how voice is expressed and how people think about themselves.
What educators can do
Rather than focus only on detecting misuse, instructors can redesign assignments to make student thinking visible. A few practical shifts:
- Ask students to explain how they used AI in their work
- Have them compare an AI-generated paragraph with their own and discuss what changed in tone, clarity, and reasoning
- Request revision of AI-polished text so it reflects the student's own thinking
- Identify where their interpretation and uncertainty matter to the work
These moves treat writing in one's own voice as a skill worth developing, not an obstacle to overcome. They also acknowledge that AI is here. The question is whether classrooms will help students use these tools without losing their voice, their agency, and their sense of belonging.
For writers and communicators working in education or STEM, understanding this shift matters. The conversation is moving beyond whether students should use AI to how they can use it while staying authentic.