Adopt or ban? What 1,366 Italian students really think about GenAI in higher education
An Italy-wide survey of 1,366 students from 24 universities (Oct-Dec 2023) offers a clear signal: students use AI, they worry about ethics, and they want regulation - not bans.
If you work in education, the takeaway is simple. Bring GenAI into the open with clear rules, assessment redesign, and staff training. Quiet ambiguity only feeds misuse and fear.
Key stats you can use
- Usage gap: 69.2% used GenAI for personal tasks; 38.7% said they used it for academic work. When asked about specific modes of use, that academic figure rose to 48.8% - a likely sign of underreporting.
- Gender gap: Men reported higher usage than women for both personal (+24.8%) and academic (+17.3%) tasks.
- Age and skills: Younger and more digitally skilled students use GenAI more. Higher digital skill also correlated with lower trust in AI's accuracy.
- Fields: Highest academic use in political/social/communication studies (61.1%), then computer science/ICT (52.7%), engineering, art and design. Lower use in humanities (languages, psychology, education, law, literature).
- Morality: 58.3% agreed that using ChatGPT for tasks/exams is morally wrong; 37.2% said it "defeats the purpose of education."
- Policy sentiment: 81.4% want universities to regulate AI use; 81% oppose bans.
- Perceptions of impact: Students expect AI to become "the new normal" (87%) and see benefits for information retrieval and comprehension, but worry about its effect on critical thinking.
- Personal vs societal risk: Low concern about AI harming their own education/careers; higher concern about societal impact and future jobs.
- Faculty behavior: About 85% of instructors showed no explicit classroom stance on AI. Only ~7% demonstrated live use. More activity (guidance or critique) appeared in political/social/communication and art and design - not the obvious STEM areas.
What this means for educators
Students are using GenAI now, mostly to support parts of an assignment - not to submit outputs untouched. Many feel conflicted about ethics and critical thinking. They're asking institutions to set guardrails.
Absence of guidance doesn't reduce use. It just pushes it underground.
Practical policy: regulate, don't ban
- Disclosure: Require a short "AI use note" on submissions describing tools used, prompts, and how outputs were revised.
- Attribution: Cite AI assistance where relevant (e.g., "Generated a first draft outline; rewritten by me"). Students remain responsible for accuracy and sources.
- Allowed uses (examples): brainstorming, outlines, language polish, code comments, sample test cases, practice quizzes.
- Prohibited uses (examples): submitting AI output as-is, fabricating citations/data, evading assigned readings, or using tools where explicitly disallowed.
- Verification: Keep the right to ask for process artifacts (notes, drafts, prompt logs) or oral follow-ups.
- Privacy and data: No uploading personal or sensitive data. Clarify approved tools and settings.
- Equity: Ensure alternative workflows for students who cannot or choose not to use AI.
- Review cycle: Revisit policy each term; tools and norms change fast.
Assessment ideas that still reward thinking
- Process-first grading: Grade research trails, drafts, and reflection on AI prompts and revisions.
- AI-in-the-open tasks: Allow AI use, then assess critique, verification, and improvement of outputs.
- Oral defense: Short, structured viva or screenshare walkthrough of decisions.
- Unique inputs: Use local data, live cases, or rotating prompts that generic models won't nail.
- Closed-resource sprints: In-class reasoning tasks to practice core skills without tools.
Teaching with GenAI without lowering standards
- Show good vs poor prompts and how tiny prompt tweaks change outcomes.
- Compare outputs with sources; trace errors, bias, and missing context.
- Require citation checks and "evidence chains" to validate claims.
- Use AI to produce drafts; students must revise, justify choices, and reflect on trade-offs.
Close the gender and field gaps
- Offer low-pressure, opt-in labs that focus on practical tasks students already do (summaries, feedback, problem setup).
- Use peer demos across disciplines; highlight domain-relevant use cases (e.g., qualitative coding aids, multilingual writing support).
- Make ethical use explicit: what's allowed, what's not, and why - with examples.
- Invite students to critique AI on topics they care about; skepticism improves judgment.
Sample syllabus language (copy/paste)
- You may use GenAI for brainstorming, outlining, and language polish unless I specify otherwise.
- You must include an "AI use note" listing tools, prompts, and how you changed the output.
- You are responsible for all content. Verify facts, citations, and methods.
- No fabricated references or data. No submitting AI output as-is.
- Protect privacy. Do not share personal, confidential, or assessment content with external tools.
- If asked, be ready to show drafts, notes, and your prompt history.
Where students believe AI helps and hurts
- Helps: finding information, understanding complex ideas, language improvement.
- Hurts: perceived risk to critical thinking and original thought if used as a shortcut.
- Reality check: Students see stronger benefits when usage is intentional and combined with verification and reflection.
Faculty development (quick start)
- Run a 60-minute workshop: 20 min demo, 20 min critique, 20 min redesign an assignment with AI-in-the-open rules.
- Share a one-page policy and a three-scenario "what's allowed" guide for your department.
- Pilot in two courses, gather student reflections, iterate next term.
Further reading and resources
- UNESCO guidance on AI in education
- EDUCAUSE: AI in teaching and learning
- Complete AI Training: courses by job
Bottom line
Students want clarity, not crackdowns. Set transparent rules, design for thinking, and teach verification. You'll reduce misuse, keep standards high, and make space for genuine learning - with or without AI.