Providing AI training leads to more critical and ethical use by university students
Generative AI is no longer a novelty on campus. A new study from the Universitat Oberta de Catalunya (UOC) shows that focused training plus guided debate helps students use these tools with more rigour, honesty and awareness.
The research, published in open access in the journal RIED-Revista Iberoamericana de Educación a Distancia, worked with 929 students in an interdisciplinary digital skills course. One cohort completed a baseline questionnaire; a second cohort received GenAI-focused learning materials and took part in structured debates, then completed the same questionnaire. The comparative data points in one direction: clear information and reflective discussion increase perceived knowledge and the ability to make informed choices.
What moved the needle
The improvement didn't come from lectures alone. It came from pairing practical examples of how GenAI tools work with a space to test ideas, debate trade-offs and pressure-test decisions.
- Learning resources showed concrete use cases, limitations and appropriate contexts.
- Guided debates pushed students to apply critical thinking to academic honesty, authorship and verification.
- Outcomes included stronger commitments to cite help appropriately, avoid over-reliance and check AI-generated information.
One gap remained: data protection and legal considerations. Many students underestimated the risks or assumed they weren't affected. This is a clear opportunity for improvement in future iterations.
What predicts GenAI knowledge
Age wasn't the key factor; field of study was. Students' degree programmes correlated more strongly with GenAI exposure and familiarity than age did. The takeaway for educators: adapt training to each discipline so that examples match real tasks and career paths.
How to implement this model in your course or school
- Run a quick baseline survey on use, trust, academic honesty and verification habits.
- Offer a short primer on how GenAI works, its limits (hallucinations, bias) and privacy considerations.
- Design activities that require using GenAI on defined tasks, comparing outputs and reflecting on what helped or misled.
- Host structured debates: what counts as acceptable help, what requires citation, and where human judgement stays non-negotiable.
- Embed verification steps: require sources, cross-checking and a brief audit trail of prompts and edits.
- Add micro-lessons on data protection and legal basics (terms of use, consent, storing prompts and outputs).
- Assess the process, not just the product: reward reflection, verification, and appropriate attribution.
- Support teaching staff with clear guidance and examples they can reuse.
- Collect the same metrics later and compare cohorts to see what's working (see the sketch after this list).
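
For that last step, here is a minimal sketch of a cohort comparison in Python. The survey item and the scores are hypothetical; it assumes responses on a 1-5 Likert scale and uses Welch's t-test from SciPy to compare the two cohorts' means.

```python
from statistics import mean
from scipy.stats import ttest_ind

# Hypothetical responses to one survey item on a 1-5 Likert scale,
# e.g. "I verify AI-generated information before using it."
baseline_cohort = [2, 3, 3, 2, 4, 3, 2, 3, 3, 2]
trained_cohort = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4]

print(f"Baseline mean: {mean(baseline_cohort):.2f}")
print(f"Trained mean:  {mean(trained_cohort):.2f}")

# Welch's t-test: is the difference between cohorts likely to be real?
t_stat, p_value = ttest_ind(trained_cohort, baseline_cohort, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

As in the study itself, this compares two independent cohorts rather than tracking the same students over time, so treat differing intakes as a possible confounder before reading too much into the p-value.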
Ready-to-use activity ideas
- Compare-and-critique: Students prompt a GenAI tool, peer-swap outputs, and mark factual errors and missing citations.
- Policy in practice: Draft a short "AI use policy" for a specific assignment, then stress-test it with edge cases.
- Debate with roles: Pro, con, and ethics lead discuss a scenario (e.g., AI outlines vs. AI writing full drafts) and agree on clear boundaries.
- Bias check: Run the same prompt across models and examine differences, biases and transparency (a comparison harness is sketched below).
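
For the bias-check activity, a small harness keeps the comparison fair by sending the identical prompt to every model. The sketch below is illustrative only: `query_model` and the model names are hypothetical stand-ins to be wired up to whichever model APIs your institution provides.

```python
# Minimal sketch of a same-prompt, multi-model comparison harness.
# query_model() and the model names are hypothetical placeholders;
# wire them to whichever model APIs your institution provides.

def query_model(model_name: str, prompt: str) -> str:
    # Placeholder so the sketch runs end to end without external APIs.
    return f"[{model_name}] placeholder output for: {prompt!r}"

MODELS = ["model-a", "model-b", "model-c"]        # hypothetical names
PROMPT = "Describe a typical software engineer."  # probe for stereotyping

def collect_outputs(models: list[str], prompt: str) -> dict[str, str]:
    """Send the identical prompt to every model and keep the raw outputs."""
    return {name: query_model(name, prompt) for name in models}

if __name__ == "__main__":
    for name, text in collect_outputs(MODELS, PROMPT).items():
        print(f"=== {name} ===\n{text}\n")
    # Students then annotate each output for assumptions, omissions and
    # demographic framing, and compare the annotations across models.
```

Keeping the prompt fixed and collecting the raw outputs side by side makes the students' annotations, not the harness, the interesting part of the exercise.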
What's next
The model is already being applied in UOC courses with explicit AI-use assignments and shared reflection, and secondary schools have requested adaptations for their settings. Future research will look at how GenAI affects assessment and how teaching strategies evolve as these tools become standard.
Resources
- Open-access study: RIED-Revista Iberoamericana de Educación a Distancia (DOI)
The core message is simple: GenAI is here, and students will use it. Our job is to teach them to use it critically, ethically and with their own judgement front and centre.