Edufair 2025: Outthinking AI Is the Next Big Skill for Students
AI makes answers easy. The hard part is judgment. At Gulf News Edufair 2025, educators agreed: the core skill now is the ability to question, test, and refine what both humans and machines produce.
Dr S. Sudhindra (Manipal Academy of Higher Education), Jaspreet Singh Sethi (Middlesex University Dubai), and Dr Nidhi Sehgal (Curtin University Dubai) called for teaching that forces deeper thinking, reworked assessments, and clear toolkits that make reflection unavoidable.
Why judgment is the new literacy
Facts are free. Discernment is scarce. Students need to gauge accuracy, bias, and intent, and then add a human perspective.
As Sethi put it, "AI can already recall facts better than us. We need to scaffold teaching so that students can leverage AI to cover the basics, and then reflect on what AI generates and add a human perspective."
The toolkit: structure beats slogans
"We can't simply preach," said Dr Sudhindra. "What is required is a toolkit: tools that enable students to reflect and question as a natural part of learning."
- Scaffolded prompts: Ask students to use AI for first-pass ideas, then require critique, comparison, and revision with cited sources.
- Process-first grading: Mark the thinking trail (drafts, annotations, prompts used, and changes made), weighting decisions over final polish.
- Journals and peer reviews: Short, frequent entries and feedback cycles to surface assumptions and blind spots.
- Assessment redesign: Fewer recall tasks, more transfer, synthesis, and live reasoning where students must explain how they reached an answer.
Build reflection into the day, not the semester
Reflection should be routine, not a capstone. Sethi suggested quick, practical activities: peer reviews, draft evaluations, and guided discussions that force a pause before submission.
Dr Sudhindra highlighted simple tools: "Journaling or critical discussion with peers brings the right questions to the table. Make it deliberate."
Dr Sehgal shifts how work is judged: "I'm less interested in what they wrote and more in how their learning and thinking changed." Her go-to is a "3-2-1" reflection after an AI-assisted task: three insights, two doubts, one example. It pushes students from reproduction to reasoning.
Bias vs. hallucination: teach the difference
AI feels confident even when wrong. "AI has a tendency to provide more of whatever conforms to your point of view," said Dr Sudhindra. Students should approach outputs as hypotheses to test, not facts to accept.
- Bias: Systematic tilt in data or prompts that skews results.
- Hallucination: Fabricated content stated as if true.
Sethi noted that Middlesex University Dubai now runs an AI literacy course so students know how models work, what data they draw on, and how to question outputs. This baseline helps them spot bias and false claims early.
For policy and guardrails, see UNESCO's guidance for generative AI in education and research; for risk framing, see the NIST AI Risk Management Framework.
Interdisciplinary exposure widens judgment
Sethi emphasized formats that mix perspectives when curricula are rigid: guest talks, open forums, and competitions with mixed-discipline teams. These settings reveal how the same data can support different conclusions.
Dr Sehgal argued this work must live across programs, not in one-off workshops. It touches epistemology (how we know) and psychology (how we think). That integration creates durable habits of inquiry.
What to change now
- Default to "AI as hypothesis." Require students to verify AI outputs with sources and explain their verification steps.
- Make reflection visible. Use 3-2-1 checkouts, prompt logs, and rationale statements with every major task.
- Assess reasoning, not recall. Favor viva-style defenses, whiteboard problem solving, and comparative critiques over fact regurgitation.
- Teach prompt strategy. Show how neutral framing, counter-prompts, and source requests reduce bias and expose gaps.
- Run short AI literacy modules. Cover model limits, data provenance, bias, hallucination, and safe classroom use.
- Create mixed teams. Pair technical and non-technical students to build balanced judgment.
Bottom line
Outthinking AI is teachable. It requires structure, not slogans; reflection, not faith in outputs; and assessments that reward clear thinking.
As Dr Sehgal summed up, students shouldn't be naive believers or cynical disbelievers. They should learn to hold the tension, test claims, and choose with care.
Tools and training for educators
- Plan a two-hour AI literacy starter pack: model basics, bias vs. hallucination, prompt patterns, verification drills.
- Build a shared prompt-and-citation template so every student submits their process.
- Explore staff upskilling options: AI courses matched to job roles, aimed at curriculum leads and teaching teams.