AI Is Changing How Students Learn: Higher Ed Must Keep Up

Higher ed's AI moment is here: current assessments miss the mark. Redesign for process, transparency, and ethics so students learn to think with tools, and prove how they got there.

Published on: Nov 19, 2025

Higher Education's AI Moment: Transform Learning Now

On 17 November, Lingnan University's Teaching and Learning Centre and The University of Sydney's Educational Innovation team co-hosted a seminar on how global higher education pedagogy must change in the face of fast-moving innovation and technology. Around 300 participants joined in person and online.

The takeaway was clear: current teaching and assessment approaches aren't built for AI-enabled learning. If we don't adapt, we'll assess the wrong things, reward shallow outputs, and miss the core skills students actually need.

Why the shift can't wait

Prof Frankie Lam (Lingnan University) underscored the gap between traditional methods and what students face today. AI is already embedded in study habits and professional work. Pretending otherwise creates integrity problems and weakens learning.

The path forward is to redesign assessment so it values thinking, process, and ethical use of AI, rather than penalizing students for using the same tools employers expect.

From final product to the full learning journey

Prof Jen Scott Curwood (The University of Sydney) presented a process-first approach to assessment. Instead of grading a polished end product, educators evaluate the trail of learning: decisions, drafts, tool use, and reflections.

  • Require transparent AI-use statements: what was used, why, and how it changed the work.
  • Collect process evidence: prompts, iterations, notes, and drafts (text, audio, or video).
  • Make reflection non-negotiable: critique AI outputs, identify errors or bias, and justify acceptance or rejection.
  • Use oral defenses and live reviews: verify authorship, probe decision quality, and reinforce accountability.
  • Grade judgment and methodology: problem framing, source quality, verification steps, and ethical reasoning, alongside outcomes.

Bring non-STEM students into tech problem-solving

Prof Albert Ko (Lingnan University) showed how students from any discipline can apply technology to humanitarian and social challenges. The message: don't confine AI literacy to computing programs. Society needs cross-disciplinary problem solvers.

  • Short, high-impact sprints: define a community problem, use no-code tools or data sources, ship a workable concept in two weeks.
  • Evidence over hype: require baseline data, measure change, and compare before/after.
  • Ethics labs: privacy, consent, accessibility, and cultural context built into project reviews.
  • Partner with NGOs or local agencies: authentic constraints, real stakeholders, useful feedback.

Rebuild core graduate attributes with AI in the loop

As AI tools become widely accessible, higher education should use them to improve outcomes and accelerate development of essential graduate attributes.

  • Critical thinking: students compare AI-generated analysis with human analysis, stress-test claims, and document verification.
  • Communication: concise briefs, multi-format explanations, and audience-appropriate tone, with AI as a drafting assistant that is always cited.
  • Collaboration: shared AI notebooks or design boards with peer reviews of strategy, not just final outputs.
  • Social responsibility: projects must address a defined need, show evidence of utility, and reflect on risks and unintended effects.
  • AI literacy: model selection, prompt strategy, bias awareness, data stewardship, and limits of automation.

A 90-day action plan for program leaders

  • Weeks 1-2: pick two core courses to pilot process-first assessment; set clear AI-use guidelines and citation rules.
  • Weeks 3-4: redesign one major assignment to capture process artifacts; add an oral checkpoint or live demo.
  • Weeks 5-6: co-create a rubric that scores reasoning, verification, and ethical use alongside outcomes; test with a small cohort.
  • Weeks 7-8: run staff workshops on AI literacy, academic integrity with AI, and feedback efficiency.
  • Weeks 9-10: gather student and marker feedback; compare learning depth, time cost, and integrity signals.
  • Weeks 11-12: refine, document, and scale to additional courses; share exemplars and quick-start packs.

Integrity and safety guardrails

  • Mandatory AI-use disclosure and citation in every submission.
  • Privacy rules: no uploading personal data or restricted content; prefer institution-approved tools.
  • Assessment design that makes invisible help visible: checkpoints, drafts, and reflective memos.
  • Fairness checks: do not penalize students for disclosing AI; reward judgment and verification.
  • Accessibility: ensure alternatives for students with limited tool access or connectivity.

What this means for academic leaders

Waiting for a perfect policy will cost a full intake of students. Start with a small set of courses, make AI use transparent, and grade thinking. Track what improves learning and replicate it.

Seminars like this make one thing obvious: the institutions that adapt fastest will deliver graduates who can think with tools, not just around them.
