Over 500 students caught using AI to cheat as Irish colleges struggle to set the rules

Over 500 Irish students were flagged for unauthorised AI use; true figures may be higher and detectors fall short. Colleges need clear rules, process checks, and fair penalties.

Published on: Jan 12, 2026

Over 500 students flagged for unauthorised AI use: what educators need to know

More than 500 students in Irish higher education were found to have used AI in an unauthorised way in graded coursework in 2024-2025. The true figure is likely higher because many universities group AI misuse with plagiarism and do not track it separately.

The Higher Education Authority does not require separate AI tracking, but the duty to protect academic integrity remains. The message is clear: AI isn't a loophole to close once; it's a capability to govern, teach, and assess against.

What institutions reported

  • University of Galway: 224 cases. With ~20,000 students, the university notes it's likely underreported.
  • TU Dublin: 71 cases across multiple faculties; outcomes varied by assessment type.
  • National College of Ireland: 68 cases; repeat misuse risks suspension for the rest of the year.
  • Dundalk IT: 43 cases out of ~5,000 students.
  • Royal College of Surgeons: 36 cases.
  • St Patrick's Carlow College: 46 cases; mostly grade penalties, some fails with capped resubmission.
  • Mary Immaculate College: Coursework confirmed to contain unauthorised AI use received an F; resits were allowed with capped grades.
  • Some larger universities did not centralise AI-specific data; many leave handling to individual schools.

Detectors aren't the answer

According to the National Academic Integrity Network, AI detectors are not recommended and can generate false positives. The AI Advisory Council agrees that detection methods are unreliable.

The University of Galway relies on subject experts to spot concerns, followed by a one-to-one student conversation. It's labour-intensive, but repeat cases are low, suggesting coaching beats a purely punitive approach.

"The introduction of generative AI has changed higher education," said the university's Academic Integrity Officer, Dr Justin Tonra. "We're seeing more cheating facilitated by the technology. That's a challenge we can't ignore."

Sanctions vary, and that's part of the problem

Penalty frameworks differ by institution and even by faculty. TU Dublin saw outcomes depend on the assessment. Mary Immaculate issued Fs with capped resits. St Patrick's Carlow College used grade penalties, failed assignments, and capped resubmissions. The National College of Ireland warned repeat offenders could face suspension.

In short: inconsistent rules create confusion for students and workload for staff. A consistent, published framework is overdue.

Students want clarity, not guesswork

Student leaders say unclear AI policies are a recurring issue. In some modules, AI is encouraged; in others, it's banned, without concrete guidance. That inconsistency leads to accidental breaches.

Students also raised sustainability and ethics concerns. One psychology student said the environmental impact and ethics aren't discussed enough. A commerce student argued AI should be taught because employers will expect it. Another student questioned grading an "AI-use" assignment that required little skill to operate the tool.

As one student put it: pushing AI out of university is "like a trade school trying to push a carpenter away from a saw." The tool isn't the problem. Lack of learning is.

What educators can do now

  • Publish a clear AI policy in every module. Define what is allowed, what must be disclosed, how to cite AI outputs, and what is prohibited. Make it visible in the syllabus and assessment briefs.
  • Design for process, not just product. Use staged drafts, version history, in-class writing checks, oral defenses, and reflective memos explaining decisions and sources. Unique datasets, local case work, and personal artefacts reduce generic AI use.
  • Teach AI literacy at the right stage. As Professor Michael Madden notes, introduce generative AI once students understand core principles and can judge tool use. Focus on reasoning, structure, and verification, not just output creation.
  • Replace detectors with human-led checks. Train staff to spot telltales (inconsistent voice, unverifiable claims, mismatched references). Use short viva-style conversations to confirm authorship.
  • Standardise consequences. Create a campus-wide sanctions matrix with proportional penalties for first vs repeat offences. Keep a central record that distinguishes AI misuse from other plagiarism.
  • Coach first offenders. Provide quick-turn workshops on citation, acceptable AI use, and integrity. Share exemplars of compliant AI-supported work.
  • Address ethics and sustainability. Discuss model limitations, bias, privacy, and environmental impact. Encourage students to justify when AI helps and when it harms learning.
  • Update rubrics. Reward evidence of thinking: problem framing, sources, drafts, reasoning steps, and reflection on tool use. Make "undisclosed AI" a clearly penalised criterion.
  • Communicate often. Align program and faculty messaging. Remind students before major assessments. Keep FAQs current.
  • Invest in staff development. Give lecturers practical training to design AI-resilient assessments and teach ethical AI use.
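A campus-wide sanctions matrix, as suggested above, can be as simple as a lookup table that escalates with repeat offences. The sketch below is purely illustrative: the categories, penalty wording, and tier structure are assumptions for demonstration, not any institution's actual policy.

```python
# Hypothetical sanctions matrix: penalties escalate with repeat offences.
# Category names and penalty text are illustrative only.
SANCTIONS = {
    "undisclosed_ai_use": [
        "coaching workshop + capped resubmission",
        "fail assignment",
        "refer to disciplinary board",
    ],
    "fabricated_references": [
        "fail assignment",
        "fail module",
        "refer to disciplinary board",
    ],
}

def sanction_for(category: str, offence_number: int) -> str:
    """Return the penalty for the nth offence (1-indexed),
    capped at the most severe tier for that category."""
    tiers = SANCTIONS[category]
    return tiers[min(offence_number, len(tiers)) - 1]
```

Publishing a table like this in the student handbook makes outcomes predictable: a first offence routes to coaching, while repeat misuse escalates on a known path.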

Sample syllabus clause you can adapt

"AI tools may be used for brainstorming, outlining, and code scaffolding only if disclosed. You must cite the tool and prompts used, and verify all content and references. Submitting AI-generated work as your own without disclosure is academic misconduct and will be penalised under the university's policy."

How to measure progress

  • Track AI-related cases separately from other misconduct.
  • Review patterns each term: modules, assessment types, repeat rates.
  • Iterate your assessment design and policy language based on the data.
  • Share findings with program leads and student reps.
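The review steps above need only a minimal case log to work. Here is one possible sketch in Python; the record fields (`student`, `module`, `assessment`) are assumptions about what a central register might hold, not a real schema.

```python
from collections import Counter

# Illustrative case log; in practice this would come from a central register.
cases = [
    {"student": "s1", "module": "CS101", "assessment": "essay"},
    {"student": "s2", "module": "CS101", "assessment": "report"},
    {"student": "s1", "module": "HI205", "assessment": "essay"},
    {"student": "s3", "module": "HI205", "assessment": "essay"},
]

# Per-term patterns: which modules and assessment types attract cases.
by_module = Counter(c["module"] for c in cases)
by_assessment = Counter(c["assessment"] for c in cases)

# Repeat rate: share of flagged students who appear more than once.
per_student = Counter(c["student"] for c in cases)
repeat_rate = sum(1 for n in per_student.values() if n > 1) / len(per_student)
```

Even this small tally surfaces the signals the article calls for: essay-based assessments dominating the log, or a low repeat rate supporting a coaching-first response.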

Bottom line

AI is now part of coursework and professional life. The goal isn't to ban the tool; it's to make sure grades reflect student learning. As one professor put it, the tool isn't the issue; the absence of learning is.

Clear policy, process-first assessment, and targeted coaching will reduce misconduct and prepare students for responsible AI use in their careers.

