UCL Law School Cracks Down on AI Use with New Assessment Rules

UCL Law School is ensuring that over half of its assessments cannot be completed with AI, in a bid to maintain academic integrity. The shift focuses on testing skills AI cannot replace and on preparing students for real legal practice.

Published on: May 10, 2025

UCL Law School Tightens AI-Proof Assessment Measures

University College London’s law school is taking a firm stance on artificial intelligence (AI) in legal education by ensuring that most of its assessments cannot be completed with AI assistance. This approach is aimed at preserving trust and integrity in its degrees amid concerns over what the school calls “AI slop” — an abundance of easily generated AI content lacking critical thought.

Why the Shift to AI-Proof Assessments?

The law school emphasizes two core reasons for this move. First, they want their degrees to remain powerful, internationally recognised markers of student achievement. Second, they aim to maintain assessments that genuinely test the skills and knowledge that AI cannot replace. According to UCL, a “secure assessment” is one where AI does not stand in for the student’s own skills or learning, covering both written and oral in-person exams.

While UCL already prohibits AI-generated content in coursework unless explicitly allowed for a valid reason, the law school now plans to ensure that over half of law assessments are AI-proof. This reflects a return to pre-pandemic assessment practices, moving away from the increased reliance on coursework that opened doors for AI assistance.

Preparing Students for Real-World Legal Practice

UCL points out that many legal careers will involve working in environments or jurisdictions at early stages of digital and AI adoption. Essential skills such as quick thinking during cross-examination and ethical handling of sensitive evidence cannot be replaced by AI tools. The school insists that these abilities must be developed through assessments that require direct student engagement without AI shortcuts.

Challenges from Advancing AI Tools

The rapid improvement of AI tools adds pressure on educators. For instance, AI chatbots passed the Watson Glaser critical thinking test in 2022 and contract exams in 2023. More advanced AI agents can autonomously perform multi-step tasks, and features like ChatGPT’s “deep research” can produce extended, high-quality essay-style outputs. This progress necessitates a reassessment of how legal education evaluates student capabilities.

Historical Parallels with Legal Databases

The school draws parallels with past experience, when legal databases became cheaply accessible to universities. Those vendors offered low-cost access so that students would be trained to rely on their products, securing a future user base. The law school’s response to AI echoes earlier resistance efforts, such as the establishment of the free BAILII case database. The concern is that AI companies are similarly encouraging dependency on their products rather than fostering independent skills.

Microsoft’s AI, Copilot, integrated into widely used office software, exemplifies this trend. The law firm Shoosmiths recently offered a £1 million bonus for lawyers submitting one million prompts through Copilot, illustrating its growing foothold in legal work.

Education Versus Professional Practice

The law school acknowledges that practising lawyers can and do use AI tools to produce content. Students, however, are in a different position: their degrees should assess foundational skills rather than the ability to produce AI-assisted output. With AI-generated text becoming abundant and “cheap,” students will need creativity and critical thinking to use these tools effectively.

Global Educational Responses and Calls for Action

UCL’s position is part of a broader international conversation. For example, Victoria University of Wellington in New Zealand has reintroduced handwritten exams this trimester to combat AI reliance.

UCL calls on universities to actively shape their AI strategies rather than passively adopting technology promoted by vendors. They stress the importance of universities steering or developing the technology necessary to fulfill their educational missions.

Legal Profession’s Mixed Signals on AI

The legal sector itself is in transition. This month saw the approval of Garfield.Law, the first AI-driven regulated law firm in England and Wales. Judges have received updated guidance on using AI tools like Copilot and on identifying AI-generated submissions. Meanwhile, the Pupillage Gateway has banned AI use in applications, even as some law firms offer advice on incorporating AI tools.

This patchwork of policies from regulators, universities, and recruiters has created uncertainty for law students about acceptable AI use.

Conclusion

UCL Law School’s move to AI-proof assessments underlines a commitment to preserving educational integrity and core legal skills. By limiting AI’s role in assessments, it aims to prepare students for authentic legal practice that demands critical thinking, ethical judgement, and real-time responsiveness—qualities AI cannot replicate.

For legal professionals and educators, these developments signal a need to rethink how AI fits into legal training and practice. Staying informed about AI’s role in law and education is essential as the sector adapts to these emerging challenges.

