Law schools race to add AI to the curriculum as Harvey expands
Need to know: Harvey is rolling out its law school programme across the UK, partnering with Oxford's Faculty of Law, King's College London (KCL), BPP and the University of Law. More than 25 US law schools already use Harvey, including Stanford and the University of Chicago. UK legal education is responding, fast.
Students and faculty will get full access to Harvey, plus support integrating it into classes, workshops and research. This isn't a guest lecture. The goal is hands-on use across the degree so graduates arrive in practice with working AI habits.
Why this matters for firms
The old pipeline (theory at university, a little practice at law school, real learning during a training contract) is changing. Firms want trainees who can brief, review and draft with AI from day one, and who know where the red lines are.
Some firms are investing directly. Freshfields is sending future trainees to KCL's Law and Technology master's and covering tuition and living costs. That's a clear market signal.
KCL's 'world-first' AI Literacy Programme
KCL's Dickson Poon School of Law is launching an AI Literacy Programme in January. Every law student and staff member gets access to four tools (Harvey, Luminance, Legora and Lucio), alongside a structured 12-week online course and weekly practitioner workshops. The programme is led by executive dean Professor Dan Hunter.
For the vendors, this is smart distribution. Win the next cohort of associates now, and you shape tool preferences later.
What they said
Professor Dan Hunter: "Artificial intelligence is no longer optional for the next generation of lawyers - it's fundamental. By giving every student and staff member hands-on access and structured guidance, we're ensuring that King's graduates will lead the legal profession's AI future."
Harvey's chief business officer John Haddock: "We will work closely with faculty to offer student workshops, curriculum embeds, and collaborative projects that explore AI's role in law and society."
Legora's head of legal engineering, Alex Fortescue-Webb: "By integrating our platform into clinical work, students will see first-hand how responsible AI can enhance real-world legal service and access to justice."
How the programmes will show up in class
- Live drafting labs: prompts to generate first drafts, followed by human edit passes with track changes.
- Research sprints: parallel queries across tools, then verify with primary sources and citation checks.
- Clinic work: triage, issue spotting and template selection with AI support, plus audit trails.
- Assessment redesign: mark on reasoning, verification steps and model limits, not just the final text.
What firms and in-house teams should do now
- Publish a short, plain-English AI use policy: what's allowed, what's banned, who approves exceptions.
- Stand up a secure sandbox: firm-licensed tools, logging on by default, client-segmented data.
- Create prompt libraries for common tasks (NDAs, engagement letters, hearing notes) with verification checklists.
- Train on source control: always link claims to primary law and client docs; require citations where possible.
- Define quality gates: second-lawyer review for AI-assisted work, and disclosure rules in client terms.
- Track outcomes: time saved, defects found, rework rates, and client feedback; use this to update playbooks.
Skills new grads will bring
- First-pass research memos with source mapping and limits of confidence.
- Clause comparison and fallback suggestions tied to playbooks.
- Issue spotting on intake docs and NDAs, with risk flags and questions for the client.
- Summaries of hearings, transcripts and long email threads with action lists.
- Document review triage for disclosure, privilege hints and inconsistency checks.
Risks to manage early
- Fidelity to sources: require verification and cite-back to primary materials.
- Confidentiality: block uploads of client data to consumer tools; use enterprise instances with retention controls.
- Attribution: be clear on who is accountable when AI is used in drafting or analysis.
- Bias and fairness: review outputs in sensitive matters (employment, housing, credit, immigration).
- Academic integrity and training: distinguish between acceptable assistance and outsourcing the thinking.
- IP and licensing: check use terms for models, plug-ins and training data.
What to watch next
Expect more UK law schools to follow with tool access, assessed modules, and vendor partnerships. Firms will start asking for AI proficiency in applications and assessment centres. Certification and shared benchmarks will emerge, likely around verification discipline and auditability.
For context on KCL, see the Dickson Poon School of Law's pages. For governance and risk, the UK ICO's guidance on AI and data protection is a useful reference.
Want structured training for your team?
If you need a fast, practical way to bring fee-earners and PSLs up to speed, explore curated AI courses by job role on Complete AI Training. Pick modules that mirror your matters, then bake the checklists into your playbooks.