AI Advances into the Law School Curriculum
Artificial intelligence is reshaping legal work. The Law School is integrating it with care, teaching students how to use modern tools without neglecting the core habits that define good lawyering: research, analysis, and judgment.
"We have spent a lot of time thinking about AI in the Law School," said William H. J. Hubbard, '00, deputy dean and the Harry N. Wyatt Professor of Law. "Our aim is to find the right balance between encouraging students to explore these new tools that could be very useful, and not short-cutting in ways that would be pedagogically unhelpful."
Large firms are already adopting AI for research, drafting, and review. The goal is clear: graduates should arrive with baseline fluency, including knowing when not to use a tool and how to check its work.
Embracing the Technology, With Guardrails
The Law School added an orientation session, "AI and the Legal Profession," to set expectations from day one. Students hear what's allowed, what isn't, and how AI supports learning without replacing it.
- AI is a useful tool students will learn to use in law school and in practice.
- It has limits and risks, including accuracy and confidentiality issues.
- AI doesn't change the core of lawyering: judgment, ethics, and ownership of work.
New upper-level electives push these ideas into practice: Advanced Legal Writing in the Age of AI; Regulation of AI: Legal and Constitutional Issues; Digital Lawyering: Advocacy in the Age of AI; and Generative AI and Legal Practice.
"We can't outsource expertise and knowledge to these AI models," said Mark Templeton, clinical professor and director of the Abrams Environmental Law Clinic. "These tools can generate what look to be beautiful pieces of writing, but when you look closely, there are so many errors because the tools don't understand technical terms sufficiently. When you use AI, there is a duty to supervise it like you would a junior attorney or paralegal. And to fulfill that duty, you have to be the expert yourself."
Baseline Literacy for All 1Ls
Starting in early 2026, all 1Ls will complete self-directed AI modules in their first quarter. Students who are new to AI can build confidence; those with experience can move faster into advanced tasks.
The modules will also steer students toward tools that fit legal work and protect client interests. This aligns with the profession's duty of competence on technology, as reflected in ABA Model Rule 1.1 (Comment 8).
Weaving AI into Coursework
The Bigelow Program now phases AI into 1L legal writing. No AI use is permitted in the fall, when students learn the basics on their own. In winter, AI is permitted under faculty guidance, with explicit instruction on when and how to use it.
This approach reinforces a core principle: build the skill, then add the tool. Students get a safe space to test AI and learn how to verify its output.
Different Courses, Different Policies
In his clinic, Templeton fully permits AI for research, drafting discovery, and portions of briefs, but requires disclosure when it is used. That transparency lets the class review outputs, compare them to primary sources, and debate their quality.
- Allowed: research support, drafting outlines, first-pass language for non-substantive sections.
- Required: disclosure, source-checking, and attorney-level review before anything is filed or shared.
Other faculty tighten the rules. Joan Neal, a professor from practice who teaches transactional skills, prohibits AI in her upper-level contract drafting course. Students need to learn structure, precision, and judgment before leaning on a tool that can produce confident but wrong language.
Neal still addresses AI head-on: pros, cons, and ethics. In her ethics class, students may use AI for brainstorming or supplemental research, but not to write the body of their papers. They must disclose any use and remain responsible for every line they submit. Many are learning firsthand that AI often gives generic answers and struggles with the nuance of legal ethics rules.
Learning by Building: The AI Lab
Launched in fall 2025, the AI Lab is a hands-on workshop focused on creating tools, not just using them. Legal tech entrepreneur Kimball Dean Parker, '13, founder and CEO of SixFifty, teaches the class.
This year's project targets renters' rights. Students are compiling a nationwide database of carefully researched summaries of property rental laws and interviewing people to identify real questions that matter. The final product: a public-facing chatbot that delivers answers grounded in that specialized database.
The aim is practical impact: help people who can't easily access a lawyer while giving students experience that mirrors legal product work, from scoping and data quality to ethics, iteration, and deployment.
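To make the "grounded chatbot" idea concrete, here is a minimal illustrative sketch of the pattern the article describes: answer only from a curated database of researched summaries, cite the entry used, and refuse when nothing matches. The function, data, and matching logic are assumptions for illustration, not the AI Lab's actual system.

```python
# Hypothetical sketch of database-grounded Q&A. The database contents,
# names, and keyword matching below are illustrative assumptions.

RENTAL_LAW_DB = {
    # jurisdiction -> carefully researched summary (placeholder text)
    "illinois": "Example summary: security deposits must be returned within a set period.",
    "texas": "Example summary: late fees are capped under certain conditions.",
}

def grounded_answer(question: str) -> str:
    """Answer only from the curated database; refuse otherwise."""
    q = question.lower()
    for jurisdiction, summary in RENTAL_LAW_DB.items():
        if jurisdiction in q:
            # Each answer names the entry it came from, so a reviewer
            # can trace it back to the underlying research.
            return f"[{jurisdiction}] {summary}"
    return "No vetted entry covers this question; consult a lawyer."

print(grounded_answer("What are deposit rules in Illinois?"))
```

The refusal branch is the point: grounding the chatbot in a vetted database, rather than letting a model answer freely, is what keeps the output reviewable and limits hallucinated legal advice.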
What This Means for Legal Teams
The message to employers and new lawyers is straightforward: AI belongs in the toolkit, but it requires supervision. The Law School's approach of firm fundamentals, controlled experimentation, and clear disclosure maps well to law firm expectations.
- Expect baseline AI literacy in new hires, plus the ability to verify and cite properly.
- Insist on review standards that mirror junior-attorney supervision.
- Protect privilege and confidentiality in every tool choice and workflow.
As Dean Adam Chilton noted, graduates are hired for their judgment and command of the law-things that cannot be outsourced. Thoughtful adoption of AI helps them meet today's demands and lead tomorrow's practices.