Inside Columbia Law's AI Task Force: Teaching, Research, and What's Next for the Profession

Columbia Law faculty show how AI is entering teaching, research, and practice, and a new task force will recommend curriculum updates. They stress AI fluency, evidence rules, and China's policy shifts.

Published on: Jan 24, 2026

Faculty Experts on Generative AI, Legal Education, and the Future of the Profession

At a Lawyers, Community, and Impact event on November 10, Columbia Law School faculty shared how AI is entering their classrooms, shaping research, and changing legal practice. A new faculty task force, made up of Talia Gillis, Benjamin L. Liebman, Eric Talley, and Rebecca Wexler, was formed to recommend curriculum updates so graduates are ready for a fast-moving profession.

Dean and Lucy G. Moses Professor of Law Daniel Abebe framed the mission clearly: law students need practical AI fluency that applies across firms, public interest organizations, and government. Efficiency, judgment, and responsible use are now table stakes.

AI in the Classroom

Talia Gillis, Professor of Law

Gillis uses AI as a teaching mirror. After class, she uploads notes, outlines student exchanges, and asks where explanations were clear, or where a discussion could have taken a better path.

This debrief loop helps her refine concepts, sequence ideas, and flag gaps. It's continuous feedback, not a one-off prompt.

Eric Talley, Marc and Eva Stern Professor of Law and Business

Talley taught a J-Term course on machine learning and the law that pulled back the curtain on how legal AI works. Students explored what happens behind the scenes of chatbots and large language models.

His goal: build lawyerly intuition to question AI outputs. If an answer feels off, students should know how to interrogate it and decide when to trust, test, or toss it.

Benjamin L. Liebman, Robert L. Lieff Professor of Law; Vice Dean for Intellectual Life

Liebman notes that China is moving fast on AI adoption and regulation. In his spring 2026 course, Law and Legal Institutions in China, he will examine how Chinese policy choices may influence legal systems beyond the country's borders.

For legal educators, this is a signal to treat foreign AI regulation as a core part of comparative law and policy study, not a side note.

Rebecca Wexler, Alfred W. Bressler Professor of Law

Wexler has integrated AI into her Evidence course. Many rules, including hearsay, reliability standards for experts, and the Sixth Amendment's Confrontation Clause, were built for human witnesses, not machine-generated outputs.

This gap sets up timely debates about proposals to adjust the Federal Rules of Evidence for AI. For context on the Confrontation Clause, see the Sixth Amendment text at the Legal Information Institute: Cornell LII.

AI and Scholarship

Gillis treats AI like a 24/7 research collaborator. She tests clarity by asking it to restate arguments, critique structure, and surface blind spots, then iterates.

Liebman's work on the Chinese legal system draws on a dataset of 130 million court judgments. His team is experimenting with ChatGPT to probe the data faster than legacy methods allow.
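The talk did not describe the team's pipeline, but a minimal version of that kind of LLM-assisted probing is easy to sketch. The example below is hypothetical, not Liebman's actual workflow: it assumes the OpenAI Python client, a placeholder model name, and invented field names (case_type, outcome, damages_awarded), and simply asks the model to pull a few structured fields out of one judgment at a time.

```python
# Hypothetical sketch: using an LLM to extract structured fields from a court
# judgment. Prompt wording, model choice, and field names are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are annotating Chinese court judgments. "
    "Return a JSON object with keys case_type, outcome, and damages_awarded. "
    "Use null for any field the judgment does not state.\n\n"
    "Judgment text:\n{text}"
)

def annotate_judgment(text: str) -> dict:
    """Ask the model to extract a few structured fields from one judgment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        response_format={"type": "json_object"},  # request machine-readable output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

# Usage on a small sample (loader and file name are hypothetical):
# from collections import Counter
# judgments = load_sample("judgments_sample.jsonl")
# outcomes = Counter(annotate_judgment(j)["outcome"] for j in judgments)
# print(outcomes.most_common(5))
```

Before trusting labels like these at the scale of 130 million documents, a team would normally validate them against a hand-coded sample, which is exactly the kind of methodological check such experiments are meant to surface.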

Wexler is studying how AI built into discovery tools could bias results and suppress exculpatory evidence, raising due process concerns. With a computer scientist, she's running simulations on synthesized datasets to validate the risk.
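Her study design has not been published, so the sketch below is only an illustration of the general idea, with every number and name invented: a synthetic corpus in which a small fraction of documents is exculpatory, a ranking score that quietly penalizes those documents, and a comparison of how many exculpatory documents survive into the top-k set a reviewer would actually read.

```python
# Hypothetical sketch (not Wexler's actual study): simulate a discovery-style
# ranker biased against exculpatory documents and measure how often such
# documents fall outside the top-k set that reviewers actually see.
import numpy as np

rng = np.random.default_rng(0)
N_DOCS, TOP_K, BIAS = 10_000, 500, 1.0  # corpus size, review cutoff, bias strength

# Synthetic corpus: each document has a true relevance score; 2% are exculpatory.
relevance = rng.normal(size=N_DOCS)
exculpatory = rng.random(N_DOCS) < 0.02

def recall_at_k(scores: np.ndarray, k: int) -> float:
    """Fraction of exculpatory documents that land in the top-k reviewed set."""
    top_k = np.argsort(scores)[::-1][:k]
    return exculpatory[top_k].sum() / exculpatory.sum()

unbiased = relevance                     # ranker scores track relevance alone
biased = relevance - BIAS * exculpatory  # same ranker, penalizing exculpatory docs

print(f"Exculpatory recall, unbiased ranker: {recall_at_k(unbiased, TOP_K):.2%}")
print(f"Exculpatory recall, biased ranker:   {recall_at_k(biased, TOP_K):.2%}")
```

Varying BIAS and TOP_K shows how quickly even a modest scoring penalty can shut exculpatory material out of the reviewed set, which is the due process concern in concrete form.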

AI in Practice

Talley told students the field is wide open. Opportunities will emerge in areas like tax, property, employment, and environmental law-wherever AI creates new risks, duties, and workflows. Being early in a niche will matter.

Gillis has been meeting with Big Law partners. The consensus: AI augments lawyers rather than replaces them, especially junior associates who know how to use these tools well. Curiosity, tool fluency, and the ability to collaborate with technologists and data scientists are now fundamental skills.

Practical Takeaways for Legal Educators and Employers

  • Integrate AI literacy across core courses. Treat prompt quality, verification, and failure modes as essential skills.
  • Teach "trust but verify." Build exercises that require students to audit and improve AI outputs.
  • Update Evidence and Criminal Procedure modules. Address machine-generated evidence, provenance, and reliability frameworks.
  • Prioritize data governance. Establish policies for confidential data, retention, model use, and audit logs.
  • Build cross-functional fluency. Encourage work with engineers, data scientists, and policy teams on real problems.
  • Track international policy. China's regulatory choices may influence global practice and compliance strategy.

Where to Skill Up

If you're building an AI learning path for your law school or firm, a curated catalog can speed up selection and rollout. See role-based options here: AI courses by job.

About Lawyers, Community, and Impact

Launched in 2016, this series brings Columbia Law experts together to address current issues and add context that supports work inside and outside the classroom.

