AI and the Law, From a Student's Seat: Practical Lessons from Ja'Terra Scott, JD '26
AI won't replace your judgment. That was the clear throughline from a semester inside AI and the Law with Ja'Terra Scott, JD '26. She walked in with little hands-on exposure to AI and left with a grounded method for using it without losing the plot.
Her reason for signing up was direct: "All too often, the Black community is underrepresented in high-growth tech fields and are sometimes the first to face job displacement due to automation. I wanted to set myself up for success as I prepare for a legal career."
What changed her mind about AI
Ja'Terra expected a helpful tool. She discovered a moving target. "It seemed like every couple of weeks a new model emerged, with new features we could explore and more capabilities." That speed was exciting, and also a warning signal for process, policy, and cite-checks.
She also learned the hard line between convenience and accuracy. "I used to think AI was the be-all and end-all, but very quickly you realize it's not. There was so much still needed, like verifying that all cited quotes were 100% accurate (which they almost never were), and making sure the inference it relied on was also valid."
Where LLMs actually help in legal work
- Breaking the blank page for memos, emails, or outlines.
- Framing issues and brainstorming angles you might overlook.
- Summarizing long materials to speed early triage.
- Pointing you toward potential sources when you're stuck.
"It made researching much easier; it could help point you in the right direction if you got stuck or needed help finishing a sentence." That's the right way to think about it: directional, not dispositive.
Where LLMs miss the mark
- Confident but incorrect citations and quotes.
- Generic reasoning that glosses over key facts and nuance.
- Shaky inferences if your prompt lacks a solid factual baseline.
- Overreliance by novices who haven't read the underlying sources.
Her take is blunt: "Some of the reasoning was extremely generic and missed the mark. That's why it's so important not to rely entirely on the LLM, because doing your own research is still crucial."
A simple, defensible workflow for attorneys and law students
- Define the question. Write the exact legal issue, jurisdiction, timeframe, and desired output length.
- Collect your base materials first. Facts, filings, and key authorities; don't ask the model to invent your record.
- Use AI for structure, not substance. Ask for issue lists, outline variations, and alternative theories.
- Demand transparency. Ask the model to separate "known from text provided" vs. "assumptions."
- Verify every authority. Shepardize/KeyCite, confirm quotes in the source, and cross-check dates and holdings.
- Cite last. Insert citations after you confirm them yourself, not from the draft AI proposes.
- Keep a prompt log. Record what you asked and what you used to preserve your reasoning trail.
- Protect confidentiality. Use approved tools and scrub client identifiers unless your policy allows it.
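The prompt-log step above can be sketched as a tiny append-only JSON Lines helper. This is a minimal illustration, not any firm's actual system: the file name, field names, and the `log_prompt` function are all invented for the example, and a real log should follow your firm's retention and confidentiality policy.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("prompt_log.jsonl")  # hypothetical file name

def log_prompt(matter: str, prompt: str, model: str, used_in_work_product: bool) -> dict:
    """Append one research prompt to a JSON Lines audit file and return the entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter": matter,  # matter/file number only; keep client identifiers out
        "model": model,
        "prompt": prompt,
        "used_in_work_product": used_in_work_product,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_prompt(
    matter="2026-0042",
    prompt="Outline possible defenses; jurisdiction and facts supplied separately.",
    model="example-model",
    used_in_work_product=False,
)
```

One line per prompt keeps the reasoning trail reviewable later: you can see exactly what was asked, when, and whether the answer ever touched a filed document.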
Standards and ethics still apply
Treat AI like any junior researcher whose work you must review. Competence and supervision still sit with you. For a quick refresher, see the ABA's tech competence guidance under Model Rule 1.1, Comment 8.
If you're building internal policies, the NIST AI Risk Management Framework is a practical place to start for risk, governance, and testing expectations.
The role of teaching: why the "how it works" matters
Tools change. Principles don't. Ja'Terra credits the course, and Professor O'Reilly, for focusing on what AI is doing under the hood, not just tricks and prompts. "That's so pivotal to understanding the system you're using."
Once you see how large language models predict text, it's easier to spot weak reasoning, fill factual gaps, and tighten your prompts. Your skepticism gets sharper.
What she's using now-and how much she trusts it
Ja'Terra has already folded AI into her summer research methods. But with guardrails. "AI is really good at finding what you may need, but it just doesn't replace the secondary sources and due diligence of reading the material and having a solid background of knowledge. Without that, you're doomed to fail."
Her personal standard is clear: use it, but don't lean on it too hard. "It should be used carefully and relied on about 35% of the time."
Equity and access matter here
Her first motive is worth repeating: technology literacy protects careers. For communities that are underrepresented in tech and at higher risk of displacement, early familiarity with AI isn't a luxury; it's leverage for fair opportunity. Legal educators and employers should treat that as a concrete training goal.
Bottom line for legal professionals
- Use LLMs to move faster, not to skip reading.
- Tight prompts plus solid facts produce better drafts.
- Everything that matters (citations, quotes, holdings) gets verified by you.
- Keep an audit trail and follow firm policy.
- Invest in training that explains how these systems work, not just what buttons to press.
Want structured, job-focused AI upskilling?
If you're building a personal plan or firm-wide program, explore curated options by role here: Complete AI Training - Courses by Job. Keep it practical, policy-aligned, and focused on outcomes you can measure.