AI Class Sparks Intern's Time-Saving NDA Tool at S&P Global
Ayo Bowman, a 3L at Rutgers Law School in Newark, walked into Professor David Kemp's "Generative AI Skills for Lawyers" course with zero AI experience. A few months later, during a summer internship at S&P Global, he built an AI prototype that streamlined NDA review and won the company's Interns Innovation Challenge.
The problem was familiar to any in-house team: high volumes of routine, low-risk NDAs consuming hours that could be spent on strategic matters. Bowman set out to relieve that bottleneck with a tool that could triage the simple work and surface the rest.
The Solution: An AI-Powered NDA Analyzer
What it does: The tool uses large language models to compare incoming NDAs against S&P Global's internal contracting policies, flagging clauses that trigger risk thresholds. It doesn't stop at summaries; it maps contract language to plain-English risk categories that non-lawyers can understand.
"The project is an AI-powered NDA analyzer designed to help in-house legal teams quickly review and flag risks in non-disclosure agreements," said Bowman. "At its core, it uses large language models to evaluate incoming NDAs against S&P Global's internal contracting policies - spotting red flags … It doesn't just summarize, it actually maps contract language to legal risk categories in plain English, which makes it more useful for cross-functional teams."
As the internship wrapped up, the project was handed off to stakeholders across the business, including C-suite leaders and an internal AI group, so the work could continue beyond the summer.
Why It Worked for Legal
- It targets a repeatable, high-volume document (NDAs) where risk is usually low and policy is clear.
- It operationalizes the playbook lawyers already use (policy checklists and risk taxonomies) by making it machine-readable.
- It keeps lawyers in the loop: routine items move fast; anything off-policy escalates with context.
- It produces outputs non-lawyers can act on, shrinking back-and-forth and accelerating deal flow.
How Your Team Can Pilot Something Similar
- Pick one document type with volume and well-defined positions (e.g., NDAs, vendor forms). Codify your redlines and fallback clauses.
- Build a simple risk taxonomy (e.g., confidentiality scope, term, jurisdiction, assignment, IP, remedies) and define thresholds for escalate vs. approve.
- Use an LLM to compare clauses to policy and generate plain-language summaries with citations to the text it relied on; a minimal sketch follows this list.
- Route outputs into your existing intake and matter management workflow; log time saved and escalation rate.
- Address privacy and confidentiality from day one. Align with frameworks like the NIST AI Risk Management Framework.
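As a starting point, the clause-to-policy comparison described in the list above might look something like the sketch below. It assumes an OpenAI-style chat completions client; the model name, prompt wording, and the `POLICY` dictionary from the earlier sketch are placeholders to adapt to your own stack and playbook, not a description of Bowman's tool.

```python
import json

from openai import OpenAI  # assumes the OpenAI Python SDK; swap in your provider

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are reviewing one NDA clause against an internal policy.

Policy ({category}): {policy}
Clause: {clause}

Respond with JSON: {{"within_policy": true or false,
"risk_note": "one plain-English sentence citing the clause language"}}"""


def review_clause(category: str, policy: str, clause: str) -> dict:
    """Ask the model whether a single clause complies with one policy position."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(
                       category=category, policy=policy, clause=clause)}],
        response_format={"type": "json_object"},  # ask for parseable JSON
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)


def triage(clauses: dict[str, str], policy: dict[str, str]) -> list[dict]:
    """Compare each extracted clause to policy and flag anything off-policy."""
    findings = []
    for category, clause in clauses.items():
        result = review_clause(category, policy.get(category, ""), clause)
        result["category"] = category
        result["action"] = "approve" if result.get("within_policy") else "escalate"
        findings.append(result)
    return findings
```

In a pilot, clauses extracted from incoming NDAs would feed into `triage`, each finding would be logged against the metrics in the list above (time saved, escalation rate), and "escalate" items would route into the existing intake workflow with the risk note attached.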
From Classroom to Practice
Professor David Kemp's course gave students practical fluency with AI and a responsibility-first mindset: use the tools well, and help set better practices for the profession. That foundation helped Bowman spot a real business problem and prototype a credible solution.
Bowman also sees a bigger opportunity ahead: use AI to widen access, scale good ideas, and reimagine how legal services are delivered. His project proved that law students, and legal teams, can build useful systems even without deep technical backgrounds.
Rutgers Law's Strategic Focus
Rutgers Law's Strategic Plan emphasizes strong student learning by aligning courses with career paths, bringing current industry issues into class, and sharing curricular innovation. This project is a concrete example of that approach paying off in practice.
Want to Build Similar Skills?
If you're exploring practical AI training for legal and in-house roles, see curated options by role at Complete AI Training. Start small, pick a clear use case, and iterate with your policies at the center.