AI Is Outpacing the Law: Lee Tiedrich on Why Legal Systems Must Evolve Before It's Too Late
AI isn't waiting for the law to catch up. Lee Tiedrich - Distinguished Faculty Fellow at Duke University and former partner at Covington & Burling LLP - has advised global corporations, governments, and non-profits for over three decades. Her work with the OECD and the Global Partnership on AI puts her at the center of responsible innovation and digital ethics.
Her message to legal leaders is blunt: the rulebook is being rewritten in real time. Enforcement is rising, new requirements are landing, and procurement and standards are shaping obligations outside traditional legislation.
What legal teams should be preparing for now
Tiedrich sees three fronts moving fast: new AI requirements in key jurisdictions, a rise in enforcement, and policy shifts through procurement and standards. The result: higher legal exposure across IP, discrimination, privacy, and consumer protection - and a premium on trust.
Translation for in-house and law firm teams: treat AI risk like a continuous program, not a one-off review. Build governance by design and keep it active for the whole lifecycle.
Practical governance that actually works
- Form a multidisciplinary team (legal, product, engineering, security, risk, sustainability) and empower it to say "stop."
- Run pre-deployment assessments: purpose, benefits, foreseeable harms, mitigation steps, and escalation paths.
- Track through the lifecycle: design, development, deployment, monitoring, and retirement. Don't skip post-release audits.
- Document decisions, data sources, and model changes. If you can't explain it, you can't defend it. (A minimal decision-log sketch follows this list.)
- Build sustainability into design choices - energy use, model size, and vendor selection are legal and reputational concerns.
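
Documentation is easier to defend when it's operational, not aspirational. Here's a minimal sketch of an append-only decision log in Python, assuming an in-memory store; the field names (decision, approver, model_version, data_sources) are illustrative placeholders, not a prescribed schema. Adapt it to whatever model registry you already run.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class GovernanceRecord:
    """One auditable entry: who decided what, about which model, and why."""
    decision: str                  # e.g., "approved for pilot", "retraining required"
    approver: str                  # an accountable human, not a team alias
    model_version: str
    data_sources: tuple[str, ...]
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[GovernanceRecord] = []

def record_decision(record: GovernanceRecord) -> None:
    # Append-only by design: corrections get a new entry rather than an edit,
    # so the full history can be produced if a regulator or court asks.
    audit_log.append(record)

# Hypothetical usage - names and values are invented for illustration.
record_decision(GovernanceRecord(
    decision="approved for limited deployment",
    approver="jdoe@example.com",
    model_version="credit-scoring-v2.3",
    data_sources=("licensed_bureau_data", "internal_applications"),
    rationale="Bias review passed; human review required for denials.",
))
```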
A process for integrating new AI into products and strategy
- Scope the use case and risk tier. Tie controls to impact (e.g., safety, employment, credit, healthcare); see the tiering sketch after this list.
- Run IP clearance and rights mapping before you build or buy. Assume training data and outputs can trigger claims.
- Stand up data governance: lawful basis, data minimization, provenance tracking, and retention aligned with purpose.
- Define human-in-the-loop for high-stakes decisions. Codify when humans review, override, or shut down.
- Plan for incident response: model drift, bias findings, and regulator inquiries.
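
To make the tiering step concrete, here is a minimal sketch assuming a simple two-tier scheme; the domain list, tier names, and control names are placeholders for illustration, not a regulatory taxonomy. The point is the pattern: impact drives the tier, and the tier drives a non-negotiable minimum control set.

```python
# Hypothetical risk-tiering sketch: maps an impact domain to a tier and
# the minimum controls required before deployment.
HIGH_IMPACT_DOMAINS = {"safety", "employment", "credit", "healthcare"}

BASELINE_CONTROLS = ["ip_clearance", "data_governance_review"]
HIGH_TIER_CONTROLS = BASELINE_CONTROLS + [
    "human_in_the_loop",          # codified review, override, and shutdown points
    "pre_deployment_bias_test",
    "incident_response_plan",     # drift, bias findings, regulator inquiries
]

def risk_tier(impact_domain: str) -> tuple[str, list[str]]:
    """Return (tier, required_controls) for a given impact domain."""
    if impact_domain.lower() in HIGH_IMPACT_DOMAINS:
        return "high", HIGH_TIER_CONTROLS
    return "standard", BASELINE_CONTROLS

tier, controls = risk_tier("credit")
print(tier, controls)  # "high" plus the full high-tier control set
```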
AI and intellectual property: where the ground is shifting
AI is generating valuable outputs - from compound discovery to creative content - while many jurisdictions still lack clear answers on protection, authorship thresholds, and ownership across complex value chains.
While laws evolve, contracts are the most reliable tool to reduce uncertainty. Tiedrich's work through the Global Partnership on AI points to standard contract terms as a way to bring predictability and lower transaction costs.
Contract terms worth standardizing
- Ownership and licensing: who owns models, data, and outputs; license scope; derivative works; moral rights where relevant.
- Human contribution: define the level of human input needed for protectable works and who supplies it.
- Training data rights: representations on data provenance, scraping practices, and lawful use; opt-outs and revocation.
- IP warranties and indemnities: coverage for patent, copyright, trademark, and trade secret claims, including training-data disputes.
- Attribution and confidentiality: source disclosures where required; strict handling of trade secrets.
- Testing and audit: bias, safety, and performance testing rights; audit trails; model and data access boundaries.
- Compliance with evolving laws and standards: automatic updates and renegotiation triggers.
Data scraping: the litigation magnet
Scraping public websites and social platforms to train models has sparked lawsuits and enforcement actions across IP, privacy, and consumer protection. Expect more, not less.
- Lock down data acquisition: licenses where possible; documented provenance; respect for technical controls and terms.
- Set supplier obligations: no unauthorized scraping; pass-through rights; flow-down of complaints and takedowns.
- Maintain a kill switch: the ability to remove datasets or retrain on demand if claims arise (see the registry sketch below).
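
A kill switch only works if every training dataset is registered with its provenance first. Here's a minimal sketch, assuming an in-memory registry and a retraining hook you'd wire into your own pipeline; class names and fields are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str          # where the data came from
    license_ref: str     # license or permission on file; "none" is a red flag
    scraped: bool        # acquired by scraping rather than under license

class DatasetRegistry:
    def __init__(self) -> None:
        self._datasets: dict[str, DatasetRecord] = {}

    def register(self, record: DatasetRecord) -> None:
        if record.scraped and record.license_ref == "none":
            # Supplier obligation: unauthorized scraping never enters training.
            raise ValueError(f"{record.name}: scraped data without documented rights")
        self._datasets[record.name] = record

    def kill_switch(self, name: str) -> None:
        """Remove a dataset on demand (e.g., a claim arises) and flag retraining."""
        self._datasets.pop(name, None)
        print(f"{name} removed; schedule retraining without it.")

# Hypothetical usage.
registry = DatasetRegistry()
registry.register(DatasetRecord(
    name="product_reviews_2024",
    source="vendor_feed",
    license_ref="MSA-2024-17",
    scraped=False,
))
registry.kill_switch("product_reviews_2024")
```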
Procurement and standards are shaping the rules
Public and private buyers are baking AI assurances into contracts, effectively setting de facto legal requirements. Standards are doing the same. Legal teams should track both with the same rigor as statutes and case law.
- Map your obligations to major frameworks and upcoming rules in your markets (e.g., the EU AI Act, the NIST AI Risk Management Framework, ISO/IEC 42001).
- Build controls that satisfy audits by design - not as a last-minute memo.
A quick checklist for legal leaders
- Inventory AI use cases and vendors; classify risk and assign owners.
- Stand up AI policies with approval gates and audit trails.
- Refresh IP strategy for AI-generated outputs and training data.
- Standardize contract terms for AI deals; build a clause library.
- Run bias, safety, and privacy reviews before and after deployment.
- Train product, procurement, and engineering on legal guardrails.
Tiedrich's core point is simple: trusted AI isn't just safer - it sells. The teams that treat ethics, IP, and governance as product features will move faster with fewer surprises.
If your team needs structured upskilling on AI practices and tools, explore concise programs at Complete AI Training - courses by job.