K&L Gates Achieves ISO 42001 for AI, Turning Compliance into a Competitive Edge

K&L Gates earned ISO/IEC 42001 for its AI program, signaling AI's shift from pilots to daily legal work. It's a new bar for governance: accountability, risk, ethics.

Published on: Mar 14, 2026

K&L Gates Earns ISO/IEC 42001 AI Certification: What It Means for Legal Teams

Top 50 US firm K&L Gates has secured ISO/IEC 42001:2023 certification for its artificial intelligence management system. That's a clear signal: AI is moving from experiments to everyday legal work (research, drafting, contract review), and clients want proof the risks are handled.

Following an independent audit, the certification covers accountability, risk management, ethics, transparency, data protection and regulatory compliance. For in-house teams and law firm leaders, this is a new benchmark for how AI should be governed.

Why it matters

"We pursued ISO 42001 because AI is becoming part of everyday legal work, and we believe it's important to manage that responsibly, rather than reactively," said K&L Gates chief technology officer Harpreet Suri. She added: "It's actually about governance, accountability and trust… Clients, regulators and courts are all asking… similar questions, what AI are you using? How is risk managed, who is accountable if something goes wrong? And ISO 42001 gives us a clear, independent way to answer those questions."

What ISO 42001 expects

  • Documented accountability for AI decisions and outcomes.
  • Risk assessment and controls across the AI lifecycle.
  • Clear ethics and transparency practices (including disclosure to clients).
  • Data protection aligned with applicable laws and firm policies.
  • Ongoing monitoring, incident handling and compliance reporting.

For a broader view of AI standardization, see the ISO/IEC AI committee overview.

How K&L Gates is using AI today

  • Tools: Legora, Vincent, Westlaw Precision AI, Microsoft 365 Copilot.
  • Use cases: legal research, drafting, contract review, due diligence, discovery.
  • Timeline: began work toward certification around May 2025; created a cross-disciplinary "AI solutions" group in 2023.

Governance and training at scale

The firm's AI policy mandates approved platforms, transparency with clients, verification of AI outputs and mandatory training. It uses AltaClaro and Hotshot Legal to train lawyers and allied professionals in AI literacy and prompt engineering.

"Before we give any AI tools into the hands of our lawyers or allied professionals, there's mandatory training requirements, and only then the tool is given to them," Suri said. "Then on a regular basis, we have… what we call our continued learning programme, where we talk about new aspects of AI."

For practical resources on legal AI adoption and policy, see AI for Legal. For executive-level governance and risk topics, see AI for Executives & Strategy.

The skeptic's view

AI advisor and law school lecturer Josh Kubicki applauded the move but cautioned: "Getting certified is a one-time exercise and is about management processes, not technical assurances." He noted the certification does not audit whether specific tools are fit for purpose.

Still, he expects market pressure to grow: "K&L Gates just made it a competitive checkbox. Once one major firm has it, procurement departments and RFP processes will start asking 'do you have ISO 42001?'"

What clients and RFPs will ask next

  • Which AI systems are in use, and for which matters.
  • How risk is identified, mitigated and escalated.
  • Who is accountable for AI-assisted work product and errors.
  • How client data is protected and where it flows (storage, vendors, access).
  • How outputs are verified and documented in the file.
  • Training requirements for timekeepers and staff.

Practical steps for your firm

  • Map your current AI use (tools, data sources, use cases, owners). Close shadow IT.
  • Stand up a cross-functional AI governance group (legal, IT, risk, privacy, KM, training).
  • Adopt written policies: approved tools, disclosure rules, human-in-the-loop review, incident handling.
  • Implement training by role (AI literacy for all; prompt practices and verification for users; vendor/risk review for IT and legal ops).
  • Establish an audit trail: prompts, outputs, validation notes, and client communications.
  • Vendor management: contractually address data use, security, model updates and logs.
  • Pilot, measure, then scale: start with research and contract workflows where review is standardized.

ROI: what leadership should track

Global managing partner Stacy Ackermann said AI is part of "every single strategic priority" for the next two years, spanning internal processes, business development and legal work. She declined to share spend, but noted the value shows up in efficiency and winning or retaining clients.

  • Cycle time: research memos, first-draft contracts, diligence summaries.
  • Quality and consistency: fewer reworks, clearer work product standards.
  • Staff leverage: hours shifted from routine tasks to higher-value work.
  • Commercial impact: RFP wins, client retention, new service lines.

Bottom line

ISO/IEC 42001 won't certify that your tools are "fit for purpose." It will prove you run AI with discipline (policy, controls, and accountability) at the firm level.

As more clients add this to their checklists, the question isn't whether to formalize AI governance; it's how fast you can stand it up without slowing matters or piling on admin. Get the policy, training and verification workflow in place now. The work, and the RFPs, aren't waiting.

If you want a complementary framework to strengthen your program, review NIST's AI Risk Management Framework.
