Artificial Intelligence Risks Every Legal Professional Needs to Know

AI tools like ChatGPT and Lexis+ are increasingly used in legal work, boosting efficiency but posing malpractice and confidentiality risks. Lawyers must verify AI outputs, protect client data, and ensure proper oversight.

Published on: Jul 15, 2025

From Innovation to Exposure: Artificial Intelligence Risks for Legal Professionals

Generative artificial intelligence (AI) tools are becoming common in legal practice, bringing both opportunities and new risks. General-purpose models like ChatGPT, Claude, and Gemini, alongside specialized legal AI tools such as CoCounsel, Harvey, and Lexis+ AI, are increasingly used by attorneys. What was once an experimental technology is now edging toward becoming standard in legal workflows.

A recent American Bar Association survey shows that 45.3% of lawyers expect AI to be mainstream within three years, with 12.8% saying it already is. This suggests AI use may soon be part of the expected standard of care. Lawyers have a duty to stay informed about technology, including the risks it introduces. This article focuses on malpractice risks and insurance coverage related to AI use in legal practice.

Artificial Intelligence at Work in Legal Practice

Lawyers are applying AI in tasks such as drafting discovery requests, interrogatory responses, contracts, and pleadings. AI is also used to summarize transcripts and conduct research. While AI can increase speed and efficiency, it carries risks like undetected errors, overlooked facts, and misapplied law.

Emerging Malpractice Risks

AI itself doesn't cause malpractice; how lawyers use it does. Some key risks include:

  • Failure to verify AI output: In Mata v. Avianca (S.D.N.Y. 2023), an attorney relied on fabricated case citations generated by ChatGPT without verifying their accuracy, breaching the duty of competence. Courts are responding with rules requiring disclosure of AI use in filings. Many firms require junior lawyers to inform supervising attorneys when AI assists with work.
  • Confidentiality breaches: Uploading confidential client information into public AI tools without understanding their data retention policies risks violating confidentiality rules such as ABA Model Rule 1.6 and could result in unauthorized data sharing or privacy law violations. Using AI tools that keep data local can help reduce this risk; a brief sketch after this list shows what that can look like in practice.
  • Delegation and oversight: Professional responsibility rules require supervising subordinate work, including AI-assisted tasks. Delegating AI use to nonlawyers is allowed but demands attorney oversight.
  • Unauthorized practice of law: Client-facing AI chatbots providing legal advice without attorney oversight risk constituting unauthorized practice, with potential claims against the firm or lawyer.
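
To make the "keep data local" point concrete, here is a minimal Python sketch that summarizes a transcript with a locally hosted model. It assumes an Ollama server running at localhost:11434 with a model already pulled locally; the endpoint, model name, prompt, and summarize_locally helper are illustrative assumptions, not a product recommendation.

    # Minimal sketch: summarize a transcript with a locally hosted model,
    # so client data never leaves hardware the firm controls.
    # Assumes an Ollama server at localhost:11434 with a model already
    # pulled locally; the model name and prompt are illustrative.
    import requests

    def summarize_locally(transcript_text: str) -> str:
        resp = requests.post(
            "http://localhost:11434/api/generate",  # local endpoint; nothing goes to a third party
            json={
                "model": "llama3",  # hypothetical locally pulled model
                "prompt": "Summarize this deposition transcript:\n\n" + transcript_text,
                "stream": False,
            },
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["response"]

The design point is architectural rather than tool-specific: when the model runs on hardware the firm controls, the data-retention questions raised by public tools largely fall away, though the duty to verify the output remains.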

Managing AI Risks with Best Practices

  • Internal policies: Set clear guidelines on which AI tools are approved and for what purposes, with particular attention to confidentiality concerns.
  • Verification: Always confirm the accuracy and reliability of AI-generated outputs before relying on them.
  • Confidentiality protection: Use AI services with transparent data governance policies and ensure compliance with client confidentiality agreements.
  • Education and documentation: Train staff on AI risks and document when and how AI tools contribute to work products; a sketch of one possible logging approach follows this list.
  • Disclosure: Consider disclosing AI use in filings and client documents where material, following applicable court rules.
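
As one concrete way to document AI-assisted work, a firm might keep an append-only usage log. The Python sketch below shows one hypothetical shape for such a record; the field names, file path, and log_ai_use helper are illustrative assumptions, not an established standard.

    # Minimal sketch of an AI-usage log entry, appended whenever an AI
    # tool contributes to a work product. Fields and file path are
    # illustrative, not a standard.
    import json
    from datetime import datetime, timezone

    def log_ai_use(matter_id: str, tool: str, task: str,
                   reviewer: str, verified: bool) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "matter_id": matter_id,  # client matter the output relates to
            "tool": tool,            # which approved AI tool was used
            "task": task,            # e.g., "first draft of interrogatory responses"
            "reviewer": reviewer,    # attorney who reviewed the output
            "verified": verified,    # whether verification actually happened
        }
        with open("ai_usage_log.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

A record like this supports both attorney oversight and any court-imposed disclosure obligations, because it shows who verified what and when.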

Insurance Coverage Implications

As AI use grows, lawyers should pay attention to how it affects insurance coverage, especially professional liability policies.

  • Lawyers' professional liability (LPL) policies: Most LPL policies do not explicitly exclude AI use, but coverage depends on whether the AI-related conduct qualifies as a “professional service.” Failing to review AI-generated work could jeopardize coverage if insurers argue the resulting work did not constitute legal services.
  • Common exclusions: Claims involving intentional misconduct (e.g., knowingly submitting false citations) may trigger intentional-acts exclusions. Breach of contractual obligations related to AI tools might also fall outside coverage due to contractual-liability exclusions. Some policies exclude technology-related failures.
  • Cyber insurance: Using client data in AI tools may lead to confidentiality breaches or data breaches involving personal or protected health information. Such incidents could trigger cyber coverage but may also face exclusions, including intentional-acts clauses.
  • “Silent AI” risk: Because many policies neither explicitly include nor exclude AI-related risks, there is uncertainty (sometimes called silent risk) that coverage may be absent when it is needed, or unintentionally provided when it is not; either outcome invites disputes.
  • AI-specific endorsements and policies: Some insurers are beginning to offer AI-related exclusions or standalone AI coverage. Lawyers should review these carefully and work with brokers to identify and fill coverage gaps.

Conclusion

AI is becoming a routine part of legal practice, but it introduces new malpractice and insurance risks. Law firms and legal professionals need to implement policies, verify AI outputs, protect client confidentiality, and maintain proper oversight. Understanding how AI affects insurance coverage is equally important to safeguard the practice from unexpected exposure.

For those looking to deepen their knowledge of AI and its practical applications in legal and insurance sectors, exploring specialized training courses may be valuable. Check out resources like Complete AI Training for up-to-date AI courses tailored to professional needs.

