Lawyers Have Always Been the Ones to Ask the Hard Questions
Lawyers are naturally inclined to ask tough questions, and this moment demands exactly that. Don’t let the label “AI-powered” end the discussion. Treat it as the starting point for proper due diligence.
Last year, the SEC fined two investment advisers a combined $400,000 for making misleading claims about their use of artificial intelligence. The penalties weren’t headline-grabbing, but the message was clear: overstating AI capabilities carries real legal risk.
Language around AI often gets ahead of the actual technology. Terms like “intelligent,” “autonomous,” or “human-level” are frequently used without explanation—or sometimes without any real product behind them. This exaggeration is what regulators call AI washing. For legal, compliance, and other fields where accuracy is critical, AI washing is not just a tech issue; it’s a risk management problem.
What Is “AI Washing,” and Why Does It Matter?
AI washing is the artificial-intelligence counterpart of greenwashing: inflated or false claims about how much AI a product or service actually uses. In some industries that might amount to harmless marketing hype; in law, finance, healthcare, and defense, where trust and accountability are essential, the consequences are far more serious.
In 2024, securities class action lawsuits related to AI misstatements doubled compared to the previous year, reflecting growing scrutiny of companies’ AI claims from both regulators and investors.
When firms claim their tools are “AI-powered” without transparency or evidence, they expose clients, investors, and users to risk. Decision-makers may trust systems they don’t fully understand or assume capabilities that don’t exist. In regulated industries, this can lead to liability and erode trust.
A Call for Clarity and Accountability
AI systems, especially those based on machine learning, can be powerful and useful. A 2024 Thomson Reuters survey found that 63% of lawyers have used AI tools in their practice, with 12% using them regularly. These tools mainly assist with summarizing case law and drafting documents.
However, AI tools are not magic. They depend on large, often sensitive datasets and can contain biases or errors. They perform best within clearly defined limits.
For legal professionals evaluating AI or advising clients, it’s critical to ask:
- What exactly does the system do?
- How is it trained?
- What are its failure rates?
- Can its outputs be independently verified?
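The last two questions in particular lend themselves to independent testing. As a purely illustrative sketch (assuming a small hand-labeled sample, with a hypothetical `vendor_redact` call standing in for whatever interface a given tool actually exposes), a spot-check like the following turns “What are its failure rates?” into a number you can weigh against the vendor’s claims:

```python
# Hypothetical spot-check of a redaction tool against a hand-labeled sample.
# `vendor_redact` stands in for whatever API the tool actually exposes;
# the labeled (start, end) character offsets are assumed to exist.

def evaluate_redaction(documents, vendor_redact):
    """Each document is a dict with 'text' and 'sensitive_spans',
    the hand-labeled character offsets that must be redacted."""
    missed = 0            # sensitive spans the tool left untouched
    total_sensitive = 0
    for doc in documents:
        redacted_text = vendor_redact(doc["text"])
        for start, end in doc["sensitive_spans"]:
            total_sensitive += 1
            # Count a miss if the original sensitive substring still
            # appears verbatim in the tool's output.
            if doc["text"][start:end] in redacted_text:
                missed += 1
    failure_rate = missed / total_sensitive if total_sensitive else 0.0
    return {"labeled_spans": total_sensitive,
            "missed": missed,
            "failure_rate": failure_rate}
```

Even a modest labeled sample makes the conversation with a vendor concrete: a claimed capability either holds up on your own documents or it doesn’t.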
For example, if a platform claims to use AI to redact sensitive information, decision-makers need to understand the difference between simple pattern-matching and advanced machine learning—and the risks each approach brings. A tool that works well on standard templates may fail with unstructured or multilingual documents. Without this clarity, users risk exposing sensitive data.
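To make that distinction concrete, here is a minimal, illustrative sketch of the pattern-matching approach (plain Python, not any particular product): it redacts exactly what its patterns anticipate and nothing more. Machine learning approaches, typically built on trained named-entity recognition, can generalize beyond fixed patterns to names and free-text identifiers, but they introduce probabilistic errors of their own, which is why the verification questions above still apply.

```python
import re

# A purely pattern-matching redactor: it only catches what the patterns
# anticipate (here, US-style SSNs and email addresses) and silently misses
# names, addresses, or identifiers written in free text or another language.
# The patterns are illustrative, not exhaustive.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def pattern_redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(pattern_redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> "Contact [EMAIL REDACTED], SSN [SSN REDACTED]."
# But free-text personal data passes through untouched:
print(pattern_redact("Jane Doe lives at 42 Rue de la Paix."))
```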
Being upfront about what an AI system cannot do is as important as explaining what it can. This transparency builds credibility and helps users stay informed, identify risks, and maintain quality control.
Building Trust in an Oversaturated Market
The market often rewards flashy claims over real performance. But lasting value—especially in legal—comes from tools that are explainable, auditable, and precise.
Honest conversations about what AI can truly do are essential. AI doesn’t need to be labeled “revolutionary” to be valuable. Often, solving a specific, high-stakes problem consistently is more effective than attempting to automate broad human decisions.
Legal professionals understand this instinctively: precision matters more than hype.
The Legal Industry Can Set the Tone
Lawyers have always been the ones to ask the hard questions, and now is the time to harness that skill. Don’t accept “AI-powered” as the final word. Use it as the beginning of thorough due diligence.
Innovation should continue—but with clarity. If a tool uses machine learning, it must come with clear documentation, explainability, and a defined scope of what it can and cannot do.
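A defined scope need not be elaborate. As a hypothetical sketch (the field names and wording are illustrative, not an industry standard), even a short structured statement shipped with the tool answers most of the questions raised earlier:

```python
# Hypothetical "scope card" a vendor might publish alongside an ML-based
# redaction tool. Field names and values are illustrative only.
REDACTION_TOOL_SCOPE = {
    "intended_use": "Redact personal data in English-language contracts",
    "approach": "Named-entity recognition combined with fixed patterns",
    "training_data": "Licensed contract corpus; no client data",
    "known_limitations": [
        "Not evaluated on scanned or handwritten documents",
        "Accuracy degrades on non-English and heavily unstructured text",
    ],
    "failure_rates": "Per-entity miss rate published for a held-out test set",
    "verification": "Outputs exportable for independent human spot-checks",
}
```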
For legal professionals interested in expanding their knowledge on AI tools and their applications in law, resources like Complete AI Training’s courses for legal professionals offer practical, focused education.