Federal Courts Grapple With AI Evidence as Maritime Law Panel Weighs Risks
Judges and lawyers are moving cautiously on artificial intelligence in courtrooms, rejecting some uses while accepting others - but clear rules remain unsettled. A panel at the 42nd River and Marine Industry Seminar in New Orleans this month illustrated the tension between AI's practical utility and its unreliability.
Judge Kurt Engelhardt of the U.S. Court of Appeals for the Fifth Circuit said his chambers bans AI outright. "Law clerks are forbidden to use it in research," he said. He compared AI content to an anonymous tip or unattributed article - useful for spotting patterns but impossible to verify at its source.
Yet AI tools are spreading through legal practice. Thomson Reuters reports its AI-based research tool, CoCounsel, has a million attorney users. Some states now require lawyers to use AI, one panelist said.
When Courts Allow AI Exhibits
Judges have admitted AI-created animations as evidence in specific cases. In one instance, a judge allowed a defense expert to use an AI animation showing where people stood during a shooting. The judge wore virtual reality goggles to view the recreation.
But courts are drawing lines. Several attorneys said they would accept AI video recreations only if creators submitted to cross-examination. Engelhardt suggested such exhibits could be treated as "demonstrative" rather than dispositive evidence - showing a scenario without proving it occurred.
One case illustrated the stakes. An attorney asked an AI chatbot to design a trial strategy with PowerPoint presentations based on photos of a defective stairway where a seaman was injured. The AI performed well but fabricated timestamps for some images.
Proposed Rules on Machine Evidence
The U.S. Judicial Conference's Advisory Committee on Evidence Rules has proposed Rule 707 to regulate machine-generated evidence introduced without expert testimony. The proposal has drawn extensive comment from judges and lawyers nationwide.
Engelhardt advocated a verification approach: humans do the work first, then use AI to check results. That method protects against the tool's core weakness - it can sound confident while being wrong.
AI's Limits in Legal Work
Lawyers emphasized that AI cannot replace human judgment. One case involved a ferry operator sued after a child's pony was lost during a storm. An AI bot was asked to value the loss: "How much is a dead pony worth to a 9-year-old?" The question itself shows why algorithms fail at legal reasoning.
Engelhardt recalled settling a school accessibility case with a phone call - no lawsuit filed, no database record. "Lawyers are thinkers and problem-solvers. AI can't do that," he said.
Some clients prohibit AI use entirely. Others restrict it to closed systems, such as enterprise deployments of Microsoft Copilot that keep queries within the organization rather than sending them to the open internet.
Liability and Sanctions
Misuse of AI in legal work carries real consequences. Disbarment remains possible for attorneys who rely on fabricated case law or manipulate AI-generated evidence.
One attorney noted that AI chatbots sometimes refuse to answer questions. When asked about oil spill response organizations, one bot said it lacked liability insurance and declined to comment - a reminder that AI systems have their own constraints.