AI-Generated Evidence Could Destabilize Criminal Trials, Legal Experts Warn
The 2020 video of Ahmaud Arbery's death provided crucial evidence in murder convictions. Today, six years later, similar recordings might not carry the same weight in court. Artificial intelligence tools that generate realistic audio, video, and text have fundamentally altered how the legal system must evaluate evidence.
Duncan Levin, a former federal prosecutor and white-collar defense attorney, argues that AI threatens to undermine core assumptions built into criminal law. "AI may change not only what people think about evidence, but what they think evidence is," Levin said.
Why Criminal Cases Face Unique Risk
Civil litigation and criminal prosecution handle evidence differently. In civil cases, disputed authenticity affects weight and credibility. In criminal cases, it strikes at something deeper: the constitutional requirement that the government prove guilt beyond a reasonable doubt.
Criminal trials often rest on a narrow set of key exhibits: a recording, a video, text messages, a geolocation trail. These anchors of narrative truth carry enormous weight with juries. AI introduces the possibility that the most concrete-seeming evidence may actually be the least secure.
The problem cuts in both directions. Prosecutors may struggle to secure convictions in cases relying on digital proof. But defendants may also face skepticism when presenting authentic exculpatory evidence. The issue is not which side AI helps. It destabilizes the truth-finding function itself.
Beyond Deepfakes: The Broader Threat
Most people associate AI threats with deepfakes: synthetic audio or video. The actual danger is broader. Modern prosecutions depend on photographs, surveillance footage, text messages, emails, social media posts, screenshots, and digitally generated timelines. Each carries an implied claim: this happened, this was said, this reflects reality.
AI makes those claims easier to counterfeit and harder for laypeople to assess. The threat extends beyond wholesale fabrication. AI can alter, enhance, splice, or recontextualize evidence in subtle ways that preserve surface plausibility while changing meaning.
Even genuine evidence faces risk. Jurors who understand digital manipulation may wonder whether real exhibits are authentic. Evidence that once seemed solid may now seem contingent.
Authentication Doctrine Wasn't Built for This
Traditional authentication has never required certainty. A witness familiar with a speaker's voice could identify an audio recording. Distinctive characteristics and metadata could support admission. Courts assumed the principal risks involved ordinary tampering or misidentification.
That framework assumes the underlying exhibit connects to some real event or communication. AI attacks that distinction. A piece of evidence may look authentic, sound authentic, and fit coherently into surrounding facts while being entirely synthetic.
Voice identification exemplifies the problem. A witness may honestly recognize a speaker's voice. But that only establishes familiarity with the voiceprint, not whether the recording captures an actual conversation or an artificial simulation. The doctrine assumes voice recognition does more work than it actually can in an age of generative audio.
What "Beyond a Reasonable Doubt" Means Now
The reasonable doubt standard reflects a constitutional choice: tolerate some risk that guilty people go free to reduce the risk of wrongful conviction. That principle assumes jurors evaluate evidence in a world where doubt typically attaches to witnesses or interpretations of facts.
AI introduces a different layer of doubt. It raises the possibility that the exhibit itself (the recording, the image, the message) may not be what anyone claims. A juror may conclude a witness is sincere and still harbor reasonable doubt because the underlying exhibit seems potentially synthetic.
This distinction matters fundamentally. The question shifts from "Do I believe this witness?" to "Do I believe this evidence corresponds to anything real?" That is a more foundational inquiry. AI may sharpen the reasonable doubt standard by making visible what was once implicit: proof requires epistemic confidence in the reality of the proof itself.
Who Bears the Burden of Proof?
The government bears the burden of proof in criminal cases. If digital evidence is central to prosecution, the government should establish authenticity with greater rigor than courts have traditionally demanded.
This does not mean every case requires expert testimony or a mini-trial on digital forensics. But courts should move away from accepting thin foundations when authenticity matters. Cases turning on audio, video, or disputed digital communications may require forensic examination, clearer provenance, stronger metadata analysis, or expert testimony sufficient to assure the court that exhibits have not been materially altered.
Defendants seeking to introduce digital evidence may also face higher standards. But the asymmetry remains constitutionally important. A defendant does not carry the burden of proving innocence. The constitutional burden stays where it belongs: on the state.
Over time, the law will likely combine doctrinal and technological solutions. Courts may become more exacting under existing authentication rules. Provenance tools, cryptographic signatures, or secure capture systems may emerge as more reliable ways of showing authenticity at the point of creation.
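To make the provenance idea concrete, here is a minimal sketch of integrity verification at the point of capture. It assumes a hypothetical device key provisioned at manufacture and uses an HMAC from the Python standard library; real provenance systems such as the C2PA standard use public-key signatures and richer metadata, so this is an illustration of the concept, not an implementation of any deployed system.

```python
import hashlib
import hmac

# Assumption: a hypothetical secret key provisioned on the capture device.
DEVICE_KEY = b"example-device-key"

def sign_at_capture(media_bytes: bytes) -> str:
    """Return a hex tag binding the media bytes to the device key at creation."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_later(media_bytes: bytes, tag: str) -> bool:
    """True only if the bytes are unchanged since the tag was created."""
    expected = sign_at_capture(media_bytes)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)

original = b"frame data from a bodycam recording"
tag = sign_at_capture(original)
print(verify_later(original, tag))              # unaltered file -> True
print(verify_later(original + b" edit", tag))   # any alteration -> False
```

The point of the sketch is the asymmetry it creates: a party offering the exhibit can demonstrate it matches what the device recorded, while any post-capture edit, however subtle, breaks verification.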
The Broader Institutional Challenge
The justice system depends on public confidence that trials reach truth through lawful procedures. If jurors begin assuming any recording may be fake, any image manufactured, and any digital communication spoofed, criminal trials risk becoming exercises in generalized skepticism rather than fact-finding.
Yet this moment may force necessary change. Judges, lawyers, jurors, and investigators may become more literate about the creation, preservation, and forensic evaluation of digital evidence. Courts may speak more candidly about what they do and do not know.
AI may deepen mistrust in the short term. But it may also push the criminal legal system toward more rigorous and transparent accounts of what it means to prove something. In a system built around the principle that liberty should not be taken without very high confidence, that is ultimately healthy pressure-even if uncomfortable.
Legal professionals should consider how AI literacy applies to evidence handling and authentication. Resources on AI for Legal professionals and the AI Learning Path for Paralegals address these emerging competencies.