Court Allows AI-Generated Victim to Testify at Sentencing. Legal Questions Follow.
In 2025, a court admitted an AI-generated video of a murder victim appearing to speak at the sentencing of the person convicted of killing him. The victim did not testify. The statement was not subject to cross-examination. Family members created the video, and the court treated it as evidence of victim impact. The case is now on appeal, meaning this practice remains legally unsettled.
The ruling matters because it opens a path for similar use in future cases. What happens next in appellate courts will shape whether this becomes routine in American sentencing.
How AI Changes Victim Impact Statements
Victim impact statements have had the Supreme Court's constitutional blessing since the 1991 decision in Payne v. Tennessee, which held that sentencing courts may consider them. Family members describe their loss and how a crime affected them. A judge hears their words and considers that information before sentencing.
AI-generated statements work differently. Instead of a family member describing grief, the victim appears to speak for themselves. The voice, tone, and demeanor are constructed. The victim is, in effect, optimized for emotional impact.
That shift from description to simulation is fundamental. A human statement conveys loss. An AI reconstruction creates the illusion that the person has returned to address the court.
The Racial Dimension
The American criminal system has never treated all victims equally. Research shows that crimes involving white victims result in more prosecutions and harsher sentences. Black and Brown victims receive less attention in courtrooms and public narratives.
Race operates through perception. Courts respond to how people are seen: through skin tone, facial features, names, and accents. Studies show that defendants perceived as more non-white receive harsher punishment, even when controlling for other factors.
AI-generated victim statements create an asymmetry. On one side stands a reconstructed victim whose humanity has been shaped and amplified. On the other stands a living defendant filtered through racialized perception and unfiltered bias.
The question is whether judges would respond the same way if the racial identities were reversed. No individual case proves bias. But decades of research show how the system responds: empathy is not distributed evenly. It follows patterns. Those patterns are racial.
Why AI Amplifies Rather Than Corrects
AI does not enter the legal system as a corrective to bias. It enters as an amplifier.
Families with more resources will produce higher-quality simulations. Victims who fit dominant narratives of innocence will be more easily humanized through AI. Cases involving those victims will carry greater emotional weight in sentencing.
Defense attorneys cannot cross-examine a simulation. They cannot interrogate a constructed voice or disentangle authentic grief from engineered narrative. What enters the courtroom is an uncontestable emotional artifact.
The result is a new layer of inequality, one harder to see because it operates through emotion rather than explicit rules.
The Constitutional Problem
Due process requires fairness in sentencing. At some point, emotional influence becomes undue prejudice. But courts have struggled to draw that line, especially when emotion is framed as legitimate expression of harm.
AI complicates that struggle. Emotional expression becomes more immersive and more persuasive. It becomes harder to challenge.
The appellate courts deciding this case will set the trajectory for future norms. Early decisions in moments like this tend to become foundations for what comes next.
What's at Stake
In a system where punishment is already shaped by race, introducing technology that amplifies perception is not neutral. It is a choice with consequences.
Technology does not operate outside bias. It operates through bias, more efficiently, more persuasively, and with the added authority that technology carries.
If courts fail to confront that reality now, they will not eliminate racial disparities in sentencing. They will encode them.
For legal professionals, understanding how AI interacts with bias in courtrooms is essential. Consider exploring AI for Legal professionals or the AI Learning Path for Paralegals to deepen your knowledge of how AI tools operate within legal systems.
Key Cases and Resources
- Payne v. Tennessee, 501 U.S. 808 (1991) - Established victim impact statements in sentencing
- McCleskey v. Kemp, 481 U.S. 279 (1987) - Addresses racial disparities in capital sentencing
- Death Penalty Information Center: Race and the Death Penalty - Data on sentencing disparities
- ProPublica: Machine Bias (2016) - Examines algorithmic bias in criminal sentencing tools