Federal Evidence Committee Blocks Deepfake Amendment as Courts Seek Guidance
The U.S. court system's advisory committee on evidence rules declined Thursday to advance a proposed amendment that would prevent AI-generated deepfakes from being admitted as trial evidence. The decision leaves judges without formal guidance on a problem they increasingly face.
The committee, which shapes the Federal Rules of Evidence, stopped short of endorsing the measure. No timeline exists for reconsidering the proposal.
Judges across the country have flagged deepfake evidence as an emerging problem. Courts lack clear standards for determining when synthetic media should be excluded or when its authenticity must be verified before admission.
The current evidence rules don't explicitly address AI-generated content. Lawyers and judges must work within existing frameworks designed for photographs, recordings, and documents, tools that predate modern synthesis technology.
Without federal rulemaking, individual judges make inconsistent decisions about deepfake admissibility. Some courts have begun requiring expert testimony to authenticate videos or images, while others have excluded synthetic media outright when doubts about its creation method undermined confidence in its accuracy.
The committee's hesitation reflects deeper disagreements about how to regulate deepfake evidence. Some members worry that rules written today could become obsolete as technology advances. Others argue that waiting for perfect clarity means leaving courts vulnerable to fabricated evidence in the meantime.
Legal professionals handling evidence disputes should expect this issue to remain unsettled at the federal level for now. State courts and individual judges will continue developing their own approaches.