The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
How Deepfakes Are Created
Generative AI models produce highly realistic fake media by training on real images, videos, or audio of a target individual. The main architectures are generative adversarial networks (GANs) and autoencoders. A GAN pits a generator that creates synthetic images against a discriminator that tries to distinguish fakes from real examples; both networks improve through the competition. Autoencoder-based face swapping encodes a target's face and decodes it onto a source video.
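To make the adversarial dynamic concrete, the sketch below shows a minimal, hypothetical GAN training loop in PyTorch: a generator maps random noise to synthetic samples, a discriminator scores real and generated batches, and each network is updated against the other. The layer sizes and the random stand-in for "real" images are assumptions chosen for illustration, not a working deepfake pipeline.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch: a generator and a discriminator trained in competition.
# Dimensions are illustrative placeholders; real face-swap models are far larger.
LATENT, IMG = 64, 32 * 32             # noise dimension, flattened "image" size

generator = nn.Sequential(             # maps random noise -> synthetic image
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(         # scores an image as real (1) or fake (0)
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_batch = torch.rand(16, IMG)       # stand-in for a batch of real training images

for step in range(100):
    # Discriminator update: learn to separate real images from generated ones.
    fake_batch = generator(torch.randn(16, LATENT)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(16, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: produce samples the discriminator labels as real.
    fake_batch = generator(torch.randn(16, LATENT))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Production face-swapping tools wrap much larger convolutional networks and face-alignment preprocessing around this same kind of training loop, but the generator-versus-discriminator structure is the core idea.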
Creators often use accessible open-source tools like DeepFaceLab and FaceSwap, which dominate video face-swapping; DeepFaceLab alone reportedly accounts for over 95% of known deepfake videos. Voice-cloning tools can mimic a speaker from minutes of audio, and commercial platforms such as Synthesia produce text-to-video avatars that have been misused in disinformation. Even mobile apps like FaceApp and Zao allow basic face swaps within minutes. Advances in these models continue to make deepfakes cheaper and easier to produce.
Deepfake algorithms train on large, varied datasets to enhance realism, and post-processing steps such as color adjustment and lip-sync refinement further improve believability. Technical defenses fall into two categories: detection, which looks for inconsistencies such as unnatural blinking or audio artifacts, and authentication, which embeds provenance markers such as invisible watermarks or cryptographically signed metadata. The EU AI Act will require major AI providers to embed machine-readable watermarks in synthetic media. Yet detection remains an ongoing challenge: sophisticated deepfakes can evade identification, and labels alone do not stop misinformation from spreading.
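As a rough illustration of the authentication idea (not any specific standard such as C2PA), the sketch below binds provenance metadata to a media file's exact bytes with an HMAC tag and verifies the tag later; any edit to the content or the metadata invalidates it. The field names, key handling, and sample values are assumptions made for the example.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-signing-key"   # placeholder key material

def sign_media(media_bytes: bytes, metadata: dict) -> str:
    """Bind provenance metadata to the exact media bytes with an HMAC tag."""
    payload = hashlib.sha256(media_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, metadata: dict, tag: str) -> bool:
    """Recompute the tag; any change to the pixels or the metadata fails verification."""
    expected = sign_media(media_bytes, metadata)
    return hmac.compare_digest(expected, tag)

# Hypothetical usage: a provider labels a clip as AI-generated at creation time.
clip = b"...raw video bytes..."
meta = {"generator": "example-model", "ai_generated": True, "created": "2024-05-01"}
tag = sign_media(clip, meta)

print(verify_media(clip, meta, tag))                  # True: content and label intact
print(verify_media(clip + b"tampered", meta, tag))    # False: bytes were altered
```

Real provenance schemes generally use public-key signatures rather than a shared secret, so anyone can verify a label without being able to forge one; the sketch only shows the binding between content, metadata, and signature.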
Deepfakes in Recent Elections: Examples
Deepfakes have already influenced elections worldwide. During the 2024 U.S. primaries, a robocall mimicking President Biden's voice urged Democrats not to vote in New Hampshire; the caller was fined $6 million and indicted under telemarketing laws, which applied regardless of AI use. Former President Trump posted AI-generated images suggesting pop singer Taylor Swift had endorsed him, sparking a media uproar. Elon Musk's X platform hosted AI-generated clips, including a parody ad featuring an AI clone of Vice President Harris's voice.
Internationally, Indonesia saw a deepfake video of the late President Suharto endorsing a candidate who later won the presidency. In Bangladesh, a viral deepfake superimposed an opposition leader’s face on a bikini-clad body to discredit her in a conservative society. Moldova’s President Maia Sandu was targeted by a deepfake showing a false resignation and endorsement of a Russian-friendly party. Taiwan faced synthetic videos of U.S. politicians making foreign-policy statements ahead of elections. Slovakia experienced AI-generated audio accusing a liberal leader of plotting vote-rigging and beer price hikes just before elections.
These cases illustrate how deepfakes aim to undermine candidates or confuse voters across diverse political contexts. Notably, most viral "deepfakes" were shared openly as memes or obvious fabrications rather than as subtle deceptions. Truly undetectable AI deepfakes remain rare; many falsehoods are cheaply doctored "cheapfakes" or AI-generated memes circulated by partisans. Even these unsophisticated fakes can sway opinion: studies have found that false presidential ads influenced voter attitudes in swing states. The trend demands serious attention from voters and regulators alike.
U.S. Legal Framework and Accountability
The United States currently lacks a comprehensive federal law specifically addressing deepfakes in election misinformation. Existing statutes cover impersonation of officials, disclaimer requirements for electioneering communications, and criminal electioneering offenses. The New Hampshire robocall case, for example, leveraged the Telephone Consumer Protection Act and telemarketing fraud laws. Voice impersonation could also breach false advertising or unlawful corporate communication laws, but these statutes were not written with AI in mind and often do not fit neatly.
Deceptive deepfake claims without clear individual victims fall outside the reach of defamation or privacy torts. Laws against voter intimidation usually target threats or coercion, leaving gaps for false information about voting logistics or endorsements. Courts and agencies have occasionally applied broad fraud statutes or laws against interference with voting rights, and the Department of Justice has charged individuals under fraud laws for attempts to manipulate votes.
The Federal Election Commission (FEC) issued an advisory opinion in April 2024 limiting the use of falsified media in non-candidate electioneering communications, which could outlaw paid political ads that use manipulated images or audio of candidates. The Federal Trade Commission (FTC) and DOJ have also signaled potential liability for commercial deepfakes that impersonate voters en masse and for foreign-funded electioneering.
U.S. Legislation and Proposals
New federal proposals aim to fill legal gaps. The DEEPFAKES Accountability Act (H.R.5586) would require political ads with manipulated media to carry clear disclaimers and increase penalties for false election videos or audio. Supporters argue this creates uniform rules for campaigns at all levels. The Brennan Center advocates for transparency requirements targeting deceptive deepfakes in paid ads, while protecting parody and news coverage.
At the state level, more than 20 states have enacted election deepfake laws. Florida and California forbid distributing falsified media intended to deceive voters, though Florida exempts parody. Texas allows candidates to sue or to have candidacies revoked over deepfake violations. However, courts have struck down overly broad laws, and First Amendment concerns remain significant: laws restricting political speech must be narrowly tailored. Texas and Virginia statutes face legal challenges, and major platforms have sued to block California's law as unconstitutional. So far, most litigation has involved defamation or intellectual property rather than election laws.
Policy Recommendations: Balancing Integrity and Speech
Experts recommend a multi-faceted approach emphasizing transparency and disclosure. Clear labels or digital watermarks on AI-synthesized political content help campaigns and platforms own their use of AI and alert audiences to treat such content skeptically. Outright bans on all deepfakes risk violating free speech, but narrowly targeted restrictions on harmful uses, such as automated calls impersonating voters or false claims about voting procedures, may be justified. Florida's penalties for voter suppression through misuse of recordings offer a relevant example.
Liability should focus on intent to mislead rather than mere content creation. Both U.S. proposals and EU law condition penalties on demonstrable deception. Technical measures like watermarking and open-source detection tools, supported by government research, can assist fact-checkers and social platforms. Publicly available datasets improve AI’s ability to spot fakes. International cooperation on information sharing and rapid response is critical, with groups like the G7 and APEC committing to combat election interference via AI.
Ultimately, voter education and a strong independent press remain crucial defenses against deepfake-driven misinformation. Legal penalties help deter the worst actors, but resilience depends on an informed public able to question sensational media. As one expert noted, the key question is who will wield the first effective deepfake in elections. Policies must deter malicious use without restricting innovation or satire.