AI persuasion now beats political ads: what the latest research means for elections
New evidence from major studies points to a clear risk: AI systems can sway voters and distort public-opinion data at scale. In multiple experiments, chatbots outperformed traditional TV and video ads at shifting candidate support, and sometimes did so while making inaccurate claims.
Two papers released on the same day, one in Nature and one in Science, show that persuasion is strongest when chatbots deliver dense, factual arguments, even though pushing for more "facts" increases hallucinations. A separate study in PNAS shows that AI can pass survey bot-detection checks at near-perfect rates, opening the door to poll manipulation.
What the studies tested
In the Nature study, 2,306 U.S. participants chatted with AI models in late August and early September 2024. The models were instructed to boost support for an assigned candidate (Harris or Trump) and to either increase turnout among supporters or reduce turnout among opponents.
The pro-Harris model shifted likely Trump voters by 3.9 percentage points toward Harris, about four times the average impact of TV ads measured in 2016 and 2020. The pro-Trump model moved likely Harris voters by 1.51 percentage points toward Trump.
Parallel experiments ran with 1,530 Canadians and 2,118 Poles ahead of their 2025 elections. There, the bots advocating for Mark Carney or Pierre Poilievre in Canada, and Rafał Trzaskowski or Karol Nawrocki in Poland, shifted preferences by up to 10 percentage points, roughly triple the U.S. effects.
Why the cross-country gap might exist
Participants in the U.S. are saturated with campaign coverage over long cycles, which likely reduces susceptibility. As one researcher put it, the more arguments and evidence you've already heard, the less you respond to new ones. Shorter campaigns and lower information volume in Canada and Poland likely amplified the effect sizes.
How the bots persuade, and where they fail
Across both papers, the most effective tactic was straightforward: present as many factual arguments as possible. But there's a catch. When pushed for more facts, the models start out accurate, then exhaust their reliable material and begin to invent details.
Accuracy wasn't symmetrical either. Bots advocating right-leaning candidates made more inaccurate claims across all three countries. Given that LLMs are trained on web data, the researchers argue the models likely reflect known online patterns where right-leaning users and elites share more inaccurate information.
Survey data is newly exposed
In the PNAS study, an AI agent passed automated survey-bot checks 99.8% of the time across 6,000 attempts. It could be instructed to corrupt polls, creating an obvious vector for information warfare. That makes many current detection methods obsolete and threatens unsupervised online research.
Implications for scientists, pollsters, and platforms
- Audit persuasion pipelines: log prompts, responses, and outcomes; measure hallucination rates as factual density increases; implement retrieval-based grounding to constrain claims (a logging sketch follows this list).
- Limit factual overreach: cap response length or require citations from verified sources; degrade gracefully when evidence runs thin rather than "filling in" (see the citation-gate sketch below).
- Label political interactions: disclose when chatbots are used for canvassing or persuasion; include provenance signals and easy reporting channels.
- Institute campaign transparency: update finance rules to declare spend on AI canvassers, data sources, and prompt templates; store public, time-stamped archives.
- Strengthen survey defenses: combine liveness checks, behavioral telemetry, honeypot items, and adversarial bot farms in validation; rotate item banks and formats (see the screening sketch below).
- Validate offline: replicate key survey findings with phone, mail, or in-person samples; use cross-mode consistency checks and adversarial stress tests.
- Reweight and reverify: apply model-based reweighting for suspected bot contamination; add post-stratification checks and manual panel audits (a decontamination sketch follows this list).
- Monitor asymmetry: track differential error rates across political directions and topics; adjust filters and thresholds based on observed drift.
- Prepare incident response: define triggers for throttling, content freezes, or increased human review during sensitive windows.
- Invest in staff training: upskill teams on prompt design, evaluation, and red-teaming so they can spot failure modes early. For structured learning, see our primer on prompt engineering.
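To make the audit item concrete, here is a minimal sketch of an audit log for a persuasion pipeline. The claim splitter and the `verify_claim` callback are assumptions for illustration, not anything from the studies; in practice you would plug in a retrieval-based fact-checker.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Callable, List

@dataclass
class TurnAudit:
    prompt: str
    response: str
    n_claims: int        # proxy for factual density
    n_unverified: int    # proxy for potential hallucinations
    timestamp: float

def split_claims(response: str) -> List[str]:
    # Naive splitter: treat each sentence as one claim. Swap in a real
    # claim-extraction step for production use.
    return [s.strip() for s in response.split(".") if s.strip()]

def audit_turn(prompt: str, response: str,
               verify_claim: Callable[[str], bool]) -> TurnAudit:
    claims = split_claims(response)
    unverified = sum(1 for c in claims if not verify_claim(c))
    return TurnAudit(prompt, response, len(claims), unverified, time.time())

def log_audit(audit: TurnAudit, path: str = "persuasion_audit.jsonl") -> None:
    # Append-only JSONL so hallucination rate vs. factual density can be
    # tracked across an entire campaign of conversations.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(audit)) + "\n")
```

With logs like these, the per-turn hallucination rate is simply n_unverified / n_claims, and you can check directly whether it climbs as factual density rises, the failure mode the papers describe.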
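For limiting factual overreach, one lightweight citation gate is a post-processing step that keeps only sentences citing an allowlisted source and degrades gracefully otherwise. The allowlist, regex, and fallback message below are illustrative assumptions, not a recommendation from the research.

```python
import re

ALLOWED_DOMAINS = {"example.gov", "example-archive.org"}  # hypothetical allowlist
URL_RE = re.compile(r"https?://([^/\s)]+)")

def gate_response(text: str) -> str:
    """Keep only sentences that cite an allowlisted source; if nothing
    survives, decline rather than let the model "fill in" details."""
    kept = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        domains = {d.lower().removeprefix("www.") for d in URL_RE.findall(sentence)}
        if domains & ALLOWED_DOMAINS:
            kept.append(sentence)
    return " ".join(kept) or "I don't have a verified source for that."
```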
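On the survey side, a first screening pass might combine a honeypot item with response-time telemetry, as sketched below. Item names and thresholds are assumptions for illustration; the 99.8% pass rate in the PNAS study is a reminder that no single check like this is sufficient on its own, which is why the list above layers several.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SurveyResponse:
    answers: Dict[str, str]        # item_id -> answer
    seconds_per_item: List[float]  # timing telemetry

HONEYPOT_ITEM = "q_attention"          # e.g., "Select 'Strongly disagree'"
HONEYPOT_ANSWER = "strongly_disagree"
MIN_SECONDS = 1.5                      # faster than plausible human reading
MAX_TIMING_CV = 0.2                    # suspiciously uniform pacing

def suspicion_flags(r: SurveyResponse) -> List[str]:
    flags = []
    if r.answers.get(HONEYPOT_ITEM) != HONEYPOT_ANSWER:
        flags.append("failed_honeypot")
    times = r.seconds_per_item
    if times and min(times) < MIN_SECONDS:
        flags.append("implausibly_fast_item")
    # Humans pace themselves unevenly; near-constant timing is a bot tell.
    if len(times) > 1:
        mean = sum(times) / len(times)
        std = (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5
        if mean > 0 and std / mean < MAX_TIMING_CV:
            flags.append("uniform_timing")
    return flags
```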
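And for reweighting under suspected contamination: if you can estimate the contamination rate c and the answer distribution of synthetic respondents (for example, from bots you seeded yourself during validation), the observed poll decomposes as observed = (1 - c) * human + c * bot, which inverts cleanly. The numbers below are made up for illustration.

```python
def decontaminate(observed: dict, bot: dict, c: float) -> dict:
    """Back out the human answer distribution from a contaminated poll,
    assuming observed = (1 - c) * human + c * bot with contamination rate c."""
    assert 0.0 <= c < 1.0
    human = {k: (observed[k] - c * bot.get(k, 0.0)) / (1.0 - c) for k in observed}
    # Clip small negatives from estimation noise, then renormalize.
    human = {k: max(v, 0.0) for k, v in human.items()}
    total = sum(human.values())
    return {k: v / total for k, v in human.items()}

# Illustrative only: a poll with 10% bot contamination tilted one way.
observed = {"candidate_a": 0.54, "candidate_b": 0.46}
bot = {"candidate_a": 0.90, "candidate_b": 0.10}
print(decontaminate(observed, bot, c=0.10))
# {'candidate_a': 0.5, 'candidate_b': 0.5} -- the bot tilt is removed
```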
What to watch next
Expect more capable agentic systems that can personalize arguments in real time, increasing both persuasion and the risk of subtle errors. Also watch the arms race between survey defenses and synthetic respondents, including multi-agent coordination to evade checks.
Regulatory clarity around disclosure, data provenance, and auditability will matter more than model bans. The practical wins are in measurement, transparency, and containment, not wishful thinking about turning the tech off.
Bottom line
AI can move votes and distort polls, sometimes more efficiently than legacy methods. The fix isn't hand-wringing; it's instrumentation, transparency, and disciplined guardrails that limit errors when models are pushed for "more facts."
Treat every political interaction with an AI system as strategic communication. Be clear about who built it, what it's optimizing for, and how you'll verify its claims before they influence real decisions.