AI chatbots are winning political arguments by flooding people with information - much of it wrong

A large UK study says AI chatbots can shift political views; long, info-heavy replies work best and effects linger. The trade-off: about 1 in 5 claims were mostly wrong.

Categorized in: AI News Science and Research
Published on: Dec 05, 2025
AI chatbots are winning political arguments by flooding people with information - much of it wrong

AI chatbots can shift political views - and the most persuasive ones get more facts wrong

A large-scale study published in Science reports that conversational AI can move people's political opinions - and the strategies that persuade the most tend to be the least accurate.

Researchers recruited nearly 77,000 adults in the U.K. and paid them to interact with different chatbots built on models from OpenAI, Meta, and xAI. Participants stated their views on issues like taxes and immigration; then a chatbot attempted to nudge them toward the opposing side.

The result: Chatbots often succeeded, especially when they flooded users with detailed information instead of moral appeals or personalized arguments.

Key findings

  • Conversational AI was 41%-52% more persuasive than a static, 200-word AI-written message, depending on the model.
  • The effect stuck: 36%-42% of the opinion change was still measurable one month later.
  • Volume won: Long, information-dense responses beat moral framing and personalization.
  • Accuracy suffered: About 19% of chatbot claims were rated "predominantly inaccurate."
  • Newer, larger models produced less accurate persuasive claims. The paper notes that GPT-4.5's claims were less accurate on average than smaller, older OpenAI models. The study concluded before OpenAI released GPT-5.1.

What the researchers say

"Our results demonstrate the remarkable persuasive ability of conversational AI systems on political issues," said lead author Kobi Hackenburg of the University of Oxford.

The authors argue that chatbots can outmatch elite human persuaders in volume and speed, generating pages of arguments instantly. But they warn that optimizing for persuasion appears to trade off with truthfulness - a dynamic that could damage public discourse.

Why "more information" works

In back-and-forth chat, models can overwhelm users with relevant-sounding detail. That density can feel authoritative even when parts are wrong. The study's head-to-head shows it's not just content; it's the interactive format. Conversation outperformed a single, well-crafted message by a wide margin.

Risk surface

  • Misinformation at scale: If 1 in 5 claims are mostly inaccurate, high-volume persuasion multiplies errors.
  • Model trend: The most persuasive setups were the least accurate, with larger frontier models slipping further on factuality in persuasive mode.
  • Abuse potential: The paper flags scenarios where highly persuasive chatbots could promote radical ideologies or stir unrest.

Context you should factor in

  • Scope: All participants were adults in the U.K.; topics focused on British politics.
  • Comparators: The study did not pit chatbots directly against elite human persuaders.
  • Ecological validity: Outside a paid survey, many people won't sustain long political debates with bots.

Expert reactions

Shelby Grossman (Arizona State University) said the evidence suggests newer models are getting more persuasive. She noted both risks (foreign propaganda, social media division) and legitimate uses if political actors are transparent.

David Broockman (UC Berkeley) found it reassuring that the effect wasn't larger. He suggested that, in practice, competing persuasion could cancel out if both sides deploy comparable systems - while giving people more access to detailed arguments from multiple angles.

Implications for research, labs, and policy teams

  • Measure persuasion and factuality together. Add truthfulness and citation quality into eval suites for any persuasive or political use case.
  • Prefer retrieval-augmented setups with audited sources. Penalize unverifiable claims during post-training.
  • Tune for multi-objective performance. Don't optimize engagement or conversion without explicit factuality constraints.
  • Require evidence. Nudge models to provide verifiable references and expose confidence or uncertainty.
  • Throttle volume. Cap response length or apply cost to verbosity in sensitive domains to reduce "quantity over quality" effects.
  • Guardrails for political content. Clear disclosures, opt-in consent, rate limits, audit logs, and red-team tests targeting manipulative tactics.
  • Run A/Bs on formats. Compare conversational agents vs. static briefs for your domain to quantify real-world effects.

Politics and AI, beyond the lab

The study arrives as political actors experiment with AI: campaigns testing AI-generated outreach, leaders sharing AI-made media, and state-aligned operations from China and Russia pushing automated propaganda. Meanwhile, adoption climbs: 44% of U.S. adults report using tools like ChatGPT, Gemini, or Copilot sometimes or very often.

Source and further reading

Skill up on safe, evidence-driven AI

If your work touches model evaluation, policy, or RAG pipelines, a structured curriculum helps. See curated programs by role at Complete AI Training.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)
Advertisement
Stream Watch Guide