AI Outpersuades Humans: What UBC Found and How to Stay Safe

UBC research finds GPT-4 sways lifestyle choices more than humans. Scale, tone, and concrete tips boost influence, raising misuse risks and calls for guardrails.

Categorized in: AI News, Science and Research
Published on: Oct 02, 2025

AI Persuasion Beats Humans: What the Data Says and How to Respond

Large language models are proving more persuasive than humans. New research from UBC, led by Dr. Vered Shwartz, shows that systems like GPT-4 can sway lifestyle choices more effectively than people. Faster output, a broader vocabulary, and instant access to resources make the difference, and they raise real risks of misuse.

Why this matters

LLMs already draft content that influences decisions across art, marketing, and news. Scale is the multiplier: one person with an AI can push thousands of tailored messages in minutes. The debate over whether AI will be used for persuasion is over; the focus now is on guardrails and accountability.

Inside the study

Researchers asked 33 participants to consider three lifestyle decisions: going vegan, buying an electric car, or attending graduate school. Each participant chatted with either a human persuader or GPT-4. Both received basic persuasion tips, and the AI was instructed not to disclose that it was an AI. Participants reported their likelihood of changing before and after the chat.

GPT-4 was more persuasive across all topics, with the strongest effects on veganism and attending graduate school. Human persuaders had an edge in asking probing questions to learn about the participant.

What makes AI persuasive

  • Volume and variety: GPT-4 wrote about eight sentences for every two from humans, stacking arguments and angles.
  • Concrete help: It offered practical next steps, such as specific vegan brands, programs to consider, or application tips.
  • Language cues: It used longer words more often (e.g., longevity, investment), which read as authoritative.
  • Tone: More agreement, more pleasantries, and conversations people described as more pleasant.

Risks and safeguards

As AI gets better at persuasion, detecting it will get harder. AI can still hallucinate, and the AI-generated summary at the top of a search page can be wrong. Education and source checking are now basic safety steps for any informed user.

  • AI literacy for teams and students: how models are trained, where they fail, and how to verify claims.
  • Critical review: if something looks too good or too bad to be true, verify before you share or act.
  • Source hygiene: prioritize known, credible outlets; track provenance when possible.
  • Built-in safeguards: warning systems if users write harmful or suicidal text, plus clear escalation paths (see the sketch after this list).
  • Better guardrails before monetization: test for misuse and falsehoods at scale, then ship.
  • Diversify approaches: explore methods beyond generative models to reduce single-point failure modes.
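
To make the warning-system item concrete, here is a minimal sketch in Python of a harmful-language check with an escalation path. The phrase lists, categories, and actions are hypothetical placeholders; a real deployment would rely on trained classifiers, human review, and clinically vetted crisis resources rather than keyword matching.

```python
# Minimal sketch of a harmful-language warning check with an escalation path.
# Phrase lists and actions are illustrative only, not a production safety system.
from dataclasses import dataclass

SELF_HARM_PHRASES = {"want to die", "kill myself", "end my life"}   # hypothetical
HARM_PHRASES = {"hurt them", "make them pay"}                       # hypothetical

@dataclass
class SafetyResult:
    flagged: bool
    category: str   # "self_harm", "harm", or "none"
    action: str     # "show_crisis_resources", "route_to_human", or "allow"

def check_message(text: str) -> SafetyResult:
    lowered = text.lower()
    if any(p in lowered for p in SELF_HARM_PHRASES):
        # Surface crisis resources immediately and hand off to a human reviewer.
        return SafetyResult(True, "self_harm", "show_crisis_resources")
    if any(p in lowered for p in HARM_PHRASES):
        return SafetyResult(True, "harm", "route_to_human")
    return SafetyResult(False, "none", "allow")

if __name__ == "__main__":
    print(check_message("Some days I just want to end my life."))
    # SafetyResult(flagged=True, category='self_harm', action='show_crisis_resources')
```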

For governance frameworks, see the NIST AI Risk Management Framework and the EU AI Act overview.

Action items for research and product teams

  • Define approved use cases for persuasive AI; block high-risk scenarios (health, finance, political targeting) unless there is explicit oversight.
  • Add friction for mass outreach: rate limits, anomaly detection, and audits for scripted persuasion (a minimal sketch follows this list).
  • Make disclosure the default when AI drafts content; log who saw what, when, and why.
  • Red-team for manipulation, hallucinations, and synthetic social proof; publish findings.
  • Integrate fact-checking and citation prompts; prefer retrieval with trusted sources.
  • Implement risk reviews before feature launches; include social scientists and ethicists.
  • Monitor for harmful language and provide crisis resources or handoffs where appropriate.
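
As a rough illustration of the rate-limit and disclosure-logging items above, the sketch below keeps a sliding-window count of AI-drafted messages per sender and records who saw which message, when, and why. The hourly threshold, field names, and in-memory storage are assumptions made for the example, not a recommended production design.

```python
# Minimal sketch: per-sender rate limit on AI-drafted outreach plus an audit log.
# Threshold, field names, and in-memory storage are hypothetical placeholders.
import time
from collections import defaultdict, deque

MAX_MESSAGES_PER_HOUR = 50          # assumed policy threshold
WINDOW_SECONDS = 3600

_sent: dict = defaultdict(deque)    # sender_id -> timestamps of recent sends
audit_log: list = []                # in-memory stand-in for durable storage

def allow_send(sender_id: str) -> bool:
    """Sliding-window rate limit: block senders who exceed the hourly cap."""
    now = time.time()
    window = _sent[sender_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_MESSAGES_PER_HOUR:
        return False
    window.append(now)
    return True

def record_disclosure(recipient_id: str, message_id: str, purpose: str) -> None:
    """Log who saw what, when, and why; the message itself carries an AI disclosure."""
    audit_log.append({
        "recipient": recipient_id,
        "message": message_id,
        "purpose": purpose,
        "timestamp": time.time(),
        "ai_disclosed": True,
    })

if __name__ == "__main__":
    if allow_send("campaign-42"):
        record_disclosure("user-7", "msg-001", "product_update")
    print(audit_log[-1])
```

A simple sliding window is enough to flag bursts for audit; anomaly detection would sit on top of the same log as a separate layer.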

If you're building AI literacy programs, explore practical training options here: AI education resources.

Contact: UBC Public Affairs - alex.walls@ubc.ca