Pro-AI attitudes can increase susceptibility to misleading AI advice, study finds
A new study suggests that people who feel positively about AI are more likely to be misled by it, at least when judging whether faces are real or synthetic. The work, by researchers from Lancaster University, Cognitive Consultants International (CCI-HQ), and the UK Defence Science and Technology Laboratory, tested how guidance labeled as coming from an "AI" system or from "human experts" influenced those decisions.
Nearly 300 participants judged 80 faces (half real, half synthetic). Before each decision, they received short text guidance predicting "real" or "synthetic," attributed either to an AI system or to human experts. The twist: guidance accuracy was fixed at 50% without participants' knowledge.
Key finding
Participants with more positive attitudes toward AI showed worse discrimination between real and synthetic faces, but only when the guidance source was labeled "AI." In other words, optimism about AI correlated with overreliance on AI-labeled cues, even though those cues were as likely to be wrong as right.
The researchers also measured general trust in other people (Human Trust Scale) and attitudes toward AI (General Attitudes towards Artificial Intelligence Scale, GAAIS). The bias effect was tied specifically to pro-AI attitudes and appeared only in the AI-guidance condition.
Why this matters for science and research workflows
Decision support tools can preload your judgment. Labeling a cue as "AI" can act as a cognitive shortcut, especially for users inclined to view AI favorably. That shortcut can reduce sensitivity to ground truth when the tool's reliability is variable or unclear.
If your lab uses AI-assisted screening (image triage, anomaly flags, identity checks), you may see similar trust-calibration issues. The result is an illusion of certainty: faster decisions that feel confident, but with no net accuracy gain when reliability is unknown or mixed.
What the study did
- Participants: ~300 adults.
- Task: Judge 80 faces (40 real, 40 AI-generated) as "real" or "synthetic."
- Guidance: Short text, attributed to either "AI" or "human experts," presented before each judgment; true half the time.
- Measures: Human Trust Scale and GAAIS to profile trust and AI attitudes.
Participants were unaware that both the real/synthetic mix and the correct/incorrect guidance were controlled.
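For readers who want to reproduce the setup conceptually, here is a minimal Python sketch of the trial structure described above. It is not the authors' code: the function and field names are hypothetical, and it assumes for simplicity that the "AI" vs. "human experts" attribution varies trial by trial, which the study may have handled differently. What it does preserve is the key constraint that guidance is correct on exactly half of the trials.

```python
import random

def build_trials(n_faces=80, seed=0):
    """Hypothetical reconstruction of the trial structure (not the authors' materials)."""
    rng = random.Random(seed)

    # Half real, half synthetic faces, in random order.
    truths = ["real"] * (n_faces // 2) + ["synthetic"] * (n_faces // 2)
    rng.shuffle(truths)

    # Guidance is correct on exactly half of the trials.
    correct_flags = [True] * (n_faces // 2) + [False] * (n_faces // 2)
    rng.shuffle(correct_flags)

    trials = []
    for truth, is_correct in zip(truths, correct_flags):
        guidance = truth if is_correct else ("synthetic" if truth == "real" else "real")
        # Assumption: source attribution varies per trial; the study may have fixed it per participant.
        source = rng.choice(["AI", "human experts"])
        trials.append({"truth": truth, "guidance": guidance, "source": source})
    return trials

if __name__ == "__main__":
    print(build_trials()[0])
```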
Practical steps you can apply now
- Remove source labels during testing: Evaluate guidance quality blind to whether it came from "AI" or "human." Add labels only after you've quantified reliability.
- Report reliability transparently: For any AI cue, display current precision/recall with confidence intervals and sample size, and keep these numbers updated in production (see the reporting sketch after this list).
- Force independent first judgments: Capture a user's decision before showing AI guidance. Then show the cue and allow a revision. Log deltas (a logging sketch follows this list).
- Throttle influence: Visually down-weight low-confidence AI predictions; hide them when below a pre-registered threshold.
- Counter-bias training: Include modules that surface common AI-induced biases (automation bias, confirmation bias) and rehearse debiasing checklists.
- Audit by attitude strata: Segment performance by users' AI attitude scores to identify pockets of overreliance in your team (see the stratified-audit sketch below).
- Use calibrated ensembles: Where feasible, combine multiple independent detectors and expose disagreement explicitly.
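To make the reliability-reporting step concrete, here is a minimal sketch (field names are assumptions, not tied to any particular tool) that computes precision and recall for an AI cue together with Wilson score intervals and the sample sizes behind them, so users see uncertainty rather than a bare percentage.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (max(0.0, centre - half), min(1.0, centre + half))

def reliability_report(tp, fp, fn):
    """Precision/recall for an AI cue, with intervals and the sample sizes behind them."""
    flagged, positives = tp + fp, tp + fn
    return {
        "precision": tp / flagged if flagged else None,
        "precision_95ci": wilson_interval(tp, flagged),
        "recall": tp / positives if positives else None,
        "recall_95ci": wilson_interval(tp, positives),
        "n_flagged": flagged,
        "n_positives": positives,
    }

print(reliability_report(tp=42, fp=8, fn=6))
```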
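The independent-first-judgment step can be logged with something as simple as the sketch below. The record structure is hypothetical; the point is to capture the answer before the cue is shown, the cue itself, and the revised answer, so flips can be classified later.

```python
from dataclasses import dataclass, asdict

@dataclass
class JudgmentLog:
    """Hypothetical log record for one decision (not from the study)."""
    item_id: str
    truth: str           # ground-truth label, where known at audit time
    first_answer: str    # captured before any AI cue is displayed
    ai_cue: str          # the guidance shown afterwards
    revised_answer: str  # the answer after seeing the cue

    def delta(self) -> str:
        """Classify what the cue did to this decision."""
        if self.first_answer == self.revised_answer:
            return "unchanged"
        return "flipped_to_correct" if self.revised_answer == self.truth else "flipped_to_incorrect"

record = JudgmentLog("face_017", truth="synthetic",
                     first_answer="synthetic", ai_cue="real", revised_answer="real")
print({**asdict(record), "delta": record.delta()})
```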
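For the attitude-strata audit, one possible approach (a sketch, assuming you record an attitude score such as GAAIS per user and a correctness flag per decision) is to bin users by score and compare accuracy with and without an AI-labeled cue in each bin.

```python
from collections import defaultdict
from statistics import mean

def stratified_accuracy(records, n_bins=3):
    """records: dicts with 'attitude_score', 'ai_cue_shown' (bool) and 'correct' (bool)."""
    scores = sorted(r["attitude_score"] for r in records)
    # Approximate n-tile cutoffs from the observed scores (tertiles by default).
    cutoffs = [scores[len(scores) * i // n_bins] for i in range(1, n_bins)]

    def attitude_bin(score):
        return sum(score >= c for c in cutoffs)  # 0 = least pro-AI, n_bins - 1 = most

    groups = defaultdict(list)
    for r in records:
        groups[(attitude_bin(r["attitude_score"]), r["ai_cue_shown"])].append(r["correct"])
    # Mean accuracy per (attitude bin, cue shown) cell.
    return {key: mean(vals) for key, vals in sorted(groups.items())}

demo = [
    {"attitude_score": 3.8, "ai_cue_shown": True,  "correct": False},
    {"attitude_score": 3.8, "ai_cue_shown": False, "correct": True},
    {"attitude_score": 1.9, "ai_cue_shown": True,  "correct": True},
]
print(stratified_accuracy(demo, n_bins=2))
```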
Method notes and caveats
- Domain-specific: The task was face authenticity. Effects may differ in text, audio, or scientific image analysis.
- Artificial constraint: Guidance accuracy was exactly 50%. Real systems vary; still, the core risk of overweighting AI-labeled cues generalizes.
- Measurement: Attitudes were captured via validated scales (e.g., GAAIS), which aids interpretability but doesn't replace direct behavioral auditing in your context.
- Stimuli: Real faces were drawn from the Flickr-Faces-HQ (FFHQ) dataset by NVIDIA (CC BY-NC-SA 4.0).
What the authors emphasize
The team cautions that AI decision aids can bias human judgment in subtle ways and may impair decision quality if users treat AI labels as a shortcut for truth. The effect was strongest among participants who already viewed AI positively, underscoring the need for trust calibration: enthusiasm without verification is a liability.
For teams building or deploying AI decision support
- Publish a model card with known failure modes, reliability by subgroup, and guidance on safe use.
- Include a "no-influence" baseline in every A/B test to quantify net value beyond placebo guidance.
- Track influence metrics: How often does AI advice flip a correct answer to an incorrect one (and vice versa)? Optimize for net benefit, not just adoption (a short sketch follows this list).
- Institutionalize second looks: For high-stakes calls, require a human peer review that is blind to the initial AI cue.
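As an illustration of the influence metrics above (the tuple format is an assumption, not a prescribed schema), the following computes how often advice flips a correct answer to an incorrect one and vice versa, plus net benefit per 100 decisions.

```python
def influence_metrics(pairs):
    """pairs: iterable of (answer_before_advice, answer_after_advice, truth) tuples."""
    harmed = helped = total = 0
    for before, after, truth in pairs:
        total += 1
        if before == truth and after != truth:
            harmed += 1   # advice flipped a correct answer to an incorrect one
        elif before != truth and after == truth:
            helped += 1   # advice corrected an initially wrong answer
    if total == 0:
        return {}
    return {
        "flip_to_incorrect_rate": harmed / total,
        "flip_to_correct_rate": helped / total,
        "net_benefit_per_100_decisions": 100 * (helped - harmed) / total,
    }

print(influence_metrics([
    ("real", "real", "real"),        # unchanged
    ("real", "synthetic", "real"),   # harmed by advice
    ("synthetic", "real", "real"),   # helped by advice
]))
```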
Links and further reading
- Journal venue: Scientific Reports (Nature Research).
Institutions involved: Lancaster University; Cognitive Consultants International (CCI-HQ); UK Defence Science and Technology Laboratory.