Why Relying on ChatGPT Led Executives to Make Worse Stock Predictions Than Talking to Peers
Executives using ChatGPT to predict Nvidia’s stock price became more optimistic but less accurate. Peer discussions led to more cautious and more accurate forecasts.

Cognitive Bias Research: Executives Using Gen AI Made Worse Predictions
Nearly 300 executives and managers were asked to predict Nvidia’s stock price one month ahead after seeing recent trends. Half the group could consult ChatGPT, while the other half discussed forecasts with peers. The results revealed a surprising pattern: AI consultation led to more optimistic but less accurate predictions, while peer discussion encouraged caution and improved accuracy.
The Experiment Setup
The study took place during AI executive-education sessions between June 2024 and March 2025. Participants saw a chart of Nvidia’s recent stock price, which had been rising sharply on the strength of the company’s central role in AI technology. Each gave an initial private forecast for the stock’s price in one month.
Next, participants split into two groups:
- Peer discussion (control group): Small group conversations with no AI, mimicking traditional collaborative decision-making.
- ChatGPT consultation (treatment group): Participants could ask ChatGPT anything about Nvidia’s stock but were not allowed to talk with peers.
After these consultations, everyone submitted a revised forecast.
Key Findings
- AI increased optimism: Those using ChatGPT raised their price estimates by about $5.11 on average.
- Peers encouraged caution: The peer group lowered their forecasts by around $2.20 on average and were more likely to stick to or reduce their initial estimates.
- Accuracy suffered with AI: Both groups were overly optimistic, but the ChatGPT group’s predictions were less accurate after consultation. Peer discussions improved prediction accuracy.
- Overconfidence rose with AI: ChatGPT users became more prone to overly precise, pinpoint forecasts, a known sign of overconfidence, while peer discussions reduced that tendency (a rough sketch of how these revision, accuracy, and overprecision measures can be computed follows this list).
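The study does not spell out its exact calculations, so the Python sketch below is only illustrative: it uses made-up forecast pairs and a hypothetical realized price to show how mean revision, mean absolute error before and after consultation, and a crude pinpoint-forecast share could be compared across the two groups.

```python
# Illustrative only: made-up forecast pairs and a hypothetical realized price.
# Each tuple is (initial forecast, revised forecast) in dollars per share.
chatgpt_group = [(120.00, 128.75), (115.00, 119.50), (130.00, 134.00)]
peer_group    = [(125.00, 122.00), (118.00, 118.00), (129.00, 126.50)]

REALIZED_PRICE = 110.00  # hypothetical price one month later


def mean_revision(forecasts):
    """Average change from initial to revised forecast (positive = more optimistic)."""
    return sum(revised - initial for initial, revised in forecasts) / len(forecasts)


def mean_abs_error(values, actual):
    """Average absolute gap between a list of forecasts and the realized price."""
    return sum(abs(v - actual) for v in values) / len(values)


def pinpoint_share(forecasts):
    """Share of revised forecasts quoted to the cent rather than a round dollar,
    used here as a crude proxy for overprecision."""
    return sum(1 for _, revised in forecasts if revised != round(revised)) / len(forecasts)


for name, group in (("ChatGPT", chatgpt_group), ("Peer discussion", peer_group)):
    initial = [i for i, _ in group]
    revised = [r for _, r in group]
    print(
        f"{name:16s} "
        f"revision={mean_revision(group):+6.2f}  "
        f"MAE before={mean_abs_error(initial, REALIZED_PRICE):6.2f}  "
        f"MAE after={mean_abs_error(revised, REALIZED_PRICE):6.2f}  "
        f"pinpoint share={pinpoint_share(group):.0%}"
    )
```

With these toy numbers, the AI group's average revision is positive and its error grows after consultation, while the peer group's error shrinks, mirroring the direction of the study's findings without reproducing its actual figures.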
Why Did AI Lead to Overconfidence and Optimism?
Several factors explain why consulting ChatGPT produced these outcomes:
- Extrapolation bias: ChatGPT likely extended Nvidia’s recent upward trend without accounting for possible reversals, promoting “trend riding.”
- Authority bias: The AI’s confident, detailed responses made executives trust its optimistic forecasts more than their own judgments.
- Emotionless analysis: The AI’s advice lacked the natural caution and gut instinct that humans bring, stripping out healthy skepticism about a soaring stock price.
- Peer calibration: Group discussions introduced diverse viewpoints and social checks, leading to more conservative consensus and dampening extreme optimism.
- Illusion of knowledge: Access to the AI’s seemingly comprehensive analysis gave users a false sense of certainty, feeding overconfidence.
Practical Lessons for Executive Use of AI
These insights offer important guidance for leaders integrating AI into decision-making:
- Recognize AI biases: AI tools can skew forecasts toward optimism and inflate confidence. Always question the data sources and assumptions behind AI-generated predictions. Ask the AI to explain potential errors or risks.
- Value human discussion: Peer conversations remain essential as a reality check. Combining AI input with team debate can balance AI’s strengths with human judgment.
- Maintain critical thinking: Treat AI advice as a starting point, not a final answer. Whether from AI or colleagues, probe the basis of forecasts and challenge assumptions.
- Set clear AI guidelines: Encourage the use of AI alongside peer review and scenario analysis to avoid overconfidence. Training and protocols can help teams resist blindly accepting AI output; a prompt-level sketch of one such guardrail follows this list.
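None of this comes from the study itself, but one concrete way to operationalize these guidelines is a prompt-level guardrail that forces the model to surface its assumptions, a reversal scenario, and a range before any number is accepted. The sketch below assumes the openai Python SDK and a placeholder model name; both are illustrative, and the answer should still feed into peer discussion rather than stand alone.

```python
# Illustrative guardrail, not part of the study. Assumes the openai Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment; the model name
# is a placeholder - use whichever chat model your team has approved.
from openai import OpenAI

client = OpenAI()

GUARDRAIL_PROMPT = """You are advising on a one-month stock price forecast for {ticker}.
Before giving any number:
1. List the key assumptions behind your view and flag the most fragile one.
2. Describe one plausible scenario in which the recent trend reverses.
3. Give a low / base / high range rather than a single point estimate.
4. State what data you do NOT have (for example, prices after your training cutoff)."""


def ask_with_guardrails(ticker: str) -> str:
    """Query the model with a prompt that discourages pinpoint, trend-riding answers."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": GUARDRAIL_PROMPT.format(ticker=ticker)}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_with_guardrails("NVDA"))
```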
This study covered a single stock (Nvidia) and a one-month horizon in a controlled setting, so real-world results may vary. The ChatGPT model used also lacked access to the latest market data, which may have affected accuracy.
For leaders, the takeaway is clear: AI is a powerful tool but not a substitute for human judgment. Use it to augment, not replace, the critical conversations and skepticism that lead to better decisions.
Explore more about effectively integrating AI into management practices at Complete AI Training.