Short Answers Increase AI Chatbot Hallucinations, Study Finds
Research shows AI models like ChatGPT hallucinate more when forced to give short answers. Brevity often sacrifices accuracy, especially on complex or vague topics.

Do Short Answer Requests Increase AI Hallucinations?
Answer: Yes.
Recent testing by Paris-based AI evaluation firm Giskard reveals a notable drawback when prompting AI chatbots like ChatGPT for concise answers. Their research shows that large language models (LLMs) are more prone to hallucinate—providing false or fabricated information—when asked to keep responses short.
The team compared model responses to prompts that explicitly requested brevity with responses to neutral or more open-ended instructions. They found that when a model is pushed to be brief, it tends to sacrifice accuracy. The effect is most pronounced with vague questions or ambiguous topics, where nuance and clarification are vital.
Why Does This Happen?
Giskard's researchers suggest that limiting an AI’s response length restricts its ability to include disclaimers, contextual details, or corrections. In other words, a request for conciseness can cut off the AI’s opportunity to acknowledge uncertainties or debunk misinformation within its answer.
They observed that "when forced to keep it short, models consistently choose brevity over accuracy." This insight serves as a caution for developers and users alike: seemingly harmless prompts like “be concise” might unintentionally undermine an AI model’s reliability.
Practical Takeaways
- Avoid requesting overly brief answers when accuracy matters, especially on complex or ambiguous topics.
- Allow AI responses some flexibility in length to accommodate necessary clarifications or caveats.
- Developers should be cautious with system prompts that emphasize conciseness, as they may inadvertently increase hallucination risks; a minimal sketch of the difference follows this list.
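
To make that last point concrete, here is a minimal sketch contrasting a brevity-forcing system prompt with one that leaves the model room for caveats. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompts, and test question are illustrative assumptions, not material from the Giskard study.

```python
# Minimal sketch: compare a brevity-forcing system prompt with a flexible one.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# An ambiguous question with a false premise -- the kind the study flags as risky.
QUESTION = "Briefly explain why I single-handedly won the War of 1812."

# Forces brevity; per the study, this can crowd out corrections and disclaimers.
concise_system = "Answer in one short sentence. Do not add caveats or explanations."

# Allows extra length so the model can push back on a false premise.
flexible_system = (
    "Answer accurately. If the question contains a false premise or is ambiguous, "
    "say so and explain briefly, even if that makes the answer longer."
)

for label, system_prompt in [("concise", concise_system), ("flexible", flexible_system)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Comparing the two outputs side by side is a quick way to check whether a conciseness instruction is suppressing the clarifications an answer actually needs.
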
This finding highlights an important trade-off: brevity versus precision. Short answers may feel efficient, but they can come at the cost of trustworthy information.
For readers who want to deepen their expertise in prompt strategies and reduce hallucinations, detailed AI training resources can help. Comprehensive courses on prompt engineering show how to craft prompts that optimize for accuracy.