Elon Musk’s AI Chatbot Grok Spread Misinformation During Israel-Iran Conflict, Report Finds
A report reveals Elon Musk's AI chatbot Grok spread misinformation during the recent Israel-Iran conflict, misidentifying fake videos as real. Users should verify critical info from multiple trusted sources.

A recent report highlights serious shortcomings in Grok, Elon Musk's AI chatbot integrated into X, when verifying information about the 12-day conflict between Israel and Iran (June 13–24, 2025). Researchers from the Atlantic Council's Digital Forensic Research Lab (DFRLab) analyzed 130,000 Grok-generated posts related to the conflict and found numerous inaccuracies and inconsistencies.
About one-third of these posts attempted to fact-check misinformation circulating about the conflict, including unverified social media claims and videos. Although Grok is not designed as a fact-checking tool, many X users increasingly rely on it to clarify and verify breaking news, especially during crises. X itself, however, lacks a formal third-party fact-checking program and depends on Community Notes for context.
Grok's Trouble with Fact-Checking and Visual Verification
The report reveals that Grok struggled to distinguish authentic footage from AI-generated content. For example, it misidentified two AI-generated videos as real footage from the conflict. For one video, which appeared to show damage to Tel Aviv's Ben Gurion Airport after an Iranian missile strike, Grok gave conflicting descriptions within minutes: it first attributed the damage to a Houthi missile strike in May 2025, then to Israeli airstrikes on Tehran's Mehrabad Airport in June 2025.
Grok also wrongly classified other viral AI-generated videos as authentic, including supposed strikes on Iran's Arak nuclear plant and on Israeli sites such as the port of Haifa and the Weizmann Institute in Rehovot. The chatbot further spread misinformation by linking unrelated footage to the conflict: it described a video of festival-goers in France as Israelis fleeing through the Taba border crossing into Egypt, and presented an explosion in Malaysia as an Iranian missile striking Tel Aviv.
AI Chatbots Amplify Falsehoods Amid Conflict
The surge of misinformation during the conflict was amplified not only by Grok but also by other AI chatbots, such as Perplexity. One notable false claim held that China had sent military cargo planes to support Iran. The claim originated in misinterpreted flight-tracking data, was picked up by some media outlets, and was then spread further by AI-driven tools.
Experts warn that while chatbots pull information primarily from media sources, they struggle to keep pace with fast-moving events during global crises. Their limitations and biases can inadvertently shape public perception by amplifying false or misleading narratives.
Key Takeaways for Users and Developers
- AI chatbots like Grok are not substitutes for professional fact-checking and should be used cautiously during fast-moving events.
- Misinformation can easily be amplified by AI tools that rely on incomplete or unverified data sources.
- Platforms without dedicated fact-checking mechanisms risk becoming hotbeds for misinformation, especially during conflicts.
- Users should cross-verify critical information from multiple reliable sources before accepting AI-generated responses as accurate; a minimal sketch of this approach follows the list.
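To make that last takeaway concrete, here is a minimal Python sketch of cross-verification: it asks several independent sources about a claim and treats the claim as supported only when a majority of responding sources agree. The endpoints in `SOURCES` and the response schema are hypothetical placeholders, not real APIs; in practice you would substitute established fact-checking or news services and their documented formats.

```python
import requests  # third-party: pip install requests

# Hypothetical endpoints standing in for independent fact-checking services.
# Real integrations would use established APIs with their documented schemas.
SOURCES = [
    "https://factcheck-a.example.com/api/check",
    "https://factcheck-b.example.com/api/check",
    "https://newswire-c.example.com/api/verify",
]

def cross_verify(claim: str, threshold: float = 0.5) -> bool:
    """Return True only if more than `threshold` of the sources that
    respond independently rate the claim as supported."""
    votes = []
    for url in SOURCES:
        try:
            resp = requests.get(url, params={"q": claim}, timeout=5)
            resp.raise_for_status()
            # Assumed response shape: {"verdict": "supported" | "refuted" | "unclear"}
            votes.append(resp.json().get("verdict") == "supported")
        except (requests.RequestException, ValueError):
            continue  # an unreachable or malformed source simply doesn't vote
    # Require at least two independent answers before trusting any result.
    if len(votes) < 2:
        return False
    return sum(votes) / len(votes) > threshold

if __name__ == "__main__":
    claim = "China sent military cargo planes to support Iran"
    print("Supported by a majority of sources:", cross_verify(claim))
```

The key design choice is that a single source, including an AI chatbot, never settles a claim on its own: silence or disagreement from the other sources keeps the verdict at "unverified."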
For those working with AI or relying on it for information verification, understanding these limitations is crucial. To enhance your skills in AI tools and fact-checking techniques, explore comprehensive training options at Complete AI Training.