Kurdish journalists warn of AI-generated fake content in Iran conflict
The Kurdistan Journalists Syndicate's al-Sulaymaniyah branch called on media outlets Sunday to verify all information before publishing, citing widespread circulation of AI-generated images and videos designed to mislead audiences during the Iran war.
Advanced AI techniques now produce fake and deepfake material so realistic that distinguishing it from authentic footage has become difficult. Hundreds of fabricated images and videos circulate daily across screens and mobile devices, the syndicate said.
The union urged journalists to avoid sharing any content that raises doubts. "If verification is not possible, it is preferable not to publish," the statement read.
Verification becomes essential
The warning reflects a broader pattern. AI has been deployed in conflicts involving Iran, Israel, and the United States as a tool to influence public perception and spread misleading narratives, according to the syndicate.
The group emphasized the need to strengthen public media literacy. Audiences should approach content with a critical and analytical perspective, it said.
For writers and journalists, this means building verification processes into daily work. Checking sources, confirming visual content, and understanding how generative video technology works are no longer optional skills.
The syndicate's guidance applies beyond conflict coverage. As generative AI tools become more accessible to writers, newsrooms face pressure to move faster while maintaining accuracy. Balancing the two demands now requires deliberate editorial choices.