The AI Plagiarism Problem That's Testing Newsrooms
The New York Times fired a freelance writer last week after discovering his book review contained passages nearly identical to an earlier review published in The Guardian. The writer, Alex Preston, admitted he used AI to assist with the piece and called it "a serious mistake."
The incident threatens to reverse momentum that journalists have been building around AI use. Recent coverage in the Wall Street Journal and Wired highlighted how reporters, including New York Times staff and independent writers, are using AI to boost productivity and handle editorial tasks. Some are writing multiple stories daily with AI assistance.
But the Preston case hands ammunition back to skeptics. Many newsrooms already restrict AI use, and this plagiarism incident could push others to adopt blanket bans rather than develop clear policies.
Where the real problem lies
The issue isn't that AI was used. It's how it was used, and what Preston failed to do afterward.
Preston didn't fact-check his output. He didn't catch that his AI-generated text matched existing published work. He didn't apply basic editorial judgment to what the system produced.
The broader lesson for writers and editors: AI failures in journalism usually stem from weak human oversight, not from using the tool itself. CNET's bot-written articles and the Chicago Sun-Times' fabricated book titles followed the same pattern: AI systems were given insufficient parameters and nobody verified the results.
How to use AI without repeating these mistakes
The distinction matters. Rather than asking whether humans should oversee AI (they should), the real question is: which specific decisions should you delegate to AI, and what guardrails do you need?
For a book review, AI might help with structure or initial drafting. But fact-checking, originality verification, and final editorial judgment must stay with the writer. Those aren't negotiable.
Setting clear parameters and restrictions when using AI is central to avoiding these failures. The tool does what you ask it to do, but you have to ask the right questions and verify the answers.
Newsrooms considering broader AI adoption should learn from Preston's mistake without overcorrecting. The answer isn't to ban AI outright. It's to define exactly what tasks AI handles and build verification steps around them.
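To make "verification steps" concrete, here is a minimal sketch of what an automated originality check could look like, assuming a newsroom keeps its published pieces as plain-text files. The file names and the 25-word threshold are illustrative assumptions, not a standard, and a check like this supplements human review rather than replacing it.

```python
# Minimal sketch: flag long verbatim overlaps between an AI-assisted draft
# and previously published text, using only the Python standard library.
# Paths and the 25-word threshold below are hypothetical examples.
from difflib import SequenceMatcher
from pathlib import Path


def flag_overlaps(draft: str, published: str, min_words: int = 25) -> list[str]:
    """Return verbatim passages shared by both texts that run at least min_words long."""
    draft_words = draft.split()
    published_words = published.split()
    matcher = SequenceMatcher(None, draft_words, published_words, autojunk=False)
    hits = []
    for block in matcher.get_matching_blocks():
        if block.size >= min_words:
            hits.append(" ".join(draft_words[block.a:block.a + block.size]))
    return hits


if __name__ == "__main__":
    draft_text = Path("draft_review.txt").read_text()
    for archive_file in Path("published_archive").glob("*.txt"):
        for passage in flag_overlaps(draft_text, archive_file.read_text()):
            print(f"Possible overlap with {archive_file.name}: {passage[:80]}...")
```

A flagged passage is a prompt for an editor to look closer, not a verdict; the point is that the originality check happens before publication instead of after a reader spots the match.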
For writers specifically, the real opportunity lies in learning how AI can support your work without replacing your judgment.