New York Times Cuts Ties With Freelancer Over AI-Generated Review
The New York Times has severed its relationship with freelance writer Alex Preston after discovering he used an AI tool to help write a book review, then failed to catch plagiarized passages the system pulled from another publication.
A reader flagged the issue in January. Preston's review of "Watching Over Her" by Jean-Baptiste Andrea contained substantial similarities to a Guardian review of the same book published months earlier. After the Times investigated, Preston admitted using AI to draft the piece and said he had overlooked the copied sections.
The overlaps were unmistakable. The original Guardian review described characters as "the lazy Machiavellian Stefano to hardworking Vittorio, whose otherworldly twin brother Emmanuele is prone to speaking in tongues and dressing up in ragtag begged-and-borrowed uniforms." Preston's version read: "the lazy, Machiavellian Stefano to Mimo's childhood friend and fellow craftsman Vittorio and Vittorio's otherworldly twin, Emanuele, who speaks in tongues and dresses in scavenged uniforms."
A Times spokesperson told The Wrap that the paper appended an editor's note to the review. "For staff journalists and freelance writers alike, reliance on AI and inclusion of unattributed work by another writer is a serious violation of The Times's integrity and fundamental journalistic standards," the statement said.
Preston said he was "hugely embarrassed" and had "made a serious mistake." The Times found no issues in his previous reviews for the publication.
Pattern Among Experienced Writers
The incident reflects a broader problem: even seasoned writers are making errors with AI tools. Preston is an accomplished author with six novels published and extensive bylines in major outlets including the Times, the Guardian, and the Financial Times.
Last month, Ars Technica fired a senior tech reporter who accidentally included AI-fabricated quotes in an article. The reporter said the error occurred after he asked an AI tool to generate notes.
These cases highlight how AI systems hallucinate and cobble together existing work without attribution, risks that persist regardless of a writer's experience or reputation.
Scrutiny Intensifies at Major Publications
The Times has faced mounting questions about AI usage across its newsroom. Earlier this month, readers accused a "Modern Love" column piece of sounding like AI-generated content.
A recent study published in The Atlantic examined AI detection software and found that opinion sections at outlets like the Times and the Wall Street Journal were six times more likely to contain AI-generated prose than their news articles. The authors concluded that major publications have likely published AI-written content at some point, knowingly or otherwise.
The Times columnist whose piece drew scrutiny later admitted using AI chatbots like ChatGPT as a "collaborative editor" for "inspiration and guidance and correction."
For writers using AI tools, the risks are clear: systems can introduce plagiarized material, fabricate information, and obscure attribution. Understanding how to use these tools responsibly, and when not to use them at all, has become essential to maintaining journalistic standards.