New York Times drops freelance reviewer who used AI tool that copied Guardian critic's work

The New York Times dropped a freelancer after his AI-assisted book review copied unattributed passages from a Guardian review of the same book. The writer admitted he failed to spot language the AI tool pulled from the competing piece.

Published on: Apr 01, 2026

New York Times cuts ties with freelancer over AI-assisted review

The New York Times has ended its relationship with a freelance journalist after he used artificial intelligence to write a book review that incorporated unattributed passages from a competing publication.

A reader flagged similarities between the Times' January review of "Watching Over Her" by Jean-Baptiste Andrea and an August review of the same book published in the Guardian. The Times launched an investigation and discovered the freelancer had used an AI tool that pulled language directly from the Guardian piece, which he failed to notice and remove.

The overlapping text included character descriptions and the review's conclusion. The Guardian review described a character as "lazy Machiavellian Stefano"; the Times version rendered it as "lazy, Machiavellian Stefano." The closing assessment, which described the novel as a "love song to a country of contradictions: battered, divided, misguided and miraculous," appeared in nearly identical form in both reviews.

The freelancer admitted the mistake in a statement to the Guardian. "I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in," he said. "I am hugely embarrassed by what happened and truly sorry."

He told the Times he had not used AI on any of his other six reviews for the publication, submitted between 2021 and 2026.

The Times added an editor's note to the review acknowledging the AI use and linking to the Guardian piece. "His reliance on AI and his use of unattributed work by another writer are a clear violation of the Times's standards," the note read.

What this means for writers using AI

The incident underscores a practical problem: AI tools trained on published work can reproduce that work's language and structure without obvious attribution. A writer may not spot what the tool has incorporated, especially under deadline pressure.

For writers working with AI, this case demonstrates the need to check generated text against original sources before filing. The responsibility falls on the writer to verify that AI-assisted drafts don't contain borrowed material, whether obvious or subtle.
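One rough way to catch this kind of overlap before filing is a word n-gram comparison between a draft and a published source. The sketch below is a minimal illustration, not a plagiarism detector: it lowercases both texts, splits on whitespace, and reports any run of six consecutive words shared by both. The function name and the six-word window are arbitrary choices for this example.

```python
def shared_ngrams(draft: str, source: str, n: int = 6) -> set:
    """Return word n-grams (as tuples) that appear in both texts, case-insensitively."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(draft) & ngrams(source)

# Illustrative check using the overlapping phrase quoted in this story,
# with invented surrounding words standing in for the full reviews.
guardian_phrase = "a love song to a country of contradictions: battered, divided, misguided and miraculous"
times_phrase = "his love song to a country of contradictions: battered, divided, misguided and miraculous"
matches = shared_ngrams(times_phrase, guardian_phrase)
```

Any non-empty result flags a passage worth a manual look; a real workflow would also normalize punctuation and check multiple window sizes.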


