Major Newsrooms Struggle With AI Disclosure as Detection Tools Fail
The New York Times published a first-person essay in November whose author used artificial intelligence as a "collaborative editor" but disclosed nothing to readers. The Atlantic revealed the undisclosed use in March. The Times said the column met its standards and left it online without correction.
Days later, The Times added an editor's note to a book review published in January. The freelance reviewer had used an AI tool that incorporated material from a Guardian review; the borrowed material was neither attributed nor removed before publication. The Times called this "a clear violation" of its standards.
Neither use would have been permitted at The Globe and Mail. The newsroom's AI policy, updated last fall, prohibits staff and contributors from using AI to edit or write any part of a story.
Detection Tools Are Unreliable
AI detectors exist, but they don't work consistently. Three different detection tools analyzing the same article often produce three different assessments of how much of it was AI-generated.
"One challenge with AI detection is that the tools involved, much like the models they analyze, are still evolving," said Vauhini Vara, who investigated the issue for The Atlantic. "Sometimes they flag false positives or fail to catch AI-generated material."
Testing multiple detectors myself confirmed the problem: the same text can draw sharply different verdicts from different tools. That inconsistency makes them unreliable for catching undisclosed AI content before publication.
Newsrooms Are Tightening Contributor Agreements
The Globe and Mail requires contributors to departments including Opinion, First Person, and Lives Lived to attest their work is original and created without artificial intelligence. The Atlantic requires contributors to attest they are the "sole author" and forbids AI-generated writing or imagery without approval and disclosure.
The Local magazine, a publication focused on Canadian stories, received a pitch from someone claiming to be "Victoria Goldiee," a writer supposedly with Globe credits. The pitch had never been greenlit. When editors pressed for interview samples and published work, the replies revealed formulaic, AI-generated phrasing: "This story matters because of... It is timely because of... It fits your readership because of..."
The Local responded by writing an AI policy and amending contracts to explicitly prohibit generative AI in story creation. The magazine also tightened fact-checking by requiring annotated drafts from all writers.
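The fill-in-the-blank phrasing quoted above is regular enough that even a crude pattern check would flag it. The sketch below is purely illustrative, not The Local's actual screening process; the `TEMPLATE_PATTERNS` list and the two-match threshold are assumptions for the example.

```python
import re

# Hypothetical illustration: stock justification phrases of the kind
# quoted in the AI-generated pitch emails. Not an actual newsroom tool.
TEMPLATE_PATTERNS = [
    r"this story matters because",
    r"it is timely because",
    r"it fits your readership because",
]

def looks_templated(pitch_text: str) -> bool:
    """Flag a pitch that reuses two or more stock justification phrases."""
    text = pitch_text.lower()
    hits = sum(1 for pat in TEMPLATE_PATTERNS if re.search(pat, text))
    return hits >= 2

sample = ("This story matters because of rising rents. "
          "It is timely because of a new housing bill.")
print(looks_templated(sample))  # prints True
```

A heuristic like this only catches the laziest output, which is partly why newsrooms in this story lean on contracts, attestations, and annotated drafts rather than automated filters.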
The Freelancer Problem
The changes have created an unintended consequence. Nicholas Hune-Brown, executive editor of The Local, stopped posting public calls for pitches on social media.
"Whenever a call for pitches goes public, an editor's inbox becomes absolutely inundated with AI garbage from around the world," he said. "It becomes impossible to wade through all the BS. This is a brutal situation for actual human freelancers."
The Local now sends pitch calls only to a list of potential contributors. Finding new writers and reaching underrepresented communities in Canadian journalism remains a work in progress.
How Newsrooms Verify Contributors
The Globe and Mail maintains a large staff of writers who annually renew their pledge to follow the Editorial Code of Conduct. The newsroom also has established relationships with trusted freelancers but remains open to new pitches.
New writers can find assigning editor email addresses on The Globe's Contact Us page. Editors convey the AI policy and verify that new contributors are who they claim to be. This might include a video call before locking in an assignment.
For writers looking to understand how AI tools work and their proper role in journalism, AI for Writers covers ethical use of AI in content creation. Those wanting deeper technical knowledge should explore Generative AI and LLM fundamentals.