News outlets remove dozens of articles after AI-generated fake authors go undetected

Fake authors, fabricated sources, and undisclosed AI writing have become routine problems for publishers in the UK and US. Major outlets including Wired, Business Insider, and the Chicago Sun-Times have all pulled stories after being misled.

Published on: Apr 25, 2026

AI-Generated Fake Authors Are Now a Routine Problem for Publishers

News outlets across the UK and US are removing articles at an accelerating pace after discovering they published work by authors who don't exist, wrote using AI without disclosure, or fabricated sources entirely. The pattern has become systematic enough that tracking these incidents is now standard practice for major publishers.

The Mississippi Free Press discovered in April 2026 that an opinion column published on its site had been written with AI by someone posing as a freelancer. The editor grew suspicious only when the author submitted an invoice under a different name; he then found that the author's headshot was AI-generated as well. Three other planned columns from similar fake accounts were pulled before publication.

This mirrors what happened at Wired and Business Insider in 2025, when a freelancer using the name Margaux Blanchard submitted articles containing case studies of people who could not be verified to exist. Wired took down its story after receiving an unusual payment request. Business Insider eventually removed 38 essays after investigating the contributor's identity.

The New York Times ended its relationship with a freelance book reviewer in March 2026 after he admitted using an AI tool that incorporated text from a Guardian review without attribution, material he failed to identify and remove before submitting his copy.

The Detection Problem

Editors say spotting AI work before publication remains difficult. The Mississippi Free Press editor noted that "AI detectors aren't very reliable." At Crikey, an Australian news site, an AI professor had to flag an article on LinkedIn before the outlet discovered a contributor had used ChatGPT for editing, proofreading, and rephrasing, all against editorial policy.

Crikey removed four articles and acknowledged it should have sent contributors its editorial guidelines before accepting work. The Saturday Paper, which published another piece by the same contributor, added a disclosure noting limited ChatGPT use for research.

Fake Experts and Fabricated Sources

Beyond fake bylines, publishers have been misled by fake experts quoted in articles. Press Gazette revealed in April 2025 that companies selling CBD oil, vapes, and essay-writing services were using AI to create expert personas and game search rankings. Dozens of stories were removed or amended after the pattern emerged.

The Journal of the Law Society of Scotland removed an article after discovering quotes were "falsely attributed" and "likely fabricated." The editor said the publication fell "well below" its standards and contacted those misquoted to apologize.

Broader Systemic Failures

Some outlets published AI work without any author deception involved. CNET removed articles it had generated with an internal AI tool in January 2023 after discovering factual errors. The site issued corrections on 41 of 77 stories. The Chicago Sun-Times and Philadelphia Inquirer both published a summer reading list containing books that don't exist, created by a freelancer using an AI agent without disclosure.

A network of gaming websites (The Escapist, Videogamer, and Esports Insider) was taken over in early 2026 and began publishing AI-written casino stories under fake author profiles with generated headshots.

What Editors Are Doing Now

Publishers are tightening verification processes. The Mississippi Free Press is developing a formal AI policy and training staff to spot AI use. Business Insider said it bolstered verification protocols. The Grind magazine in Toronto strengthened vetting of new writers and began checking for AI-generated content early in the editing process.

Reach, the UK's largest commercial publisher, acknowledged the problem is becoming more sophisticated. Its chief content officer said the industry needs to work together on new controls and protocols.

For writers, the pattern is clear: disclosure matters. When outlets knew AI was used, as with the Saturday Paper's disclosure or CNET's revised bylines, trust remained intact. When AI use was hidden or authors were fabricated, publications faced retractions and credibility damage.

AI for Writers training can help you understand where these failures occur and how to use AI tools responsibly within editorial standards.

