Why the Real A.I. Threat to Journalism Is Bots Reading, Not Writing, the News
Major newspapers published A.I.-generated content filled with fake books and fake experts, exposing flaws in their vetting. The deeper threat, though, is A.I. reading the news, draining the human traffic and ad revenue that fund it.

We’re Focused on the Wrong A.I. Problem in Journalism
It’s not bots writing the news. It’s the bots reading it.
In May 2025, two well-known American newspapers, the Chicago Sun-Times and the Philadelphia Inquirer, published “Heat Index,” a 56-page summer insert largely generated by A.I. The most glaring problem was a “summer reading list” recommending 15 books: five real and 10 entirely fictional. Titles like Tidewater Dreams by Isabel Allende and Nightshade Market by Min Jin Lee never existed. Even the experts quoted, such as Dr. Jennifer Campos, a supposed professor of leisure studies, could not be verified. The entire section was essentially a sprawling hallucination produced by ChatGPT.
While this isn’t the first time major publications have used A.I. content, the scale and brazenness here are notable. The insert came from King Features, a syndication division of Hearst, which provides newspapers with comics, puzzles, and special sections. Many newspapers rely on such content without thorough vetting, assuming an editorial process is in place. This assumption proved false.
The section’s sole byline, from Chicago writer Marco Buscaglia, reveals much about the current state of journalism. Buscaglia, a veteran media professional in his mid-50s, used A.I. to handle an overwhelming freelance workload made up of low-paid gigs. His goal was to produce a large volume of content quickly, not to deceive. This highlights a cycle where shrinking paid writing jobs push writers to use A.I. just to keep up.
Despite its flaws, the insert was somewhat convincing. Neither Buscaglia nor the newspapers noticed the errors for days after publication. This episode reflects declining standards in print journalism, but it also points to larger issues facing the industry.
The Decline of Syndication and the Advertising Model
Syndication historically helped local newspapers fill their pages with popular content like comic strips, which attracted subscribers and advertisers. At the industry’s peak in 2005, advertising made up over 80% of newspaper revenue. But with most readers now online and print ad dollars largely gone, there is little money left to pay for, or scrutinize, this kind of filler, so A.I.-generated content can slip through unnoticed. The Chicago Sun-Times insert, for example, carried only one human-written ad, for a theater production, while the rest was A.I.-driven text.
That dual dependence on readers and advertisers carries over into digital media. Advertising drives much of the internet’s business model, including giants like Google and Facebook. But as web traffic shifts from humans to automated systems, advertising’s effectiveness wanes. This threatens the economic foundation of online journalism and content creation.
How A.I. Reading Bots Are Changing the Game
Much attention focuses on A.I. tools that can write or illustrate content, but the bigger issue might be who’s consuming it. Increasingly, web traffic includes robot readers—A.I. systems scanning and processing content without human eyes. Sources might engage journalists just to influence how A.I. reads and summarizes their information, creating feedback loops.
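To make that claim concrete, here is a minimal sketch of how a publisher might estimate how much of its own traffic comes from self-identified A.I. crawlers. It tallies user-agent strings in a standard combined-format access log against a few publicly documented bot tokens; the log path is hypothetical and the token list is illustrative, not an official registry.

```python
import re
from collections import Counter

# Publicly documented user-agent tokens for a few A.I.-related crawlers.
# Illustrative only; each operator publishes and updates its own tokens.
AI_BOT_TOKENS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot"]

def classify(user_agent: str) -> str:
    """Map a user-agent string to a known A.I. bot token, or 'other'."""
    for token in AI_BOT_TOKENS:
        if token.lower() in user_agent.lower():
            return token
    return "other"

def tally(log_path: str) -> Counter:
    """Count requests per category in a combined-format access log."""
    counts = Counter()
    # In the combined log format, the user agent is the last quoted field.
    ua_pattern = re.compile(r'"([^"]*)"\s*$')
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = ua_pattern.search(line)
            counts[classify(match.group(1) if match else "")] += 1
    return counts

if __name__ == "__main__":
    for category, count in tally("access.log").most_common():  # hypothetical path
        print(f"{category}: {count}")
```

A count like this only captures crawlers that identify themselves; it says nothing about A.I. systems fetching pages through ordinary browser agents, which is one reason the size of the robot readership is hard to pin down.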
Google’s A.I. Overview feature, which attempts to answer queries directly on the search page, is a case in point. Some publishers report declining traffic because users get answers without visiting their sites. When questions are asked inside an A.I. interface, users might never see original sources at all. This raises questions about who will fund online journalism if human visitors disappear.
At its I/O developer conference in May 2025, Google announced an “AI Mode” that may further reduce traditional browsing. Instead of pages of blue links, users could receive A.I.-generated videos, podcasts, charts, or entire apps tailored to their queries. This shift could sideline the conventional web experience and advertising model.
Google’s Project Mariner, an A.I. agent that can browse the web and complete tasks such as purchases on a user’s behalf, is part of this future. Google plans to offer it through a premium subscription tier, signaling a move toward automated, A.I.-driven user interactions.
The Vicious Cycle Threatening Journalism
Analyst Ben Thompson outlines a cycle in which declining human web traffic leads to less human-created content. Since digital ads depend on human viewers, A.I.-driven browsing undermines ad revenue and, over time, shrinks the supply of new work. A.I. answers are built by training on copyrighted human content, so when those systems misinterpret or misattribute what they ingest, the failure lies on the reading side, not the writing side.
This brings us back to the fake content in the Heat Index. Searching for a fictional book like The Last Algorithm by Andy Weir leads to unrelated or fraudulent results, and lookups for fake experts point back to the flawed insert. Such incidents hint at an internet increasingly filled with recycled, machine-processed fragments of past work.
For writers, this signals an important shift. The era of careful, human readers who can distinguish fact from fabrication is fading. Instead, content increasingly serves automated readers less capable of discernment. This challenges how journalists create, verify, and distribute their work going forward.
What Writers Should Consider
- Verify syndication and A.I.-generated content thoroughly before publishing; even a quick automated catalog lookup (sketched after this list) can flag invented titles for human review.
- Understand how A.I. affects both content creation and consumption.
- Explore how changes in web traffic and advertising impact freelance opportunities.
- Adapt to new tools and workflows that incorporate A.I. without sacrificing accuracy.
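The first point above lends itself to light automation. The sketch below, assuming Python and Open Library’s public search endpoint, checks whether each title-and-author pair on a reading list appears in a real library catalog; the helper name and sample list are hypothetical, and a miss is a prompt for human review, not proof that a book is fake.

```python
import json
import urllib.parse
import urllib.request

def appears_in_catalog(title: str, author: str) -> bool:
    """Rough check: does this title/author pair match anything in Open Library?"""
    query = urllib.parse.urlencode({"title": title, "author": author, "limit": "5"})
    url = f"https://openlibrary.org/search.json?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return data.get("numFound", 0) > 0

# Sample list mixing a real book with two of the insert's fabricated titles.
reading_list = [
    ("The Martian", "Andy Weir"),           # real
    ("The Last Algorithm", "Andy Weir"),    # fabricated in the Heat Index
    ("Tidewater Dreams", "Isabel Allende"), # fabricated in the Heat Index
]

for title, author in reading_list:
    found = appears_in_catalog(title, author)
    flag = "ok" if found else "NOT FOUND - send to a fact-checker"
    print(f"{title} by {author}: {flag}")
```

A catalog hit does not guarantee the surrounding details are accurate, and a miss can happen for legitimately obscure titles, so treat the output as triage, not a substitute for editing.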
Writers who want practical skills with A.I. tools and a clearer view of their impact on content can explore focused courses available at Complete AI Training. Staying informed about how A.I. shapes both reading and writing is crucial for sustaining quality journalism in this evolving landscape.