Why AI Editors Could Flood Opinion Journalism With Mediocrity

The Washington Post’s plan to use AI for opinion content risks lowering editorial standards and flooding readers with predictable, formulaic writing. Human judgment remains essential for quality and trust.

Categorized in: AI News Writers
Published on: Jun 08, 2025

Will The Washington Post Embrace the AI Slush Pile?

Reducing human editorial judgment is the last thing opinion journalism needs. The Washington Post’s plan to dramatically expand its opinion section using AI raises important questions about quality, trust, and the future of editorial standards.

Early in my career, working at a literary agency, I handled the slush pile—the flood of unsolicited manuscripts and pitches. Most were unpolished and, more often than not, unworthy of publication. That experience taught me the vital role editorial gatekeepers play: filtering out inaccurate, self-serving, or simply uninteresting content. Editors act as guardians, protecting readers from an overwhelming tide of mediocre writing.

That lesson is relevant today as The Washington Post explores a project called Ripple, aiming to scale opinion content outside the paywall. A key component is Ember, an AI writing coach intended to help “nonprofessional writers” submit op-eds. While expanding access sounds promising, the use of AI in this way risks lowering editorial standards.

The idea that anyone can write a good op-ed with AI coaching overlooks what makes opinion pieces valuable. Editors seek surprise: fresh analysis, unique perspectives, compelling personal stories. AI built on large language models tends to generate predictable, formulaic writing because it predicts the most likely next word based on existing data. That makes it ill-suited to produce the insightful, original content opinion journalism needs.

AI can be a powerful tool in newsrooms, but it’s best used for tasks like data analysis, fact-checking, archiving, or brainstorming—not as a substitute for editorial judgment. For example, The Washington Post could use AI to make its vast archives more searchable and accessible, creating real value without compromising quality.

Since Jeff Bezos acquired The Washington Post in 2013, the paper’s commitment to readers over private interests has been questioned, especially after controversial editorial decisions. Against that backdrop, the plan to use Ember to scale opinion content feels like a step toward turning the Post into a social blogging platform rather than a trusted journalistic institution.

Platforms like Medium or Substack provide space for diverse voices but aren’t held to the same editorial standards as newspapers. The Washington Post risks blurring the line between journalism and open publishing, potentially diluting its brand and trustworthiness.

In 2021, the Post announced plans to add dozens of editors, signaling growth and ambition. Today, those ambitions seem overshadowed by a push to prioritize volume over quality. If maintaining journalistic integrity matters, the Post should reconsider using AI to expand its slush pile and instead preserve the editorial judgment that earned its reputation.

Key Takeaways for Writers

  • Editorial judgment is crucial: AI cannot replace the human ability to spot originality, nuance, and depth in writing.
  • AI has its place: Use AI for research, data crunching, and supporting tasks, not for creating or heavily editing opinion content.
  • Quality over quantity: Expanding content rapidly with AI risks flooding platforms with generic writing that doesn’t engage or inform readers.
  • Know the limits of AI: Large language models generate predictable text and aren’t equipped to deliver fresh insights or emotional resonance.

Writers interested in understanding how to work alongside AI tools effectively may find valuable resources at Complete AI Training – Courses by Job. Learning to use AI as an aid rather than a replacement can enhance your craft without compromising originality.