OpenSlopware called out bot-written code in open source - hounded offline, revived by forks

OpenSlopware flagged repos touched by LLMs, then vanished after harassment; forks keep it alive. The uproar spotlights licensing, quality, and why teams need clear AI policies.

Published on: Jan 19, 2026

OpenSlopware: The list that named LLM-touched open source - and why it vanished

A short-lived project called OpenSlopware tracked open source repos that use LLM-generated code or accept AI-assisted pull requests. After heavy harassment, the original maintainer pulled it, scrubbed social accounts, and the URL now 404s. Forks exist and are being consolidated by others who want the list to live on.

Whether you're for or against AI coding tools, the reaction says a lot about where software is right now: polarized, noisy, and high-stakes for maintainers trying to protect code quality and licenses.

What OpenSlopware did

The repository listed projects that either integrate LLMs, ship code produced by them, or show signs of automated assistants in their PR history. Think bot-authored commits, AI-suggested patches, or direct LLM integrations. It was plain text in a Git repo - easy to clone, easy to fork - so copies survived the takedown.

After the takedown

Several forks appeared on Codeberg, including one maintained under "Small-Hack." Some people who were involved early have since apologized and don't want a revival, but others are merging efforts to maintain a single, active list. The debate is no longer about existence - it's about stewardship and scope.

The growing "slop" backlash

"Slop" has become shorthand for low-quality AI output shipped as if it were original work. Communities pushing back include the AntiAI subreddit and the Lemmy instance Awful.systems. One of its admins, David Gerard, says they plan to curate a list similar to OpenSlopware and are searching for a better name.

If you've been heads down shipping code, you might be surprised by the intensity. But this is one of the most contentious topics in software right now.

Why maintainers and teams should care

  • Licensing and provenance: LLM training data and generated code raise questions about copyright, license compatibility, and attribution. If you can't trace it, you can't ship it with confidence.
  • Environmental cost: Training and large-scale inference consume significant energy. See the overview: Environmental impact of artificial intelligence.
  • Productivity vs. quality: In testing reported mid-year, developers felt faster with coding assistants, but the debugging overhead canceled the speed gains. Perception didn't match outcomes - code quality and cycle time suffered.
  • Team effects: Unknown long-term impacts on developers' reasoning, muddied hiring signals, and pressure to do more with less. Claimed gains are uneven at best.

If your project accepts contributions

  • Add an AI policy to CONTRIBUTING.md: what's allowed, what disclosure is required, and what evidence you expect (prompts, diffs, references). A sample policy section follows this list.
  • Require authors to label AI-assisted PRs. Treat generated code as untrusted: extra review, tests, and static analysis.
  • Scan licenses on inbound code and dependencies. If provenance is unclear, reject or request a rewrite.
  • Automate quality gates: coverage thresholds, linters, type checks, performance budgets, and reproducible builds in CI (see the gate-runner sketch after this list).
  • Document decisions. Keep a paper trail for legal, security, and future maintainers.
  • Moderate with care. Don't tolerate harassment - from either side. Enforce your code of conduct consistently.
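
As a starting point, the disclosure section can be short. Below is a minimal sketch for CONTRIBUTING.md; the wording, label name, and evidence requirements are placeholders to adapt, not a standard:

    ## AI-Assisted Contributions

    - Disclose any AI assistance in the PR description and apply the
      "ai-assisted" label (label name is an example; pick your own).
    - Include the prompts you used and note which parts of the diff
      were generated versus hand-written.
    - Generated code gets the normal review bar plus extra scrutiny
      for license provenance. If you can't explain a block, rewrite it.
    - Undisclosed AI assistance discovered after merge may lead to the
      contribution being reverted.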
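
For the quality gates, one approach is a single runner script that CI executes on every PR, so the same bar applies to human- and AI-authored code. A minimal Python sketch, assuming pytest with pytest-cov, ruff, and mypy as the tools in play (swap in your own stack):

    import subprocess
    import sys

    # Each gate is a command that must exit 0. The coverage floor is
    # enforced via pytest-cov's --cov-fail-under flag.
    GATES = [
        ["pytest", "--cov=src", "--cov-fail-under=80"],  # tests + coverage
        ["ruff", "check", "src"],                        # lint
        ["mypy", "src"],                                 # type checks
    ]

    def main() -> int:
        for cmd in GATES:
            print("gate:", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                print("FAILED:", " ".join(cmd))
                return 1  # fail fast so CI blocks the PR
        print("all gates passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())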

If you use AI coding assistants

  • Start with tests and specs. Make the assistant satisfy your constraints - not the other way around.
  • Keep prompt and output logs. You'll need them for provenance and review; a minimal logging sketch follows this list.
  • Rewrite or heavily edit anything license-ambiguous. Don't copy verbatim chunks you can't justify.
  • Benchmark. Compare assistant-on vs. assistant-off on defect rate, review cycles, and incident count - not vibes (see the tally sketch below).
  • Ship small, reviewable PRs. Make it obvious what was generated and why it's correct.
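
Logs don't need special tooling to be useful: an append-only JSONL file tied to commits covers most provenance and review needs. A minimal Python sketch; the file path and field names are illustrative:

    import hashlib
    import json
    import subprocess
    from datetime import datetime, timezone
    from pathlib import Path

    LOG = Path("ai-provenance.jsonl")  # illustrative path

    def log_assist(tool: str, prompt: str, output: str) -> None:
        """Append one prompt/output pair, tied to the current commit."""
        commit = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True,
        ).stdout.strip()
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool,      # assistant name and version
            "commit": commit,  # lets reviewers tie the log to a change
            "prompt": prompt,
            "output": output,
            # Hash makes it cheap to verify shipped code matches the log.
            "sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        with LOG.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")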
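
The benchmark doesn't need a data platform either; a per-PR tally split into assistant-on and assistant-off cohorts is enough to spot a trend. A minimal Python sketch with illustrative metrics:

    from dataclasses import dataclass

    @dataclass
    class PRStats:
        assisted: bool      # was an AI assistant used on this PR?
        defects: int        # bugs later traced back to this PR
        review_rounds: int  # review cycles before merge

    def summarize(prs: list[PRStats], assisted: bool) -> dict:
        cohort = [p for p in prs if p.assisted == assisted]
        n = len(cohort) or 1  # avoid dividing by zero on empty cohorts
        return {
            "prs": len(cohort),
            "defects_per_pr": sum(p.defects for p in cohort) / n,
            "rounds_per_pr": sum(p.review_rounds for p in cohort) / n,
        }

    # Usage: pull the numbers from your tracker, then compare cohorts.
    history = [
        PRStats(assisted=True, defects=2, review_rounds=3),
        PRStats(assisted=False, defects=1, review_rounds=2),
    ]
    print("assistant-on: ", summarize(history, assisted=True))
    print("assistant-off:", summarize(history, assisted=False))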

Context that's easy to miss

The heat isn't just about taste. It's about legal exposure, maintenance burden, and trust. A single AI-lifted code block with incompatible licensing can taint a repo; a flaky assistant-generated fix can add months of churn.

On the social side, open criticism is necessary-especially when metrics show weak or negative ROI. Bans and witch hunts don't help, but silence won't protect your project either.

Want structure for responsible adoption?

If your team is evaluating AI coding tools, use a defined playbook. Establish policy, measurement, and review upfront, then pilot in a narrow scope before scaling.

For a practical curriculum on safe workflows and guardrails, see this path: AI Certification for Coding.

Bottom line

Forks may keep OpenSlopware's idea alive, with better names and better governance. Regardless of where you stand on LLMs, set policies, track outcomes, and keep your codebase defensible. Open criticism plus hard metrics beats hype - every time.

