Artificial Intelligence and the Rise of Slop: How AI Mash-ups Are Warping Truth and Scottish History

AI-generated biographies of Scottish politicians revealed glaring errors and falsehoods, showing these tools predict words without true knowledge. Writers must verify AI content carefully to avoid misinformation.

Categorized in: AI News Writers
Published on: Jul 14, 2025

What AI Can’t Do: Lessons from Nicola Sturgeon and Robert Burns Mash-Ups

Artificial intelligence is being used to write biographies of Scotland's politicians, but the results reveal just how limited these systems really are. Recently, AI-generated biographies of Nicola Sturgeon, John Swinney, and Humza Yousaf appeared on Amazon, only to be swiftly removed for violating guidelines. These works were riddled with bizarre phrasing, factual errors, and outright falsehoods, such as the claim that Yousaf came from poverty, highlighting that AI still struggles with accuracy.

So, what went wrong? These AI tools, often called “large language models” (LLMs), don’t truly understand language or facts. Instead, they predict word sequences based on vast collections of scraped texts. They have no awareness of meaning, truth, or context. Calling their mistakes “hallucinations” gives them too much credit; their errors are baked into how they operate.
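To see what "predicting word sequences" means in its crudest form, here is a minimal sketch: a toy bigram model built from a few hand-written sentences (the corpus, names, and word counts here are invented for illustration; real LLMs are vastly larger neural networks, but the statistical core is the same idea of continuing text by frequency, not by truth).

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus". The model sees only word sequences,
# never facts about who actually wrote what.
corpus = (
    "robert burns wrote poems . "
    "robert burns wrote songs . "
    "nicola sturgeon wrote memoirs ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training.

    Pure pattern matching: the model has no notion of whether
    the resulting claim is true, attributed correctly, or even coherent.
    """
    return follows[word].most_common(1)[0][0]

print(predict_next("burns"))  # -> "wrote": seen most often after "burns"
```

Feed this model any author's name it has seen and it will cheerfully continue with "wrote", and then with whatever titles happened to co-occur in its data, which is exactly how unrelated fragments get stitched into a confident-sounding fabrication.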

A Personal Test of AI’s Limits

To test these limits, a prompt was given to an LLM: list the twelve books written by a specific author who actually had nine published titles. The AI invented three entirely new titles, complete with plausible publication dates. Among these was a bizarre mash-up of historical and cultural references: a fake book titled The Birth of a Nation (sharing a name with a controversial 1915 film) and Nine Inch Will Please a Lady, a title borrowed from a bawdy Robert Burns verse.

This mix-up happened because the AI matched the author's Scottish background and the bisexual themes in their work with the Bard's risqué poetry, stitching a fictional novel together from unrelated pieces. The AI doesn't reason or joke; it simply connects patterns without understanding.

Why Do AI Models Mix Up Authors and Facts?

LLMs are trained on enormous datasets scraped from the internet, often without consent or respect for copyright. For these models, there are no individual authors—only billions of words interwoven across sources. This lack of attribution causes confusion and errors. It’s no surprise that there are currently dozens of copyright lawsuits against AI companies in the US.

When asked to generate a synopsis for the non-existent “Nine Inch…” novel, the AI quickly fabricated a story about Tom, a talented male stripper afraid to come out to his parents, set against a backdrop reminiscent of films like The Full Monty and Billy Elliot. While creative, this mash-up is misleading and false, showing how AI can unintentionally spread misinformation.

The Broader Impact on Information and Education

These AI errors aren’t limited to publishing. They’re contaminating online content, historical knowledge, and news media. For example, AI-generated claims have absurdly suggested JFK used Facebook and Mother Teresa was active on Reddit. Even governments have been caught using fake AI-generated images with noticeable flaws.

With over 90% of students reportedly using LLMs for essays, fabricated citations of non-existent sources risk undermining education. Recent studies reveal that newer AI models can fail factual accuracy tests 37% to 80% of the time, a troubling statistic for anyone relying on these tools for truthful information.

Why Writers Should Care

LLMs produce a “slop” of misinformation that can spread quickly. Think of society’s knowledge as a glass of water—adding even a drop of contaminated content pollutes the whole. Writers, educators, and researchers need to be cautious about AI-generated text and verify facts independently.

Before trusting AI for biographical content, historical facts, or research, ask: Is this information accurate? Does it come from a reliable source? The technology isn’t yet capable of fully replacing human judgment or expertise.

For writers interested in understanding AI’s capabilities and limitations in content creation, exploring specialized courses on AI tools and prompt engineering can provide practical insights. Resources like prompt engineering courses offer hands-on experience in guiding AI outputs more effectively.

Conclusion

AI can assist with many tasks, but when it comes to truth, context, and subtlety, it falls short. Its “intelligence” is pattern prediction without understanding, leading to errors that can misinform and confuse. Writers should treat AI-generated content as a starting point—always double-check, verify, and apply critical thinking.

