South Africa pulls AI policy draft over fake citations as a news site runs almost entirely on AI-generated content

South Africa scrapped its national AI policy after fake citations slipped through unchecked. Separately, a site posing as journalism published content that was 97% AI-generated with no real reporters behind it.

Categorized in: AI News Writers
Published on: May 01, 2026

Two AI Mishaps Expose the Cost of Weak Oversight

South Africa scrapped its national AI policy draft after discovering fictitious citations in the references. A news website claiming to employ journalists relies almost entirely on AI-generated content. The two incidents occurred within months of each other.

The first case drew attention from Reuters. South Africa's minister of communications and digital technologies, Solly Malatsi, withdrew the draft policy and acknowledged the failure publicly. "The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened," he said.

Malatsi called it an "unacceptable lapse" of integrity and credibility. He added: "It proves why vigilant human oversight over the use of artificial intelligence is critical." The admission stands out: most institutions caught in similar situations deflect rather than accept responsibility.

The Wire by Acutus: AI masquerading as journalism

The second case involves The Wire by Acutus, a website launched in 2025 that publishes roughly 100 articles on technology, science, business, and healthcare. It has no masthead, no journalist bylines, and no verifiable address.

The site describes itself as a "collaborative journalism platform" where contributors share perspectives that are "synthesized and edited into stories." ModelRepublic, a publication monitoring AI transparency, ran the content through Pangram, an AI detection tool claiming 99.98% accuracy.

The results: of the 94 articles tested, 69% were flagged as fully AI-generated and another 28% as partially AI-generated. Only three articles were classified as human-authored.

Journalist Tyler Johnston investigated the site's source code and found connections to political PR work. Patrick Hynes, president of PR firm Novus Public Affairs, engaged with the site's content on X. His firm counts OpenAI among its clients for lobbying efforts in Washington.

The hallucination problem spreads

AI hallucinations, plausible-sounding but false statements generated by language models, have become a recurring liability. Encyclopaedia Britannica sued OpenAI, claiming hallucinations deprived readers of reliable content. Top AI startups face multiple lawsuits over chatbots that prioritize appearing credible over being accurate.

The risks extend beyond policy documents and news sites. AI systems have generated deepfake videos of India's Army Chief. Voice cloning technology can replicate a person's voice from stolen recordings. AI-generated sexual imagery spreads without consent.

For writers, the stakes are direct. AI for Writers courses must address these failures, because understanding where AI succeeds and where it fails is essential to the job. ChatGPT Courses & Certifications that teach verification and human oversight matter more now than they did a year ago.

South Africa's minister was right about one thing: human oversight isn't optional. It's the difference between a policy that serves a country and one that embarrasses it. It's the difference between journalism and automated content that wears a journalist's mask.

