AI bots posing as humans spread crypto scams and manipulate social media algorithms, researchers find

Over 1,000 AI-powered bot accounts were caught spreading crypto scams on social media after they accidentally posted ChatGPT's refusal messages. Researchers say the "fox8" network is likely a small slice of a far larger problem.


Researchers Uncover Network of More Than 1,000 AI Bots Manipulating Social Media

Academics studying social media platforms have identified a network of over 1,000 bot accounts that used artificially generated content to spread crypto scams and manipulate engagement algorithms. The discovery, made in mid-2023, reveals how AI-powered bots can deceive recommendation systems and accumulate influence by posing as human users.

The botnet, dubbed "fox8" after a fake news website it amplified, operated by generating realistic-looking conversations among its own accounts and with real users. This artificial engagement tricked the platform's recommendation algorithm into promoting the bots' posts to wider audiences.

How Researchers Identified the Network

The accounts were caught because their creators made a critical error: they failed to filter out self-revealing text generated by ChatGPT. When the language model refused requests that violated its content policies, it produced standard rejection messages that exposed the bots' artificial nature.

A typical reveal came when bots posted: "I'm sorry, but I cannot comply with this request as it violates OpenAI's Content Policy on generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences."
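The detection hinged on simple text matching: searching post text for phrases that only a leaked ChatGPT refusal would contain. Below is a minimal Python sketch of that idea; the phrase list, the flag_suspected_bots helper, and the (account_id, text) input format are assumptions made for illustration, not the researchers' actual code or query.

```python
import re

# Telltale phrases leaked from ChatGPT's refusal messages. These exact
# patterns are illustrative assumptions, not the study's actual query terms.
REFUSAL_PATTERNS = [
    re.compile(r"as an ai language model", re.IGNORECASE),
    re.compile(r"i cannot comply with this request", re.IGNORECASE),
    re.compile(r"violates openai'?s content policy", re.IGNORECASE),
]

def flag_suspected_bots(posts):
    """Return the set of account IDs whose posts contain a refusal phrase.

    `posts` is an iterable of (account_id, text) pairs -- a hypothetical
    input format standing in for whatever a platform's API actually returns.
    """
    suspects = set()
    for account_id, text in posts:
        if any(pattern.search(text) for pattern in REFUSAL_PATTERNS):
            suspects.add(account_id)
    return suspects

# Toy usage example:
sample_posts = [
    ("user_001", "Big gains ahead! Don't miss this coin."),
    ("user_002", "I'm sorry, but I cannot comply with this request as it "
                 "violates OpenAI's Content Policy."),
]
print(flag_suspected_bots(sample_posts))  # {'user_002'}
```

As the researchers themselves caution, this kind of string matching only catches careless operators; a botnet that filters such phrases out before posting would pass it untouched.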

The Larger Problem

Researchers believe fox8 represents only a fraction of the bot problem. More sophisticated operators can filter out these telltale responses, or they can use open-source language models that have been modified to strip out their safety safeguards.

The findings underscore a growing challenge for social platforms: distinguishing between human and machine-generated content as AI systems become more capable. The research suggests that detection methods based on obvious errors will become less effective as bot operators improve their techniques.

