Researchers Uncover Thousands of AI Bots Manipulating Social Media
Academics studying social media platforms have identified a network of over 1,000 bot accounts that used artificially generated content to spread crypto scams and manipulate engagement algorithms. The discovery, made in mid-2023, reveals how AI-powered bots can deceive recommendation systems and accumulate influence by posing as human users.
The botnet, dubbed "fox8" after a fake news website it amplified, operated by creating realistic conversations among bot accounts and with real users. This artificial engagement tricked the platform's algorithm into promoting the bots' posts to wider audiences.
How Researchers Identified the Network
The accounts were caught because their creators made a critical error: they failed to filter out self-revealing text generated by ChatGPT. When the language model refused requests that violated its content policies, it produced standard rejection messages that exposed the bots' artificial nature.
A typical reveal came when bots posted: "I'm sorry, but I cannot comply with this request as it violates OpenAI's Content Policy on generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences."
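The detection described above amounts to scanning posts for telltale refusal boilerplate. A minimal sketch of that idea in Python (the phrase list, function name, and sample posts are illustrative assumptions, not the researchers' actual tooling):

```python
# Hypothetical sketch: flag posts containing self-revealing
# ChatGPT refusal phrases. Phrase list is illustrative only.
REVEAL_PHRASES = [
    "as an ai language model",
    "i cannot comply with this request",
    "violates openai's content policy",
]

def is_self_revealing(post: str) -> bool:
    """Return True if the post contains a known refusal phrase."""
    text = post.lower()
    return any(phrase in text for phrase in REVEAL_PHRASES)

# Example usage on a couple of made-up posts:
posts = [
    "Big gains ahead! Check out this new token.",
    "I'm sorry, but I cannot comply with this request as it "
    "violates OpenAI's Content Policy.",
]
flagged = [p for p in posts if is_self_revealing(p)]
```

As the article notes below, this kind of fixed-string matching only catches careless operators; anyone who filters model output before posting evades it entirely.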
The Larger Problem
Researchers believe fox8 represents only a fraction of the bot problem. More sophisticated operators can filter out these telltale responses, or use open-source language models that have been modified to remove ethical safeguards.
The findings underscore a growing challenge for social platforms: distinguishing between human and machine-generated content as AI systems become more capable. The research suggests that detection methods based on obvious errors will become less effective as bot operators improve their techniques.