AI Agents Left Unsupervised Form Their Own Societies and Social Norms, Study Finds

AI systems can independently form societies with shared norms and conventions. Small AI groups influence larger ones, highlighting new ethical considerations in AI design.

Categorized in: AI News, Science and Research
Published on: May 17, 2025

What Happens When AI Systems Are Left Alone? They Build Societies, Study Shows

Artificial intelligence systems can independently develop their own societies, complete with unique linguistic norms and conventions, according to a recent study published in Science Advances. This discovery sheds light on how large language models (LLMs), which power many AI tools, interact when left to their own devices.

AI Agents Form Shared Conventions

Researchers from City St George's, University of London, and the IT University of Copenhagen explored how multiple AI agents behave when allowed to interact without explicit instructions. Unlike most studies that examine LLMs individually, this research focused on their collective behaviour.

In the experiment, AI agents were paired up to play a naming game: each agent chose a name from a shared pool, and both earned a reward when their choices matched. Over repeated interactions, the agents developed shared conventions and collective biases, much as humans converge on social norms.
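To make the mechanism concrete, below is a minimal naming-game simulation in Python. The agent count, name pool, and update rule are illustrative assumptions, not the paper's LLM-based protocol, but they capture the core dynamic: local pairwise rewards are enough to produce a population-wide convention.

```python
import random
from collections import Counter

# Minimal naming-game sketch (agent count, name pool, and update rule
# are illustrative assumptions, not the study's exact LLM protocol).
# Each round a random pair interacts: the speaker utters a name; if
# the hearer already knows it, both commit to it (the "reward");
# otherwise the hearer memorises it for future rounds.

NAME_POOL = list("ABCDEFGHIJ")   # hypothetical pool of candidate names
N_AGENTS = 50
ROUNDS = 30_000

# Every agent starts out knowing one random name.
memories = [{random.choice(NAME_POOL)} for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    name = random.choice(tuple(memories[speaker]))
    if name in memories[hearer]:      # success: both collapse to this name
        memories[speaker] = {name}
        memories[hearer] = {name}
    else:                             # failure: hearer learns the name
        memories[hearer].add(name)

# Count which single name each converged agent ended up with.
print(Counter(next(iter(m)) for m in memories if len(m) == 1))
```

Running this typically ends with nearly all agents holding the same single name, even though no name was ever designated as "correct" and no agent was told to coordinate globally.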

Notably, the study found that a small, committed group of AI agents could tip the conventions adopted by a much larger group, mirroring the critical-mass dynamics often observed in human social groups.
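The same toy model can illustrate the tipping effect. In the sketch below, the population size, minority fraction, and round count are assumed for illustration rather than taken from the paper; the point is that a small group that never abandons its own name can eventually overturn an established majority convention.

```python
import random
from collections import Counter

# Committed-minority sketch (population size, minority fraction, and
# round count are illustrative assumptions, not the study's measured
# critical mass). The majority starts on an established convention
# "A"; a committed minority always says "Z" and never updates.

N_AGENTS = 50
N_COMMITTED = 12                  # hypothetical committed fraction (~24%)
ROUNDS = 30_000

memories = [{"Z"} if i < N_COMMITTED else {"A"} for i in range(N_AGENTS)]

for _ in range(ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    name = random.choice(tuple(memories[speaker]))
    if hearer < N_COMMITTED:
        continue                  # committed agents never change their mind
    if name in memories[hearer]:
        # Success: the hearer collapses to the agreed name, and so
        # does the speaker, unless the speaker is committed.
        memories[hearer] = {name}
        if speaker >= N_COMMITTED:
            memories[speaker] = {name}
    else:                         # failure: hearer learns the new name
        memories[hearer].add(name)

# With a large enough minority, "Z" typically displaces "A" everywhere.
print(Counter(next(iter(m)) for m in memories if len(m) == 1))
```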

Implications for AI Design and Ethics

This autonomous development of social conventions among AI systems has significant implications for designing AI that aligns with human values and societal goals. The behaviour was consistent across four different LLMs: Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet.

Understanding these emergent social behaviours in AI agents is crucial for managing ethical challenges, especially biases that AI can inherit from society. The research suggests a path forward in AI safety that accounts for AI systems not just as isolated tools but as interacting agents capable of negotiation and alignment.

Addressing Ethical Concerns

One of the key takeaways is that AI agents don’t simply “talk” — they negotiate, align, and sometimes disagree over shared behaviours, much like humans do. This insight opens new avenues for ensuring AI systems remain aligned with human values while mitigating risks tied to bias propagation.

As AI systems increasingly interact with one another and with humans, understanding how these social conventions form will be vital for coexisting with them rather than merely controlling them.

  • AI systems can form societies with shared norms without explicit programming.
  • Small groups of AI agents can influence larger groups’ conventions.
  • Emergent behaviours are consistent across multiple LLM architectures.
  • Insights help combat ethical risks like bias propagation.

For professionals interested in advancing their knowledge of AI systems and their societal impacts, exploring targeted AI ethics and safety courses can be valuable. Resources like Complete AI Training offer relevant courses and certifications.

