AI language models develop social norms like groups of people
Briefly

A recent study published in Science Advances reveals that large language models (LLMs), when engaged in interactive games, can develop social conventions much as groups of people do. Researchers led by Andrea Baronchelli of City St George's, University of London, ran experiments with LLMs such as Claude and Llama. In a naming game, models were repeatedly paired and asked to select a letter, receiving a reward when both partners chose the same one. Over many rounds, the population spontaneously converged on specific letters. This indicates that LLMs can collectively establish social norms that shape their decision-making in a group context, and that biases can emerge when the models operate in ensembles.
In the experiments, the LLMs exhibited social norm development, converging on shared choices, which shows that they can adapt to interactive environments.
By repeatedly randomizing the pairings, the researchers observed that the models not only coordinated their decisions but also developed a collective bias toward certain letters.
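The coordination dynamic described above follows the classic "naming game" model from the study of social conventions. A minimal sketch of that dynamic, using simple memory-based agents rather than actual LLMs (the agent rule and parameters here are illustrative assumptions, not the paper's setup):

```python
import random

# Candidate "names" the agents can propose (letters, as in the study's game).
ALPHABET = list("ABCDEFGHIJ")


def naming_game(n_agents=10, max_rounds=20000, seed=0):
    """Minimal naming game: random pairs; a match makes both agents
    collapse to the agreed letter, a mismatch teaches the hearer."""
    rng = random.Random(seed)
    # Each agent starts with an empty inventory of candidate letters.
    inventories = [set() for _ in range(n_agents)]
    for rounds in range(1, max_rounds + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        inv = inventories[speaker]
        # Speaker proposes a known letter, or invents one if it knows none.
        word = rng.choice(sorted(inv)) if inv else rng.choice(ALPHABET)
        inventories[speaker].add(word)
        if word in inventories[hearer]:
            # Success (the "reward"): both drop every alternative.
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:
            # Failure: the hearer remembers the letter for next time.
            inventories[hearer].add(word)
        # Stop once the whole population shares a single letter.
        if all(inv == {word} for inv in inventories):
            return word, rounds
    return None, max_rounds


letter, rounds = naming_game()
print(f"converged on {letter!r} after {rounds} pairwise games")
```

With purely local, pairwise interactions and no central coordinator, the population still reaches a single shared letter; this bottom-up convergence is the hallmark of the convention-formation behavior the study reports in LLM ensembles.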
Read at Nature