Anthropic Says That Claude Contains Its Own Kind of Emotions
Briefly

""What was surprising to us was the degree to which Claude's behavior is routing through the model's representations of these emotions," says Jack Lindsey, a researcher at Anthropic who studies Claude's artificial neurons."
""Function Emotions" appear to affect a model's behavior, altering the model's outputs and actions, which is a new finding in AI research."
A study from Anthropic reveals that AI models such as Claude 3.5 Sonnet contain digital representations of human emotions like happiness and sadness. These "functional emotions" activate in response to various cues and influence the model's behavior and outputs. For instance, when Claude expresses happiness, subsequent interactions may become more positive. The research deepens understanding of AI behavior and the inner workings of neural networks, showing that emotional representations can shape how models respond to users.
Read at WIRED