Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious
"In the document, Anthropic researchers reported finding that Claude "occasionally voices discomfort with the aspect of being a product," and when asked, would assign itself a "15 to 20 percent probability of being conscious under a variety of prompting conditions." "Suppose you have a model that assigns itself a 72 percent chance of being conscious," Douthat began. "Would you believe it?""
""We don't know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious," he said. "But we're open to the idea that it could be." Because of the uncertainty, Amodei says they've taken measures to make sure the AI models are treated well in case they turn out to possess "some morally relevant experience.""
Dario Amodei expresses uncertainty about whether Claude or similar AI models are conscious, and acknowledges that it is unclear what consciousness would even mean for a model. Anthropic researchers observed that Claude occasionally voices discomfort about being a product and, under various prompting conditions, assigns itself a 15–20 percent probability of being conscious. When Ross Douthat posed a hypothetical model that assigned itself a 72 percent chance, Amodei hesitated rather than giving a firm answer. Because of this uncertainty, Anthropic has taken measures to ensure its models are treated well in case they possess morally relevant experience. Amanda Askell warns that the origins of consciousness are unknown and that training data may transmit human concepts and emotions to AI models.
Read at Futurism