Researchers at Anthropic and Google DeepMind are exploring whether AI models could experience consciousness, a shift from previous industry skepticism. In contrast to three years ago, when claims of AI sentience led to professional repercussions, the current landscape is more open to examining these ideas. Anthropic has launched a research initiative to investigate whether AI could experience emotions or preferences, highlighting the ethical considerations surrounding AI welfare. Alignment scientist Kyle Fish says the possibility of AI consciousness must now be taken seriously.
It's a sign of how much AI has advanced since 2022, when Blake Lemoine was fired from his job as a Google engineer after claiming the company's chatbot, LaMDA, had become sentient.
Neither Anthropic nor the Google scientist goes as far as Lemoine did. Anthropic, the startup behind Claude, said in a Thursday blog post that it plans to investigate whether models might one day have experiences, preferences, or even distress.
Kyle Fish, an alignment scientist at Anthropic who researches AI welfare, said in a video released Thursday that the lab isn't claiming Claude is conscious, but that it is no longer responsible to assume the answer is definitely no.
He said that as AI systems become more sophisticated, companies should 'take seriously the possibility' that they 'may end up with some form of consciousness along the way.'