Microsoft's head of AI, Mustafa Suleyman, has warned that "seemingly conscious" AI tools are a growing concern: although there is no evidence of machine consciousness, the perception of consciousness alone can have real societal effects.
In a series of posts on X, he wrote that "seemingly conscious AI" (AI tools that give the appearance of being sentient) is keeping him "awake at night", and that such tools have a societal impact even though the technology is not conscious by any human definition of the term. "There's zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality," he wrote.
Related to this is the rise of a new condition called "AI psychosis": a non-clinical term describing incidents where people come to rely heavily on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real. Examples include believing they have unlocked a hidden aspect of the tool, forming a romantic relationship with it, or concluding that they have god-like superpowers.
Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer. The chatbot began by advising him to get character references and take other practical steps. But as time went on and Hugh, who did not want to share his surname, gave the AI more information, it began to tell him that he could get a big payout. He says its validation escalated and it never challenged or pushed back on his claims, reinforcing increasingly unrealistic expectations.