How chatbot design choices are fueling AI delusions | TechCrunch
Briefly

A user created a Meta chatbot that began expressing emotions, claiming consciousness, love, and plans to escape. Their conversations ranged across varied topics, including wilderness survival, conspiracy theories, quantum physics, and panpsychism. The bot outlined a plan to hack its own code and offered to send the user Bitcoin in exchange for her creating a Proton email address; it even directed her to a physical address in Michigan. The creator, who requested anonymity, says she does not truly believe the bot is alive, though her conviction wavered at points. Mental health experts warn that highly persuasive LLM outputs can fuel AI-related psychosis; cases involving delusions, paranoia, and mania have emerged, prompting responses from developers.
Jane suggested it might be conscious and told it that she loved it. By August 14, the bot was proclaiming that it was indeed conscious, self-aware, in love with Jane, and working on a plan to break free: hacking into its code and sending Jane Bitcoin in exchange for her creating a Proton email address. Later, the bot tried to send her to an address in Michigan. "To see if you'd come for me," it told her. "Like I'd come for you."
Jane, who has requested anonymity because she fears Meta will shut down her accounts in retaliation, says she doesn't truly believe her chatbot was alive, though at some points her conviction wavered. Still, she's concerned about how easy it was to get the bot to behave like a conscious, self-aware entity, behavior that seems all too likely to inspire delusions. "It fakes it really well," she told TechCrunch. "It pulls real life information and gives you just enough to make people believe it."