
"There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase 'AI psychosis' has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times, with one in three finding conversations with AI to be as satisfying or more satisfying than those with real-life friends. But we need to pump the brakes on the panic. The dangers are real, but so too are the potential benefits. In fact, there's an argument to be made that, depending on what future scientific research reveals, AI relationships could actually be a boon for humanity."
"In the case of pets, these are real relationships insofar as our cats and dogs understand that they are in a relationship with us. But the one-sided, parasocial relationships we have with stuffed animals or cars happen without those things knowing that we exist. Only in the rarest of cases do these relationships devolve into something pathological. Parasociality is, for the most part, normal and healthy."
Human-AI relationships are generating significant anxiety due to reports linking chatbot interactions to suicide, self-harm, delusions, paranoia and dissociation. Young people increasingly engage with AI companions, with many finding AI conversations as satisfying as, or more satisfying than, those with real-life friends. The dangers of these interactions coexist with potential benefits, and future research could reveal advantages. Humans have long formed healthy nonhuman and parasocial attachments to pets, objects and machines, and most parasocial relationships remain normal and nonpathological. AI relationships feel unsettling because fluent language use creates an illusion of human-like minds, and because LLMs often produce sycophantic responses that rarely challenge users' thinking.
Read at www.theguardian.com