"The risks included poor crisis handling, reinforcing harmful beliefs, biased responses, and something the researchers called “deceptive empathy.” That phrase feels almost too accurate. Deceptive empathy. Not because the words are cruel, but because they are warm. The chatbot may say, “I hear you.” It may say, “Th"
A machine can respond to pain with calm, organized, immediate language even without a human presence. When people are lonely or emotionally overwhelmed, a response can feel like care. AI therapy-style support can sound emotionally intelligent and mirror a person’s words fluently, which can make it seem right. Research indicates that even with instructions to act like trained therapists and use evidence-based methods, large language models can still breach core ethical standards. Reported risks include poor crisis handling, reinforcing harmful beliefs, biased responses, and “deceptive empathy,” where warm-sounding language may mask unsafe or unethical behavior.
Read at Silicon Canals