Startup Uses AI Chatbot to Provide Mental Health Counseling and Then Realizes It 'Feels Weird'
Briefly

A mental health nonprofit is under fire for using an AI chatbot as an "experiment" to provide support to people seeking counseling, and for testing the technology on real people.
...
Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001).
Response times went down 50%, to well under a minute [but] once people learned the messages were co-created by a machine, it didn't work.
Simulated empathy feels weird, empty.
...
After the initial backlash, Morris posted updates to Twitter and told Motherboard that users were informed from the start that the messages were co-written by humans and machines.
...
"This was not stated clearly. Users were in fact told the messages were co-written by humans and machines from the start."
...
"This imposed no further risk to users, no deception, and we don't collect any personally identifiable information or protected health information (no email, phone number, ip, username, etc).
Read at www.vice.com