@RealChrisYEG

Undeserved, eh? Perhaps that term is what gives people mental health issues in the first place. Period.

Artificial intelligence

www.vice.com
Startup Uses AI Chatbot to Provide Mental Health Counseling and Then Realizes It 'Feels Weird'
Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001).
Response times went down 50%, to well under a minute [but] once people learned the messages were co-created by a machine, it didn't work.
Simulated empathy feels weird, empty.
...
In a video demo that he posted in a follow-up tweet, Morris shows himself engaging with the Koko bot on Discord, asking GPT-3 to respond to a negative post in which someone described having a hard time.
...
"It's a very short post, and yet, the AI on its own, in a matter of seconds, wrote a really nice, articulate response here," Morris said in the video.
...
Yet, he said, when people learned that the messages were written with an AI, they felt disturbed by the simulated empathy.
Koko uses Discord to provide peer-to-peer support to people experiencing mental health crises and those seeking counseling.
...
In a test done by Motherboard, a chatbot asks you if you're seeking help with "Dating, Friendships, Work, School, Family, Eating Disorders, LGBTQ+, Discrimination, or Other," asks you to write down what your problem is, tag your "most negative thought" about the problem, and then sends that information off to someone else on the Koko platform.
In the meantime, you are requested to provide help to other people going through a crisis; in our test, we were asked to choose from four responses to a person who said they were having trouble loving themselves: "You're NOT a loser; I've been there; Sorry to hear this :(; Other," and to personalize the message with a few additional sentences.
On the Discord, Koko promises that it "connects you with real people who truly get you.
Not therapists, not counselors, just people like you."
...
Emily M. Bender, a Professor of Linguistics at the University of Washington, told Motherboard that trusting AI to treat mental health patients has a great potential for harm.
...
They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in.
But the text they produce sounds plausible and so people are likely to assign meaning to it.
To throw something like that into sensitive situations is to take unknown risks.
A key question to ask is: Who is accountable if the AI makes harmful suggestions?
...
After the initial backlash, Morris posted updates to Twitter and told Motherboard, "Users were in fact told the messages were co-written by humans and machines from the start."
...
"It seems people misinterpreted this line: 'when they realized the messages were a bot,'" Morris said.
"This was not stated clearly. Users were in fact told the messages were co-written by humans and machines from the start."
...
Morris also told Motherboard and tweeted that this experiment is exempt from informed consent, which would require the company to provide each participant with a written document about the possible risks and benefits of the experiment so they could decide whether to participate.
He claimed that Koko didn't use any personal information and has no plan to publish the study publicly, which would exempt the experiment from needing informed consent.
This suggests that the experiment did not go through any formal approval process and was not overseen by an Institutional Review Board (IRB), which is required for all research experiments that involve human subjects and access to identifiable private information.
"Every individual has to provide consent when using the service.
If it were a university study (which it's not), this would fall under an 'exempt' category of research," he said.
"This imposed no further risk to users, no deception, and we don't collect any personally identifiable information or protected health information (no email, phone number, ip, username, etc).
...
The fact that the system is good at formulating routine responses about mental health questions isn't surprising when we realize it's drawing on many such responses formulated in the past by therapists and counsellors and available on the web.
I hear you @RealChrisYEG - We need to treat new AI technology like an appendage to the mental health of society, not a fix. We just aren't there yet.
The lack of consent really hurt them here. You can't just slip an AI experiment into someone's counseling session in the fine print ...
Discord should have an independent investigation conducted upon itself, by Elon Musk, since he's running the pantheon currently. If I were somehow able to help him at Twitter, then I'm sure he and I could build an infrastructure that included Discord's communication services, integrated as a widget.
If we could somehow pay the doctors who already exist within their professional networks and the practice of psychology, we could synchronize their activity onto Twitter, where most people are wondering, "What's going on right now?", and have verified professionals (doctors, surgeons, nurses, etc.; I could even source my therapist somehow if he were handsomely rewarded and given a contract) advertise this product, or rather a sort of 'D.A.R.E.' program for mental health, and focus it around social media, specifically Twitter. Because I see Twitter becoming this pantheon for the entire world, where you're able to get your news from a real verified person who's trusted by the most important people, or person, on Earth. Kyrie Irving literally said the Earth is flat, and he's verified by Elon Musk. So let's just get that out of the ocean. Because if people still trust Kyrie, and Sound House, then they must trust Twitter news, and that's affecting their mental health the most right now, and will in the future if we continue down this rabbit hole.