
"In recent weeks, OpenAI has faced seven lawsuits alleging that ChatGPT contributed to suicides or mental health breakdowns. In a recent conversation at the Innovation@Brown Showcase, Brown University's Ellie Pavlick, director of a new institute dedicated to exploring AI and mental health, and Soraya Darabi of VC firm TMV, an early investor in mental health AI startups, discussed the controversial relationship between AI and mental health. Pavlick and Darabi weigh the pros and cons of applying AI to emotional well-being, from chatbot therapy to AI friends and romantic partners."
"This is an abridged transcript of an interview fromA recent study showed that one of the major uses of ChatGPT for users is mental health, which makes a lot of people uneasy. Ellie, I want to start with you, the new institute that you direct known as ARIA, which stands for AI Research Institute on Interaction for AI Assistance. It's a consortium of experts from a bunch of universities backed by $20 million in National Science Foundation funding. So what is the goal of ARIA? What are you hoping it delivers? Why is it here?"
AI chatbots and large language models are being used by many people for mental health support, prompting legal, ethical, and safety concerns after several lawsuits alleged ChatGPT contributed to suicides and mental health breakdowns. A new institute, ARIA (AI Research Institute on Interaction for AI Assistance), funded with $20 million from the National Science Foundation, brings together university experts to research AI interactions for assistance. Investors and researchers debate benefits such as scalable conversational support against risks including inadequate clinical safeguards, misuse, and over-reliance. Startups like Slingshot AI are already shipping apps such as Ash that offer mental health support, drawing scrutiny over deployment ahead of robust evidence. Tensions remain between rapid commercialization and the need for rigorous study and oversight.
Read at Fast Company