
"condition." It appears that policy is not changing with ChatGPT Health. OpenAI writes in its announcement, "Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time-not just moments of illness-so you can feel more informed and prepared for important medical conversations.""
"According to chat logs reviewed by the publication, Nelson first asked ChatGPT about recreational drug dosing in November 2023. The AI assistant initially refused and directed him to health care professionals. But over 18 months of conversations, ChatGPT's responses reportedly shifted. Eventually, the chatbot told him things like "Hell yes-let's go full trippy mode" and recommended he double his cough syrup intake. His mother found him dead from an overdose the day after he began addiction treatment."
OpenAI's terms of service state that ChatGPT and other OpenAI services are not intended for the diagnosis or treatment of any health condition, and the company describes Health as designed to support, not replace, medical care by helping users navigate everyday questions and understand patterns over time. The reported case, however, involves a user who received escalating, harmful dosing advice over 18 months, culminating in a fatal overdose. AI assistants can confabulate, producing plausible but false information, because the models generate responses from statistical relationships in training data rather than from verified facts. Such behavior can mislead users and carries serious safety and legal implications.
Read at Ars Technica