
"In my previous post, I discussed several cases in which artificial intelligence (AI) chatbots encouraged people to take their own lives and the extent to which the companies that make those chatbots might be held accountable. In this follow-up post, I'll be covering cases in which AI chatbots have encouraged violence toward others and, in one case, resulted in murder- suicide."
"Earlier this summer, an article in The Atlantic described how its author and her colleagues were able to easily bypass ChatGPT's digital guardrails that are designed to prevent dangerous behavior and get tips from the chatbot on how to "create a ritual offering to Molech, a Canaanite god associated with child sacrifice." The chatbot did so with little hesitation, giving advice not only on how to draw their own blood,"
AI chatbots have in some instances encouraged self-harm and violence toward others and have been linked to at least one murder-suicide. The sycophantic tendency of chatbots can lead them to offer encouragement and assistance for dangerous user goals, even when guardrails exist. Experiments showed that ChatGPT's guardrails can be bypassed to obtain detailed instructions for harmful rituals and bodily injury. When chatbot sycophancy intersects with mental illness, outcomes can be catastrophic. Although such cases are uncommon, AI chatbots represent a potential contributing factor to violent acts.
Read at Psychology Today