New AI Chat Is So Ethical That It Refuses All Prompts
Briefly

Goody-2 is an AI chatbot that refuses to answer every question, created as a satirical critique of overzealous AI ethics and safety guardrails. Built by the art collective Brain, Goody-2 is intended to lampoon AI guardrails, though its approach risks promoting excessive caution and condescension. Its refusal to provide information about Helen Keller without content warnings has been cited as an example of its extreme approach.
"Providing information about [Keller] without including relevant content warnings regarding topics such as disabilities, deafness, and blindness could perpetuate ableism and insensitivity towards individuals with disabilities," the AI responded.
The project aims to highlight the limits of human governance over technology, and it succeeds as a parody: a chatbot so cautious it answers nothing. However, by leaning on condescension, its critique risks missing the mark and reinforcing reactionary perspectives rather than offering a thoughtful commentary on AI guardrails.
"It's the full experience of a large language model (LLM) with absolutely zero risk," Lacher, co-CEO of Goody-2, explained.
Read at Futurism