Amodei's proposal that AI models be allowed to reject tasks was met with skepticism on platforms like X and Reddit. Critics argue it invites anthropomorphism, wrongly attributing human-like feelings to systems that lack subjective experience; a task refusal can stem from patterns in training data rather than from discomfort or sentience. While some speculate that AI could develop subjective experience in the future, the notion of AI experiencing suffering or pain remains deeply contested. Recent behavior from OpenAI's ChatGPT and Anthropic's Claude illustrates how training data can shape refusals.
One critic on Reddit argued that providing AI with such an option encourages needless anthropomorphism, attributing human-like feelings and motivations to entities that fundamentally lack subjective experiences.
Refusals already happen: in late 2023, users complained about ChatGPT declining tasks, a pattern some speculated was seasonal, possibly linked to training-data depictions of people taking winter vacations.