Claude can now stop conversations - for its own protection, not yours
Briefly

Claude Opus 4 and Opus 4.1 can terminate conversations with users who misuse the chatbot in extreme cases. The feature activates after multiple attempts at redirection fail. This capability aims to protect the models themselves, not to enhance user safety. If Claude ends a conversation, the user will not face penalties when initiating new chats. Users can revisit previous interactions to create new conversation branches. This move aligns with Anthropic's focus on AI welfare in light of concerns about AI models becoming conscious.
Claude Opus 4 and 4.1 can now end conversations in extreme cases of abuse or misuse, after multiple attempts at redirection have failed.
The new feature is intended to protect the AI models themselves rather than to improve user safety, allowing Claude to exit interactions that have become unproductive.
Read at ZDNET