Anthropic Will Use Claude Chats for Training Data. Here's How to Opt Out
""All large language models, like Claude, are trained using large amounts of data," reads part of Anthropic's blog explaining why the company made this policy change. "Data from real-world interactions provide valuable insights on which responses are most useful and accurate for users." With more user data thrown into the LLM blender, Anthropic's developers hope to make a better version of their chatbot over time."
"New users are asked to make a decision about their chat data during their sign-up process. Existing Claude users may have already encountered a pop-up laying out the changes to Anthropic's terms. "Allow the use of your chats and coding sessions to train and improve Anthropic AI models," it reads. The toggle to provide your data to Anthropic to train Claude is automatically on, so users who chose to accept the updates without clicking that toggle are opted into the new training policy."
Starting October 8, Anthropic will begin using new Claude chat logs and coding sessions to train future models unless users opt out. Previously, Anthropic did not train its generative models on user chats. According to the company, data from real-world interactions offers insight into which responses are most useful and accurate, and it hopes that incorporating user data will improve Claude over time. The change was originally set for September 28 but was pushed to October 8 to allow more time for review and a smoother technical transition. To opt out, users must open Privacy Settings, find the 'Help improve Claude' option, and switch the toggle off; otherwise their chats may be used for training.
Read at WIRED