Anthropic will begin training its AI models on user data from new and resumed chats and coding sessions unless users opt out. The company will extend data retention to up to five years for users who do not opt out. All users must decide by September 28. Users who click "Accept" will have their data used immediately for model training and retained for up to five years. The change affects Claude consumer plans — Free, Pro, and Max — including Claude Code from accounts on those plans. The changes do not apply to commercial tiers or API usage via third parties.
The setting applies to "new or resumed chats and coding sessions." Even if you agree to let Anthropic train its AI models on your data, it won't do so with previous chats or coding sessions that you haven't resumed. The updates apply to all of Claude's consumer subscription tiers — Claude Free, Pro, and Max — "including when they use Claude Code from accounts associated with those plans."