Anthropic now requires all Claude users to choose by September 28 whether their conversations and coding sessions may be used to train AI models. Consumer data will be eligible for training unless users opt out, and data retention can extend up to five years for those who don't. Previously, consumer prompts and outputs were deleted within 30 days unless legal or policy reasons required longer retention, with flagged inputs kept up to two years. The policy affects Claude Free, Pro, Max, and Claude Code, while enterprise and API customers remain unaffected. Anthropic frames the change as improving safety, coding, and reasoning abilities.
Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we've formed some theories of our own. But first, what's changing: previously, Anthropic didn't use consumer chat data for model training.
Now, the company wants to train its AI systems on user conversations and coding sessions, and it says it's extending data retention to five years for those who don't opt out. That is a massive update. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days unless legal or policy reasons required longer retention, with flagged inputs kept for up to two years.
By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, mirroring how OpenAI shields its enterprise customers from data-training policies. So why is this happening? In its post about the update, Anthropic frames the changes around user choice, saying the data will help improve model safety and sharpen abilities like coding and reasoning.