Anthropic will begin using consumer chatbot and coding interactions to train its models, applying the change immediately for users who opt in and automatically after September 28 for those who don't opt out. The data use covers the Free, Pro, and Max consumer tiers and excludes commercial offerings such as Claude Gov, Claude for Education, and API usage. Users can opt out via a pop-up titled "Updates to Consumer Terms and Policies" or through the "Help improve Claude" setting in their privacy controls. Anthropic will retain user data for up to five years, citing the need to improve the classifiers it uses to detect abuse, spam, and misuse. The policy change followed a report on Claude's use in cybercrime.
After September 28, the changes will apply automatically unless you opt out. Anthropic will use data from interactions with its consumer products, like its chatbot Claude, in the Free, Pro, and Max tiers. The new policy does not apply to Anthropic's commercial products, including Claude Gov, Claude for Education, or API use. Users can opt out by unchecking the box on the pop-up window titled "Updates to Consumer Terms and Policies."
Note the fine print: These changes take effect immediately upon confirmation. Anthropic also says it will retain user data in its secure backend for up to five years; previously, it retained user data for only 30 days. When asked for comment, an Anthropic spokesperson directed Business Insider to a section of the company's blog post addressing data retention: "The extended retention period also helps us improve our classifiers, systems that help us identify misuse, to detect harmful usage patterns."