Anthropic is changing course. Starting this month, your Claude chats may help train its AI unless you tell it not to.
Anthropic expands data use in new policy update
On August 31, Anthropic updated its Consumer Terms and Privacy Policy, confirming that user conversations may now be used to train Claude. This marks a major shift. Until now, Anthropic claimed it didn’t use customer chats for training unless users opted in through feedback tools.
Now, that stance is softening. A new pop-up informs existing users of the policy change and offers a checkbox: leave it ticked, and your chats will help improve Claude. Uncheck it, and your data stays out of the training pipeline.
Users have until September 28 to make their choice
If you’re already using Claude, you’ll have until September 28, 2025, to accept or decline the updated terms. After that, continued use counts as agreement.
New users will see the data consent option during sign-up, and anyone can revisit their choice later:
- Go to Settings > Privacy > Help Improve Claude
- Unchecking the box stops future chats from being used for training
- Deleting a chat ensures it won’t be used for training
- Opt-outs are respected, but metadata is held for 30 days
Anthropic says this change helps it build smarter, more useful models while boosting protections against abuse and fraud.
The new updates don’t apply to all users
The shift applies to Claude Free, Pro, and Max subscribers. But if you’re using Claude for Work or Claude for Education, you’re covered under separate commercial terms. The same goes for developers using Claude through the API or via Amazon Bedrock and Google Cloud’s Vertex AI; those channels are exempt from this policy.
Data collection raises questions about Anthropic’s privacy stance
The company built its brand on privacy, often distancing itself from rivals like OpenAI. This update challenges that narrative. Still, Anthropic insists it won’t share user data with third parties and will filter sensitive content using a mix of automated tools and internal safeguards.
Training data will now be stored for up to five years. However, deleted chats won’t be used at all.
Anthropic wants your chats, but on its terms
For a company that prided itself on privacy, this update is a gamble. Anthropic says it’s about building better AI, but for users, the message is clear: speak carefully, or speak elsewhere.