ShiftDelete.Net Global

Anthropic begins using chats to train Claude unless you opt out


Anthropic is changing course. Starting this month, your Claude chats may help train its AI unless you tell it not to.

On August 31, Anthropic updated its Consumer Terms and Privacy Policy, confirming that user conversations may now be used to train Claude. This marks a major shift. Until now, Anthropic claimed it didn’t use customer chats for training unless users opted in through feedback tools.

Now, that stance is softening. A new pop-up informs existing users of the policy change and offers a checkbox: leave it ticked, and your chats will help improve Claude. Uncheck it, and your data stays out of the training pipeline.


If you’re already using Claude, you’ll have until September 28, 2025, to accept or decline the updated terms. After that, continued use counts as agreement.

New users will see the data consent option during sign-up, and anyone can revisit their choice later in settings.

Anthropic says this change helps it build smarter, more useful models while boosting protections against abuse and fraud.

The shift applies to Claude Free, Pro, and Max subscribers. But if you’re using Claude for Work or Claude for Education, you’re covered under separate commercial terms. The same goes for developers using Claude through the API, or via Amazon Bedrock and Google Cloud’s Vertex AI platforms; those channels are exempt from this policy.

The company built its brand on privacy, often distancing itself from rivals like OpenAI. This update challenges that narrative. Still, Anthropic insists it won’t share user data with third parties and will filter sensitive content using a mix of automated tools and internal safeguards.

Training data will now be retained for up to five years. Chats you delete, however, won’t be used for training at all.

For a company that prided itself on privacy, this update is a gamble. Anthropic says it’s about building better AI, but for users, the message is clear: speak carefully, or speak elsewhere.
