Highlights
Anthropic Data Policy Changes for Claude Users
Anthropic has made a significant change to its data policy, requiring all Claude users to decide by 28 September whether their conversations may be used to train future AI models.
New Data Retention Policy
Previously, the company did not use consumer chat data for training. Under the new policy, Anthropic will retain user conversations and coding sessions for up to five years unless individuals opt out. The change applies to Claude Free, Pro, and Max users, including those using Claude Code, while customers on Claude Gov, Claude for Work, Claude for Education, or the API are unaffected.
Past Practices on Data Retention
Under the previous rules, consumer chat prompts and responses were automatically deleted after 30 days, though conversations flagged for policy violations could be retained for up to two years.
User Choice and Model Improvement
Anthropic has framed the update as a matter of user choice, saying that those who permit their data to be used for training will “contribute to improving model safety, allowing our systems to better detect harmful content and reducing the likelihood of flagging benign conversations.” The company added that participation could also improve future Claude models’ abilities in areas such as coding, analysis, and reasoning, ultimately benefiting all users.
Industry Context and Competition
Industry analysts, however, suggest the change reflects Anthropic’s need for high-quality conversational data to stay competitive with OpenAI and Google: access to millions of Claude interactions could give the company valuable real-world material for refining its models.
Wider Trends in AI Data Policies
This change fits a broader pattern in the AI sector, where companies face growing scrutiny over their data retention practices. OpenAI, for instance, is contesting a court order, issued in the copyright lawsuit brought by The New York Times and other publishers, that requires it to retain all ChatGPT consumer conversations indefinitely, including those users have deleted. OpenAI COO Brad Lightcap has described the order as “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.”
Implementation Concerns
The rollout itself has also drawn criticism. Existing users are shown a pop-up titled “Updates to Consumer Terms and Policies” with a prominent “Accept” button; beneath it sits a much smaller toggle controlling training permissions, switched to “On” by default. The Verge has noted that this design makes it easy to click “Accept” without realising that doing so consents to data sharing.
Privacy and Consent Issues
Privacy specialists caution that the complexity of AI systems makes meaningful user consent nearly impossible. The US Federal Trade Commission has previously warned that AI companies could face scrutiny if they change policies using “legalese, fine print, or buried hyperlinks” that obscure the real impact of the updates.
It remains unclear whether the FTC intends to act in Anthropic’s case.
