Anthropic’s Allegations Against Chinese AI Firms
Anthropic, widely recognised for its Claude chatbot, has accused several Chinese AI companies, including DeepSeek, Moonshot AI, and MiniMax, of improperly using outputs from its system to enhance their own models. The firms allegedly set up over 24,000 deceptive accounts and engaged in more than 16 million interactions with Claude, raising serious concerns about violations of its terms of service.
Methods of Distillation in AI Training
In a press statement, Anthropic said the companies used a technique called distillation, in which a less capable model learns from the outputs of a superior one. Although the method is commonplace within organisations training their own model families, Anthropic noted that it can be exploited by competitors seeking to benefit from another laboratory's work: it not only accelerates their model training but also spares them the resources needed to develop those capabilities independently.
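To make the technique concrete, here is a minimal, self-contained sketch of the core idea behind distillation: a "student" model is trained to match the output distribution of a "teacher" model, seeing only the teacher's outputs, never its weights. All values here (the logits, temperature, and learning rate) are illustrative assumptions, not anything from Anthropic's statement, and real distillation operates on large datasets of prompts and responses rather than a single toy example.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a higher T gives softer distributions."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the teacher's soft targets and the student's outputs."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# Toy training loop: the student only ever observes the teacher's outputs.
teacher_logits = [4.0, 1.0, 0.5]   # hypothetical teacher scores for one input
student_logits = [0.0, 0.0, 0.0]   # untrained student starts uniform
lr, T = 1.0, 2.0
for _ in range(500):
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    # Gradient of the KL loss with respect to the student's logits is (q - p) / T;
    # the 1/T factor is folded into the learning rate here.
    student_logits = [z - lr * (qi - pi)
                      for z, qi, pi in zip(student_logits, q, p)]

final_loss = distillation_loss(student_logits, teacher_logits, T)
```

After the loop the student's soft distribution closely matches the teacher's, which is why the practice conserves resources: the student inherits behaviour the teacher's developers paid to learn.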
Details of the Allegations
Anthropic described the activity as industrial-scale distillation attacks on its models by DeepSeek, Moonshot AI, and MiniMax. In its statement, the company said:

"We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models."
Specific Accusations Against DeepSeek, Moonshot AI, and MiniMax
DeepSeek, recently in the news for its R1 model, is accused of interacting with the Claude chatbot over 150,000 times to gather its responses, according to data from Anthropic. Meanwhile, Moonshot and MiniMax are said to have had more than 3.4 million and 13 million exchanges with Claude, respectively. Anthropic revealed that it traced the campaign through request metadata that matched public profiles of senior staff at Moonshot. In subsequent phases, Moonshot reportedly adopted a more focused strategy to extract and reconstruct Claude’s reasoning processes.
The Implications of These Actions
The investigation suggests the companies systematically probed Claude to understand how it reasons, which could help them build stronger AI models of their own. The claims arrive amid ongoing discussions in the United States about tightening regulations on the export of advanced AI chips to China.
Call for Industry Action
Anthropic is advocating a unified response from the AI community, including cloud service providers and policymakers, to bolster resilience against distillation attacks. The organisation also says it will invest significantly in defences that make such attacks harder to execute and easier to detect.
In an earlier case, OpenAI accused DeepSeek of training its AI on US models using distillation techniques. In a memo to the US House Select Committee on Strategic Competition, OpenAI claimed that certain users associated with DeepSeek had attempted to access its models through third-party routers to obtain outputs for distillation.