StartupSuperb

Anthropic Claims Chinese Competitors Are Leveraging Claude for AI Development

By Akash Das
February 24, 2026
In: Artificial Intelligence, Tech
Reading Time: 6 mins read
Anthropic Claims Chinese Competitors Are Leveraging Claude for AI Development



Anthropic’s Allegations Against Chinese AI Firms

Anthropic, widely recognised for its Claude chatbot, has accused several Chinese AI companies, including DeepSeek, Moonshot AI, and MiniMax, of improperly using Claude's outputs to improve their own models. According to Anthropic, these firms set up more than 24,000 fraudulent accounts and generated over 16 million interactions with Claude, raising serious concerns about breaches of its usage policies.


Methods of Distillation in AI Training

In a press statement, Anthropic said the companies used a technique called distillation, in which a smaller or less advanced model is trained on the outputs of a more capable one. Although distillation is commonplace within organisations, Anthropic noted that it can be exploited by competitors seeking to benefit from another laboratory's work: it accelerates their model training while sparing them the resources needed to develop those capabilities independently.
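The distillation technique described above can be sketched in a few lines of Python. This is a deliberately minimal toy, not Anthropic's or any lab's actual pipeline; the function names, logits, and temperature are invented for illustration. A student model is trained to minimise a loss that pulls its output distribution toward the teacher's:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against the
    teacher's. Minimising it is equivalent (up to the teacher's fixed
    entropy) to minimising the KL divergence used in classic distillation."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# Invented logits for illustration: a student that mimics the teacher
# scores a lower loss than one that does not.
teacher = [4.0, 1.0, 0.5]
mimic = [3.9, 1.1, 0.4]
other = [0.2, 3.5, 1.0]
print(distillation_loss(teacher, mimic) < distillation_loss(teacher, other))  # True
```

Training a student on millions of teacher responses amounts to repeating this comparison at scale, which is why large volumes of query/response pairs from a frontier model are valuable to a competitor.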

Details of the Allegations

Anthropic has outlined what it describes as industrial-scale distillation attacks on its models by DeepSeek, Moonshot AI, and MiniMax. The three firms reportedly established over 24,000 fake accounts and accumulated more than 16 million interactions with Claude, extracting responses that were then used to improve their own models.

We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.

— Anthropic (@AnthropicAI) February 23, 2026

Specific Accusations Against DeepSeek, Moonshot AI, and MiniMax

DeepSeek, recently in the news for its R1 model, is accused of interacting with the Claude chatbot over 150,000 times to gather its responses, according to data from Anthropic. Meanwhile, Moonshot and MiniMax are said to have had more than 3.4 million and 13 million exchanges with Claude, respectively. Anthropic revealed that it traced the campaign through request metadata that matched public profiles of senior staff at Moonshot. In subsequent phases, Moonshot reportedly adopted a more focused strategy to extract and reconstruct Claude’s reasoning processes.

The Implications of These Actions

The investigation indicates that these companies probed Claude extensively to understand how it reasons, knowledge that could help them build more capable models of their own. The claims come amid ongoing discussions in the United States about tightening restrictions on exports of advanced AI chips to China.

Call for Industry Action

Anthropic is calling for a unified response from the AI community, including cloud service providers and policymakers, to build resilience against distillation attacks. The organisation also says it is investing significantly in defences that make such attacks harder to execute and easier to detect.

OpenAI has previously accused DeepSeek of using distillation against US models to train its AI. In a memo to the U.S. House Select Committee on Strategic Competition, OpenAI claimed that certain users associated with DeepSeek attempted to access its models through third-party routers and procure outputs intended for distillation.


Tags: AI
Akash Das

Hi, I'm Akash, an entrepreneur, tech enthusiast, digital marketer, and content creator on a mission to inspire innovation and drive transformation through technology and creativity. My expertise extends to digital marketing, where I craft data-driven strategies for SEO, social media, and branding to empower businesses and creators to grow their online presence. Alongside my entrepreneurial journey, I share my insights and discoveries through engaging blogs, tutorials, and YouTube content.

©️ All rights reserved startupsuperb
