Anthropic has revealed an unprecedented AI espionage operation. According to the company behind Claude, three Chinese AI labs — DeepSeek, Moonshot AI, and MiniMax — ran industrial-scale "distillation" campaigns, using over 24,000 fraudulent accounts and generating more than 16 million exchanges with Claude to copy its capabilities and improve their own models.
What model distillation is and why it matters
Model distillation is a technique where an advanced AI model (in this case Claude) is used to generate outputs that are then used as training data for another model. Think of it as copying answers from an exam, but at industrial scale.
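Mechanically, the pipeline is simple. Here is a minimal sketch of the data-collection half of distillation; the query_teacher stub and the sample prompts are hypothetical stand-ins for a real model API, which an operation at the scale Anthropic describes would have called millions of times.

```python
# Minimal sketch of distillation data collection, assuming a hypothetical
# query_teacher() stub in place of a real model API client.
import json

def query_teacher(prompt: str) -> str:
    # Stand-in for a call to an advanced "teacher" model.
    # In a real pipeline, this would return the teacher's completion.
    return f"[teacher completion for: {prompt}]"

prompts = [
    "Explain quicksort in two sentences.",
    "Translate 'good morning' into French.",
    "Summarize the causes of World War I.",
]

# Each (prompt, completion) pair becomes one supervised training example
# for the smaller "student" model.
with open("distillation_data.jsonl", "w") as f:
    for prompt in prompts:
        example = {"prompt": prompt, "completion": query_teacher(prompt)}
        f.write(json.dumps(example) + "\n")
```

Fine-tuning a student model on enough of these pairs lets it imitate the teacher's behavior without ever seeing the teacher's weights or training data.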
What makes this particularly serious is the sophistication of the operation: the attackers created networks called "hydra clusters" — massive pools of accounts that spread traffic across Anthropic's API and third-party cloud services. A single proxy setup controlled over 20,000 fake accounts simultaneously, mixing extraction traffic with ordinary requests to avoid detection.
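On the defense side, one plausible, purely illustrative heuristic is to group traffic by shared network fingerprints and flag implausibly large account clusters. The field names and threshold below are assumptions made for the sketch, not Anthropic's actual detection logic.

```python
# Toy sketch: surface a possible "hydra cluster" by counting how many
# distinct accounts share the same network fingerprint.
from collections import defaultdict

requests = [
    {"account": "acct-001", "fingerprint": "proxy-A/tls-x9"},
    {"account": "acct-002", "fingerprint": "proxy-A/tls-x9"},
    {"account": "acct-003", "fingerprint": "proxy-A/tls-x9"},
    {"account": "acct-900", "fingerprint": "home-isp/tls-q2"},
]

accounts_by_fingerprint = defaultdict(set)
for r in requests:
    accounts_by_fingerprint[r["fingerprint"]].add(r["account"])

CLUSTER_THRESHOLD = 3  # illustrative; real systems combine far richer signals
for fingerprint, accounts in accounts_by_fingerprint.items():
    if len(accounts) >= CLUSTER_THRESHOLD:
        print(f"possible cluster: {len(accounts)} accounts share {fingerprint}")
```

A heuristic this crude is easy to evade, which is exactly why mixing extraction traffic with ordinary requests, as the attackers reportedly did, makes detection hard.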
The numbers by company
Anthropic detailed each company's involvement:
- MiniMax: the most aggressive with over 13 million exchanges
- Moonshot AI: executed over 3.4 million queries
- DeepSeek: conducted over 150,000 interactions (the smallest volume, though the best-known name of the three)
In total, the three companies generated more than 16 million exchanges specifically designed to extract Claude's knowledge.
The context: the US-China tech war
This accusation comes at a moment of peak tension between the United States and China in the AI sector. The US government has already imposed restrictions on advanced chip exports to China, and these revelations could prompt even tighter controls.
Anthropic is not the first to flag this: OpenAI has also identified similar distillation campaigns by Chinese companies against GPT-4. The pattern suggests a coordinated strategy to close the technology gap by using Western models as shortcuts.
How this affects you as a user
If you use Claude, ChatGPT, or other AI assistants, this situation has direct implications:
- Data security: while Anthropic says user data was not compromised, the fraudulent accounts did access the same service
- Potential restrictions: AI companies may implement stricter controls that affect the user experience
- Pricing: the cost of fighting this abuse could be passed on to users through higher subscription prices
What comes next
Anthropic has strengthened its detection systems and is working with authorities. The question now is whether there will be real legal consequences, or whether these accusations will remain just another chapter in the tech rivalry between superpowers. What's clear is that the AI race is no longer just about who has the best model, but about who best protects their intellectual property.