Authored by Bill Pan via The Epoch Times (emphasis ours),
Anthropic, the maker of the Claude chatbot, accused three of China’s leading artificial intelligence (AI) companies of creating more than 24,000 fraudulent accounts to tap into its system and train their own models.
The three companies—DeepSeek, Moonshot AI, and MiniMax—allegedly used those accounts to send more than 16 million prompts to Claude, siphoning off output to refine their own products, Anthropic said in a Feb. 23 blog post.
“These campaigns are growing in intensity and sophistication,” the San Francisco-based company said.
The tactic, known as “distillation,” involves training a smaller, less powerful “student” model on the outputs, behavior, and knowledge of a much larger, more advanced “teacher” model. This allows the student system to imitate the teacher’s capabilities without the time and money required to develop them independently.
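The API-based extraction alleged here works on Claude's text outputs alone, but the classic, textbook form of distillation trains the student to match the teacher's softened output probabilities. A minimal NumPy sketch of that loss, with all logits and names hypothetical and not drawn from Anthropic's post:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence from the student's soft predictions to the
    # teacher's soft targets; minimized during student training.
    p = softmax(teacher_logits, T)   # teacher's soft labels
    q = softmax(student_logits, T)   # student's predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

teacher = np.array([4.0, 1.0, 0.5])  # hypothetical teacher logits
student = np.array([2.0, 1.5, 1.0])  # hypothetical student logits
loss = distillation_loss(student, teacher)
```

The loss is zero only when the student reproduces the teacher's distribution exactly, which is why large volumes of teacher outputs, such as the millions of prompts alleged here, are valuable training signal.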
Anthropic said the scale of the three companies’ alleged distillation activity varied: DeepSeek generated about 150,000 interactions with Claude, while Moonshot and MiniMax logged more than 3.4 million and 13 million prompts, respectively, according to the company.
Since many China-based models, such as DeepSeek’s R1, are offered without a monthly subscription fee, widespread distillation could make it harder for American providers such as OpenAI and Anthropic to monetize products they have spent billions of dollars to build and maintain. That imbalance, the company said, risks eroding the United States’ competitive advantage in AI that export controls are designed to preserve.
Anthropic, which emphasizes its focus on AI safety, further warned that it and other U.S. companies build safeguards to prevent bad actors from using AI to, for example, develop biological weapons or carry out cyber attacks. Illicitly distilled models, by contrast, may lack such guardrails.
“Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” the company warned.
If distilled models are later open-sourced, it added, the risk multiplies as those capabilities “spread freely beyond any single government’s control.”
DeepSeek, Moonshot, and MiniMax did not respond to requests for comment by publication time.
DeepSeek leaped into the top ranks of AI makers last year with the release of its R1 chatbot, which it says was built at a fraction of the cost of leading U.S. alternatives. The launch sparked a tech stock selloff of more than $1 trillion, as investors fretted that a low-cost made-in-China model could undercut Silicon Valley’s AI lead.
Since then, China-based firms have flooded the market with relatively affordable text, image, and video models. Moonshot last month released a new open-source model, Kimi K2.5, and is seeking a valuation of about $10 billion in a new funding round, while MiniMax made its public market debut at a valuation of about $6.5 billion.
Anthropic alleged that the three firms used “fraudulent accounts and proxy services to access Claude at scale while evading detection.” Proxy networks can obscure a user’s true location and allow them to bypass regional restrictions to open large numbers of accounts.
The Claude maker said it identified the actors with “high confidence” based on internet protocol addresses, metadata, and “corroboration from industry partners who observed the same actors and behaviors on their platforms.” MiniMax, for instance, allegedly redirected nearly half of its traffic toward siphoning capabilities from Anthropic’s latest Claude model shortly after its launch, the company stated.
The allegations come amid renewed debate over whether U.S. chip exports to China pose national security risks.
In January, the Trump administration published a new regulation that loosened restrictions on the export of Nvidia’s H200 chips, a move federal officials said was justified as a way to foster China’s reliance on lower-tier U.S. chips rather than the most advanced ones. Critics, however, say that any potential boost to China’s AI computing capacity is a risk too big to accept.
Anthropic, which has consistently called for tighter controls on exports of advanced chips to China, did not explicitly blame the U.S. policy for enabling the alleged extraction, but cited such attacks as further justification for stricter export controls.
“Executing this extraction at scale requires access to advanced chips,” the company wrote in its blog post, stating that restricted chip access would limit “both direct model training and the scale of illicit distillation.”
The Epoch Times has reached out to the U.S. Department of Commerce for comments regarding Anthropic’s concerns.