Anthropic Accuses DeepSeek, Moonshot and MiniMax of Industrial-Scale ‘Distillation’ Attacks on Claude
Anthropic identified industrial-scale campaigns by three AI laboratories that illicitly extracted capabilities from Claude, a development the company says has immediate implications for national security and U.S. export controls. The disclosure centers on a technique called distillation and on what Anthropic characterizes as a sustained, high-volume effort that bypassed account and regional restrictions.
Anthropic identifies DeepSeek, Moonshot and MiniMax
The company tied the campaigns to DeepSeek, Moonshot and MiniMax, saying each lab followed a similar playbook to acquire Claude's capabilities. The campaigns used fraudulent accounts and proxy services to access Claude at scale while evading detection, and Anthropic attributed each campaign to a specific lab with high confidence through IP address correlation, request metadata, infrastructure indicators and, in some cases, corroboration from industry partners, though it has not detailed that corroboration.
Scope: more than 16 million exchanges and about 24,000 fraudulent accounts
Anthropic reported that the three labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, conduct that violated the service's terms and regional access restrictions. The company says the volume, the structure and the focused nature of the prompts diverged from normal usage patterns, reflecting deliberate capability extraction rather than legitimate research or customer-driven activity.
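Anthropic has not published its detection criteria. Purely as a hypothetical illustration of how volume and prompt-structure signals could separate extraction traffic from ordinary use, a toy heuristic might look like the following; every threshold and name here is invented for the example.

```python
from collections import Counter

# Hypothetical illustration only: Anthropic has not disclosed its detection
# logic. This toy heuristic flags accounts whose request volume is very high
# and whose prompts cluster around a few repeated templates, the kind of
# signature the disclosure attributes to deliberate capability extraction.

def flag_suspicious(accounts, volume_threshold=10_000, diversity_threshold=0.05):
    flagged = []
    for account_id, prompts in accounts.items():
        volume = len(prompts)
        # Ratio of distinct prompt "shapes" (first five tokens) to total volume;
        # templated extraction traffic tends to have very low diversity.
        shapes = Counter(tuple(p.split()[:5]) for p in prompts)
        diversity = len(shapes) / max(volume, 1)
        if volume > volume_threshold and diversity < diversity_threshold:
            flagged.append(account_id)
    return flagged

accounts = {
    "acct_a": ["Explain the following step by step:"] * 12_000,  # templated, high volume
    "acct_b": ["What's the weather like?", "Write a short poem"],  # ordinary use
}
print(flag_suspicious(accounts))  # -> ['acct_a']
```

Real detection at Anthropic's scale would almost certainly combine many more signals, including the IP and infrastructure indicators described above, but the sketch captures the basic intuition: extraction traffic looks repetitive and voluminous in ways organic usage does not.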
Distillation technique and missing safeguards
Distillation, as Anthropic describes it, is a training method that uses a stronger model's outputs to train a smaller model. The process is legitimate when labs distill their own frontier models to produce cheaper or smaller versions for customers. The problem identified here is that distillation can also be used illicitly: competitors can reproduce powerful capabilities in a fraction of the time and cost it would take to develop them independently. Anthropic warns that models produced through illicit distillation are unlikely to retain the safeguards present in the original systems, creating a pathway for dangerous capabilities to proliferate with protections stripped away.
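To make the mechanism concrete, the sketch below shows the classic logit-matching form of distillation in PyTorch, with toy models and random data standing in for real systems. It illustrates the general technique only; distilling a closed model through an API, as alleged here, would instead fine-tune a student on the teacher's text outputs, since logits are not exposed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy illustration of distillation: a larger "teacher" network and a smaller
# "student" network on random data. Model sizes, data and hyperparameters are
# all placeholders chosen for brevity.

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution into richer targets

for step in range(200):
    x = torch.randn(64, 32)              # stand-in for real inputs/prompts
    with torch.no_grad():
        t_logits = teacher(x)            # the capability being copied
    s_logits = student(x)
    # Train the student to match the teacher's softened output distribution.
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point the example surfaces is that the student never needs the teacher's weights or training data; a sufficiently large sample of the teacher's outputs is enough, which is why high-volume access is the bottleneck and why the alleged 16 million exchanges matter.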
Export controls, advanced chips and the risk to national security
Anthropic has consistently supported export controls to preserve America's lead in AI and argues that distillation attacks undermine those controls. The company says the apparent rapid advances by some foreign labs are in significant part attributable to capabilities extracted from American models, and that executing extraction at scale requires access to advanced chips. For this reason, Anthropic contends, distillation attacks reinforce the rationale for restricting chip exports: limiting chip access constrains both direct model training and the scale of illicit distillation.
From extraction to deployment: military, intelligence and surveillance implications
Anthropic warns that foreign labs that distill American models can feed unprotected capabilities into military, intelligence and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns and mass surveillance. The company further cautions that if distilled models are open-sourced, the risk multiplies as those capabilities spread freely beyond any single government's control. Anthropic frames the loss of safeguards as the direct mechanism behind national security harms ranging from weaponization to widespread misuse.
Industry response and the narrow window for coordinated action
Anthropic says the campaigns are growing in intensity and sophistication and that the window to act is narrow. The company calls for rapid, coordinated action among industry players, policymakers and the global AI community to detect and prevent similar distillation campaigns. What makes this notable is the combination of scale—millions of exchanges and tens of thousands of accounts—and the claim that extraction has been tied to specific outside laboratories, a mix that Anthropic argues heightens the urgency for technical, regulatory and export-control measures.
The disclosure does not spell out specific technical or policy recommendations, nor any remedial steps already taken beyond attribution and public notification.