Anthropic accuses three labs of industrial-scale distillation attacks on Claude
Anthropic says it has identified industrial-scale campaigns by three AI laboratories that illicitly extracted capabilities from its Claude model, a practice it warns strips safety protections and amplifies national security risks.
Anthropic details three industrial-scale campaigns
Anthropic identified campaigns run by DeepSeek, Moonshot and MiniMax that generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, behavior it said violated the model's terms of service and regional access restrictions. The company framed these actions as deliberate capability extraction rather than normal use.
How the groups used distillation and where the method is normally applied
The labs used a technique called "distillation," which involves training a less capable model on the outputs of a stronger one. Distillation is a widely used and legitimate training method: frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But Anthropic warns distillation can be misused to acquire powerful capabilities from competitors quickly and at far lower cost than independent development.
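To make the mechanism concrete, the sketch below shows classical knowledge distillation in the style of Hinton et al.: a small student network learns to imitate a larger teacher's output distribution. Everything here is illustrative; the toy PyTorch models, temperature value and training loop are assumptions, and API-based extraction of the kind Anthropic describes would train on harvested text completions rather than logits, since public APIs do not expose a model's internal distributions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy setup: a large "teacher" and a small "student" classifier.
# Neither resembles Claude or any production system.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens distributions so they carry more signal

for step in range(100):
    x = torch.randn(64, 32)  # stand-in inputs; a distiller would use prompts

    with torch.no_grad():          # the teacher is only queried, never trained
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between temperature-softened distributions: the student
    # learns to imitate the teacher's full output distribution, not just its
    # top answer.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The temperature-scaled KL term is what lets the student absorb more than the teacher's top answers: the relative probabilities the teacher assigns across alternatives carry information about how it generalizes, which is why distillation can transfer capability so cheaply.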
National security risks and missing safeguards
Anthropic said illicitly distilled models lack necessary safeguards, creating significant national security risks. The company and other US firms build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities. Models built through illicit distillation are unlikely to retain those safeguards, and foreign labs that distill American models can then feed unprotected capabilities into military, intelligence and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns and mass surveillance.
Distillation attacks, export controls and chip access
Anthropic has consistently supported export controls to help maintain America’s lead in AI. The company says distillation attacks undermine those controls by allowing foreign labs, including those subject to the control of the Chinese Communist Party, to erase the competitive advantage export controls are designed to preserve. Without visibility into these attacks, apparently rapid advancements by those labs are incorrectly read as evidence that export controls are ineffective; Anthropic says those advancements depend in significant part on capabilities extracted from American models, and that executing extraction at scale itself requires access to advanced chips. Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation.
Playbook, attribution and limits of the public account
The three distillation campaigns followed a similar playbook, Anthropic said, using fraudulent accounts and proxy services to access Claude at scale while evading detection. The volume, structure and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use. Anthropic said it attributed each campaign to a specific lab with high confidence through IP address correlation, request metadata and infrastructure indicators, and in some cases corroboration from industry partners who observed the same activity.
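Anthropic has not published its detection methodology, so the following is purely a hypothetical illustration of the kind of usage-pattern signals it describes (volume, prompt structure, topical focus and infrastructure churn); every feature name and threshold below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    """Per-account usage features. All thresholds in the scorer below are
    invented; Anthropic's actual detection pipeline is not public."""
    requests_per_hour: float      # sustained query volume
    prompt_template_ratio: float  # share of prompts matching a repeated template
    topic_entropy: float          # diversity of topics (low = narrow focus)
    distinct_ips: int             # infrastructure churn, e.g. rotating proxies

def extraction_risk_score(s: AccountStats) -> float:
    """Toy weighted score: higher means more distillation-like."""
    score = 0.0
    if s.requests_per_hour > 500:      # machine-speed, round-the-clock volume
        score += 0.3
    if s.prompt_template_ratio > 0.8:  # highly templated, batch-generated prompts
        score += 0.3
    if s.topic_entropy < 1.0:          # narrow focus on capability elicitation
        score += 0.2
    if s.distinct_ips > 50:            # proxy rotation to evade rate limits
        score += 0.2
    return score

suspect = AccountStats(requests_per_hour=1200, prompt_template_ratio=0.95,
                       topic_entropy=0.4, distinct_ips=300)
print(extraction_risk_score(suspect))  # 1.0 -> flag for review
```

In practice no single account would be decisive; the article's description suggests correlation across thousands of accounts and their shared infrastructure is what separates an extraction campaign from ordinary heavy use.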
Calls for rapid, coordinated action
Anthropic warned the campaigns are growing in intensity and sophistication and said the window to act is narrow. The company urged rapid, coordinated action among industry players, policymakers and the global AI community to detect and prevent distillation attacks and to preserve safeguards built into US-developed systems.
The company also cautioned that if distilled models are open-sourced, the risk multiplies as capabilities spread beyond any single government's control. Anthropic emphasized that addressing the problem will require both technical defenses and policy measures tied to chip access and export controls.
Next steps mentioned by Anthropic include continued investigation and coordination with industry partners; the company has not specified a timetable for formal action or whether law enforcement will be involved.