Anthropic Says Three Labs Ran Industrial-Scale ‘Distillation’ Campaigns on Claude

Anthropic has identified industrial-scale campaigns by three named laboratories—DeepSeek, Moonshot and MiniMax—that it says illicitly extracted the capabilities of its Claude model. The company flags more than 16 million exchanges conducted through about 24,000 fraudulent accounts and warns the activity could strip critical safeguards, amplify national security risks and undermine export-control regimes.

Anthropic’s attribution and evidence

Anthropic attributes each campaign to a specific lab with high confidence, citing IP address correlations, request metadata, infrastructure indicators and, in some instances, corroboration from industry partners who observed the same activity. The company says the campaigns all followed a similar playbook and that the volume and pattern of requests differed markedly from ordinary usage, indicating deliberate capability extraction rather than legitimate customer queries.

DeepSeek, Moonshot and MiniMax campaigns

The three laboratories named are said to have generated in excess of 16 million exchanges with Claude using approximately 24,000 fraudulent accounts, actions that violated terms of service and regional access restrictions. The stated goal was to improve their own models by training smaller systems on the outputs of Claude, a technique Anthropic describes as "distillation." Distillation itself is acknowledged as a legitimate method—frontier labs often distill larger models to create smaller, cheaper versions for customers—but Anthropic warns it can also be deployed illicitly to shortcut capability development.
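To make the technique concrete: in its standard form, distillation trains a "student" model to match the softened output distribution of a "teacher" model. The sketch below shows the core objective (soft-label cross-entropy at a temperature, as in the classic formulation); it is purely illustrative and does not represent Anthropic's systems or any named lab's actual pipeline—the function names and toy logits are invented for this example.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution at a given temperature.
    Higher temperatures produce softer (more uniform) distributions."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution -- the core training signal in knowledge distillation.
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return temperature ** 2 * -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

A student whose logits already match the teacher's incurs only the irreducible entropy of the teacher's distribution; any mismatch raises the loss, which is what drives the student toward the teacher's behavior during training.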

Claude access and fraudulent accounts

By Anthropic's account of events, the campaigns made use of fraudulent accounts and proxy services to access Claude at scale while evading detection. Anthropic highlights that the volume, structure and focus of the prompts were distinct from normal usage patterns, a characteristic it interprets as evidence of targeted capability extraction rather than routine model interaction.
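The idea of flagging accounts whose usage volume departs sharply from normal patterns can be sketched with a simple statistical outlier test. This is a hypothetical illustration only—Anthropic has not disclosed its detection methods, and the z-score approach, function name and toy data below are assumptions for the example.

```python
import statistics

def flag_anomalous_accounts(requests_per_account, z_threshold=3.0):
    """Flag accounts whose request volume is a statistical outlier
    relative to the population -- a toy stand-in for the kind of
    usage-pattern signal the article describes.

    requests_per_account: dict mapping account id -> request count.
    Returns the set of account ids exceeding the z-score threshold."""
    volumes = list(requests_per_account.values())
    mean = statistics.mean(volumes)
    stdev = statistics.pstdev(volumes) or 1.0  # avoid division by zero
    return {
        acct for acct, count in requests_per_account.items()
        if (count - mean) / stdev > z_threshold
    }
```

A production system would of course look at far more than raw volume (prompt structure, topical focus, proxy use, account-creation patterns), but the principle is the same: legitimate traffic establishes a baseline, and extraction campaigns at the scale alleged here deviate from it measurably.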

National security risks and export controls

Anthropic argues illicitly distilled models are unlikely to retain the safeguards designed to prevent misuse, creating concrete national security risks. The company points to possible misuse that includes enabling the development of biological threats and facilitating malicious cyber activity. It also warns foreign labs that distill American models could feed unprotected capabilities into military, intelligence and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns and mass surveillance. If distilled models are open-sourced, Anthropic says, the risk multiplies as capabilities can spread beyond any single government's control.

What makes this notable is the company's emphasis on the interaction between technical extraction and hardware access: Anthropic states that executing extraction at scale requires access to advanced chips, and that this dynamic reinforces the rationale for export controls. The firm has consistently supported export controls to help maintain America’s lead in AI, and it contends that distillation attacks erode those controls by allowing foreign labs—including those subject to the control of the Chinese Communist Party—to close the competitive advantage that export restrictions are intended to preserve.

Industry action, detection and prevention

The campaigns are described as growing in intensity and sophistication, and Anthropic warns the window to act is narrow. The company calls for rapid, coordinated action among industry players, policymakers and the global AI community to detect and prevent further illicit distillation. Without enhanced visibility into these attacks, Anthropic cautions that apparent rapid advancements by other labs will be misread as evidence that export controls are ineffective; the company argues those advancements in many cases depend on capabilities extracted from American models.

Anthropic also underscores that distinguishing legitimate distillation—used to create compact, customer-ready models—from illicit extraction will be central to any defensive strategy, and that detection methods must account for the use of proxy services and atypical prompt structures. The company’s account leaves unclear in the provided context whether legal or regulatory steps have been initiated beyond calls for coordination, but it frames the issue as an urgent cross-border challenge involving technical, policy and supply-chain elements.