Anthropic has accused three leading Chinese AI research labs of deploying a highly organized, industrial-scale campaign to siphon off the core capabilities of its Claude models. The company claims that DeepSeek, Moonshot AI, and MiniMax collectively used over 24,000 fake accounts to generate more than 16 million interactions with Claude, systematically extracting reasoning, coding, and tool-use capabilities that took years and billions of dollars to develop.

The revelation marks a dramatic escalation in the simmering tensions between U.S. and Chinese AI developers, framing the issue not just as an intellectual property dispute but as a potential national security threat. Anthropic’s technical analysis suggests these labs bypassed regional access restrictions through sophisticated proxy networks, effectively turning AI model access into a high-stakes espionage operation.

Anthropic’s accusations come as Washington debates stricter export controls on advanced AI chips, which are essential for training frontier models. The company’s CEO, Dario Amodei, has long argued for tighter restrictions, and this disclosure appears designed to reinforce that position by demonstrating how foreign labs can blunt the effect of those controls through large-scale capability theft rather than independent innovation.

The Distillation Arms Race

At the heart of the controversy lies a technique called distillation, in which a smaller AI model learns from the outputs of a larger, more capable one. While widely used in legitimate model development, distillation can be weaponized: a competitor can pose as a legitimate user, bombard a model with carefully crafted prompts, and collect its responses to train a rival system. This allows a lab to replicate advanced capabilities without the cost or time of original research.
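
To make the mechanism concrete, here is a minimal sketch of distillation in its textbook form, where the student is trained to match the teacher’s softened output distribution (the formulation popularized by Hinton et al.). API-based extraction relies on a coarser variant, fine-tuning on the teacher’s text outputs, since raw logits are never exposed over an API. The PyTorch function below is purely illustrative; the names and hyperparameters are assumptions, not any lab’s actual code.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft-label KL term to the teacher."""
    # Hard loss: student vs. ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft loss: student mimics the teacher's softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradient magnitudes match the hard loss
    return alpha * hard + (1 - alpha) * soft
```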

Anthropic’s claims go far beyond academic experimentation. The company alleges that DeepSeek, Moonshot, and MiniMax ran highly coordinated campaigns, using fraudulent accounts to generate structured, high-volume interactions with Claude. These weren’t random queries—they were meticulously designed to extract reasoning traces, coding logic, and even censorship workarounds, effectively reverse-engineering Claude’s most sophisticated features.

DeepSeek, in particular, is accused of deploying the most technically sophisticated operation, generating over 150,000 exchanges. The prompts were designed to elicit chain-of-thought reasoning—a technique where Claude breaks down its decision-making process step-by-step—effectively creating a training dataset of its internal logic. Anthropic also claims DeepSeek targeted politically sensitive queries, generating alternatives to censored topics, suggesting an effort to train models that could evade restrictions on sensitive subjects.
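
In outline, the kind of harvesting loop Anthropic alleges would look something like the sketch below: wrap each task in a prompt that demands step-by-step reasoning, record the response, and store the pair as a supervised training example. Everything here is a hypothetical illustration; query_model(), the template wording, and the JSONL format are assumptions, not details from Anthropic’s report.

```python
import json

# Illustrative seed tasks; a real campaign would use thousands of them.
SEED_TASKS = [
    "A train leaves at 3pm traveling 60 mph toward a station 150 miles away...",
    "Refactor this function so the shared counter is updated atomically: ...",
]

COT_TEMPLATE = (
    "Think through this step by step, showing every intermediate deduction "
    "before giving your final answer:\n\n{task}"
)

def query_model(prompt: str) -> str:
    # Stub standing in for an API call to the target model; in the alleged
    # attacks, this traffic was routed through proxy-managed accounts.
    return "Step 1: ... Step 2: ... Final answer: ..."

def harvest(tasks, out_path="cot_dataset.jsonl"):
    # Each (prompt, reasoning trace) pair becomes one supervised example
    # for fine-tuning a student model.
    with open(out_path, "a", encoding="utf-8") as f:
        for task in tasks:
            prompt = COT_TEMPLATE.format(task=task)
            f.write(json.dumps({"prompt": prompt,
                                "completion": query_model(prompt)}) + "\n")

harvest(SEED_TASKS)
```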

Moonshot AI and MiniMax followed a similar playbook. MiniMax, the least publicly known of the three, operated at by far the largest scale, generating over 13 million exchanges, more than three-quarters of the total, with a focus on agentic coding and tool-use capabilities. Anthropic’s forensic analysis suggests MiniMax pivoted rapidly whenever new Claude models were released, redirecting traffic within 24 hours to capture the latest capabilities.

How the Labs Bypassed Restrictions

Anthropic does not offer commercial access to Claude in China, citing national security concerns. Yet the labs managed to bypass these restrictions through what the company describes as “hydra cluster” architectures: vast networks of fraudulent accounts distributed across proxy services. These networks operate like decentralized infrastructure, where banning one account simply triggers the creation of another, rendering account-level enforcement largely futile at scale.

In one case, a single proxy network managed over 20,000 fraudulent accounts simultaneously, blending distillation traffic with legitimate customer requests to evade detection. The scale and sophistication of these operations suggest a mature, well-funded ecosystem dedicated to circumventing access controls—a system that may extend far beyond the three labs named.
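
The rotation logic behind such a pool is simple to model. The toy class below captures the defining “hydra” property: each ban is answered by a fresh signup routed through a different proxy exit, so the pool, and the extraction traffic, never shrinks. The class, the naming scheme, and the proxy labels are invented for illustration, not observed infrastructure.

```python
import itertools
import random

class AccountPool:
    """Toy model of a 'hydra' account pool; all names are illustrative."""

    def __init__(self, size: int, proxies: list[str]):
        self._ids = itertools.count()
        self.proxies = proxies
        self.accounts = {self._new_account() for _ in range(size)}

    def _new_account(self) -> str:
        # Fresh credentials routed through a randomly chosen proxy exit.
        return f"acct-{next(self._ids)} via {random.choice(self.proxies)}"

    def on_ban(self, account: str) -> None:
        # The hydra property: every ban triggers a new signup,
        # so the pool (and the traffic it carries) never shrinks.
        self.accounts.discard(account)
        self.accounts.add(self._new_account())

pool = AccountPool(size=20_000, proxies=["exit-us-1", "exit-eu-3", "exit-sg-2"])
pool.on_ban(next(iter(pool.accounts)))
assert len(pool.accounts) == 20_000  # a ban removes nothing, net
```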

A National Security Threat, Not Just IP Theft

Anthropic’s framing of this issue as a national security crisis rather than a purely legal dispute reflects the limitations of current intellectual property law. While copyright and contract violations could theoretically apply, proving theft in this context is legally complex. The outputs of AI models are not always considered copyrightable, and even if they were, the standard terms of service at frontier labs, under which users often retain rights to model outputs, would complicate enforcement.

Instead, Anthropic argues that illicitly distilled models pose direct risks to democratic security. Without the safeguards built into Claude—such as protections against bioweapon development, cyberattacks, or mass surveillance—these models could be repurposed for malicious use. The company warns that foreign labs could feed unchecked capabilities into military, intelligence, or surveillance systems, enabling authoritarian governments to deploy AI for offensive cyber operations, disinformation, or large-scale monitoring.

This aligns with Anthropic’s broader advocacy for stricter chip export controls. The company argues that distillation attacks undermine the effectiveness of these controls by allowing foreign labs to close the competitive gap through theft rather than innovation. Without visibility into these operations, rapid advancements by Chinese labs might be mistakenly interpreted as evidence that export restrictions are ineffective—a narrative that could weaken global security measures.

Industry-Wide Fallout

Anthropic’s disclosure is likely to have immediate repercussions across the AI industry. The company has outlined a multipronged defensive strategy, including advanced classifiers to detect distillation patterns, behavioral fingerprinting of suspicious API traffic, and stricter verification for educational and research accounts—the most common pathways for fraudulent access.
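
Anthropic has not published how its classifiers work, but behavioral fingerprinting of the kind described typically reduces an account’s traffic to a handful of summary features and scores them. The sketch below shows the general shape of such a detector; the features, thresholds, and decision rule are invented for illustration and stand in for a trained model.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Request:
    account: str
    prompt: str
    timestamp: float  # seconds since epoch

def fingerprint(requests: list[Request]) -> dict:
    """Reduce one account's traffic to features a classifier could score."""
    gaps = [b.timestamp - a.timestamp for a, b in zip(requests, requests[1:])]
    return {
        # Machine-generated traffic tends to be metronomically regular.
        "gap_stdev": statistics.pstdev(gaps) if gaps else 0.0,
        # Template-driven extraction reuses the same prompt scaffolding.
        "distinct_prefixes": len({r.prompt[:40] for r in requests}),
        "volume": len(requests),
    }

def looks_like_distillation(features: dict) -> bool:
    # Toy decision rule standing in for a trained classifier.
    return (features["volume"] > 1_000
            and features["gap_stdev"] < 0.5
            and features["distinct_prefixes"] < 10)
```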

However, Anthropic acknowledges that no single company can solve this problem alone. The company is sharing technical indicators with other AI labs, cloud providers, and policymakers, calling for a coordinated response that includes stronger industry-wide safeguards, regulatory oversight, and potentially new export controls.

For technical decision-makers, the implications are clear: API security is no longer just a backend concern—it’s a strategic priority. The proxy infrastructure enabling these attacks is vast, adaptive, and likely targeting every frontier AI lab with an open API. The era of treating model access as a straightforward commercial transaction may be over, replaced by a landscape where espionage and theft are as much a concern as innovation.

The question now is whether Washington will treat these accusations as an act of espionage or merely the cost of doing business in an era where intelligence itself has become a tradable commodity. If the evidence holds, it could reshape the global AI race—and the geopolitical power dynamics that define it.