BanklessTimes

OpenAI, Google, Anthropic Team Up to Block Chinese Scraping

Simon Simba
Simon is a writer with five years' experience in crypto and iGaming. He currently works as a freelance writer at BanklessTimes, where he focuses on simplifying daily crypto developments for readers. He discovered crypto in 2022 while writing news about NFTs for a US news website, and has since written for two other international NFT projects and a Web3 gaming agency.
Updated: April 7th, 2026

OpenAI, Anthropic, and Google are joining forces to stop Chinese rivals from copying their most advanced AI models. The rare alliance shows how seriously U.S. labs now treat model theft and what they call a growing national security risk.

Why Big AI Labs Are Teaming Up

According to a Bloomberg report, the three companies are sharing threat intelligence through the Frontier Model Forum, an industry nonprofit they founded with Microsoft in 2023. They use that channel to flag attempts by China-based users to “distill” their models in ways that break platform terms of service.

In these cases, Chinese developers repeatedly query systems like ChatGPT, Claude, or Gemini, then use the outputs to train cheaper copycat models. U.S. officials estimate that these unauthorized distillation efforts cost Silicon Valley labs billions of dollars in lost profit each year.

Distillation is a standard machine learning technique where a smaller “student” model learns from a larger “teacher” model’s answers, often gaining similar capabilities at lower cost. OpenAI, Google, and Anthropic all use distillation internally to compress their systems, and they allow some outside use when it does not compete directly.
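The teacher-student setup described above can be sketched in a few lines. The toy "teacher" function and one-parameter "student" below are illustrative assumptions for demonstration, not any lab's actual pipeline; real distillation uses large neural networks and far richer query data.

```python
# Minimal knowledge-distillation sketch: a "student" model is fit to
# mimic a "teacher" model's outputs, never seeing the teacher's
# internals -- only its answers to queries.

def teacher(x):
    # Stand-in for a large model: here just a fixed linear function.
    return 3.0 * x + 1.0

# Collect (query, teacher answer) pairs -- the "distillation dataset".
queries = [float(i) for i in range(-10, 11)]
dataset = [(x, teacher(x)) for x in queries]

# Student: y = w*x + b, trained by plain gradient descent on the
# squared difference between student and teacher outputs.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in dataset:
        err = (w * x + b) - y
        grad_w += 2 * err * x
        grad_b += 2 * err
    n = len(dataset)
    w -= lr * grad_w / n
    b -= lr * grad_b / n

print(round(w, 2), round(b, 2))  # student converges toward the teacher's 3.0 and 1.0
```

The key point the example illustrates: the student recovers the teacher's behavior purely from query-answer pairs, which is why labs treat high-volume automated querying as a route to model copying.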

The problem, they say, is “adversarial distillation,” where outside labs hammer U.S. models with automated prompts to clone core behavior and safety research without investing in original training. OpenAI told U.S. lawmakers that Chinese firm DeepSeek tried to “free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.” Anthropic has also accused Chinese-linked companies, including DeepSeek, Moonshot, and MiniMax, of extracting capabilities from its Claude model and then stripping away safety guardrails.

New Detection and Enforcement Tactics

In response, the companies are building detection systems to spot abnormal traffic patterns that may signal automated cloning or scraping. They look for signs that a bot rather than a regular user is probing the model, such as high-volume, coordinated queries and identical prompts repeated across many accounts.
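One such signal can be sketched as a simple heuristic: flag accounts that combine unusually high volume with heavily repeated prompts. The thresholds and similarity test below are illustrative assumptions, not any lab's real detection logic.

```python
# Toy detection sketch: flag accounts whose traffic is both high-volume
# and dominated by near-identical prompts -- a crude proxy for
# automated model extraction.
from collections import Counter

def flag_suspicious(accounts, volume_threshold=100, repeat_ratio=0.5):
    """accounts: dict mapping account id -> list of prompt strings.
    Returns the ids whose traffic looks automated under the heuristic."""
    flagged = []
    for account, prompts in accounts.items():
        if len(prompts) < volume_threshold:
            continue  # normal-looking volume, skip
        # Normalize prompts and find how often the most common one repeats.
        counts = Counter(p.strip().lower() for p in prompts)
        most_common = counts.most_common(1)[0][1]
        if most_common / len(prompts) >= repeat_ratio:
            flagged.append(account)
    return flagged

traffic = {
    "casual-user": ["hello"] * 5 + ["summarize this article"] * 3,
    "scraper-bot": ["complete: the capital of"] * 150,
}
print(flag_suspicious(traffic))  # ['scraper-bot']
```

Production systems would combine many such signals (IP ranges, timing, account linkage) rather than a single threshold, but the shape of the check is the same.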

To make copying harder, the companies may cancel accounts, ban IP ranges, or adjust rate limits and output formats. OpenAI has already revised its terms of service and, in a memo to Congress, flagged model extraction as both a security and an economic concern. After identifying numerous scraping attempts, Anthropic last year banned Chinese-controlled companies from using Claude altogether.


