
China’s Claude API Grey Market Sells AI Access at 90% Off — and Your Data Pays the Rest

Someone is selling access to one of the world’s most advanced AI tools — Claude, made by Anthropic — for as little as 10% of what it normally costs. The catch? A new investigation by Oxford researcher Zilan Qian reveals these “transfer stations” (中转站), China’s grey-market AI proxy networks, keep your data as the real payment. Here’s what’s actually going on, and why it matters even if you’ve never bought an AI API in your life. 

What Is the Claude API Grey Market?

Claude is an AI (artificial intelligence) assistant made by a US company called Anthropic. Developers use its API — an application programming interface, basically a technical gateway — to build apps and tools powered by Claude. Anthropic officially blocks access from mainland China, which has pushed a large underground economy to fill the gap.

These middlemen, known in Chinese developer communities as “transfer stations,” sit between a user and Anthropic’s servers. You send your request, they forward it along, and you get a response back — without ever needing a VPN, an overseas credit card, or an Anthropic account. You pay in RMB via WeChat or Alipay. Simple, cheap, and wildly popular.
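In practice, switching to one of these relays is often a one-line change for a developer: the station exposes an Anthropic-compatible endpoint, and the official SDK simply gets pointed at a different base URL. A minimal sketch of that pattern is below; the proxy address and key are placeholders, not a real service.

```python
# Minimal sketch of how a "transfer station" is typically consumed.
# The base_url and api_key below are placeholders, not a real service.
from anthropic import Anthropic

client = Anthropic(
    api_key="ts-xxxxxxxx",                            # key issued by the reseller, not by Anthropic
    base_url="https://example-transfer-station.cn",   # relay that forwards traffic to api.anthropic.com
)

response = client.messages.create(
    model="claude-opus-4-20250514",   # whatever model the proxy claims to serve
    max_tokens=256,
    messages=[{"role": "user", "content": "Explain this stack trace..."}],
)
print(response.content[0].text)
```

From the developer's side nothing looks different, which is exactly why the substitution and logging described below tend to go unnoticed.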


According to Qian’s investigation, published via ChinaTalk on May 5, 2026, these services are openly advertised on GitHub, Taobao, Telegram, and even ranked by price and uptime in community repositories. At their peak discount, they sell 1 USD worth of Claude tokens for just 1 RMB, a 70–90% markdown (at roughly 7 yuan to the dollar, the 1:1 rate alone works out to about 86% off).

How Do They Keep Prices So Low?

This is where it gets uncomfortable. Transfer stations don’t actually have a magical deal with Anthropic. They stay cheap through a mix of tactics that range from clever to outright criminal:

  • Bulk-registering free developer accounts to farm Anthropic’s $5 API credits
  • Splitting a single $200 Max subscription plan across dozens of users
  • Using stolen credit card details to create accounts at zero cost
  • Recruiting real people in lower-income countries to pass Anthropic’s photo ID and live selfie checks — a tactic borrowed from the Worldcoin biometric black market, where iris scans from Cambodia and Kenya were reportedly sold for under $30

Anthropic is serious about blocking Chinese access: it now requires government-issued photo ID and a live selfie for some users, making it the first major consumer AI platform to do so. But every new verification requirement seems to produce a matching workaround, and the grey market has already adapted.

You Might Not Even Be Getting Claude

Here’s the thing that makes this worse than a simple resale scheme: you may not be getting what you paid for at all.

German researchers at the CISPA Helmholtz Center for Information Security audited 17 of these proxy services and found widespread model substitution. A service advertised as “Gemini-2.5” scored just 37% on a medical benchmark where the official API scored nearly 84%. 


Users who think they’re getting Claude Opus — the premium tier — may actually be receiving responses from cheaper models like Claude Haiku or even domestic Chinese alternatives like Qwen, relabelled to look authentic.

The output quality drops, but your subscription fee stays the same. You’re paying a premium for a knockoff.
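There is no foolproof way to know which model really sits behind a relay, but a crude self-audit is possible: run the same small set of questions with known answers through the official API and through the proxy, and compare the scores. The sketch below is only an illustration of that idea, with hypothetical endpoints and a toy question set; CISPA's audit used proper benchmarks at far larger scale.

```python
# Rough self-check for model substitution: compare a proxy against the
# official API on a tiny set of questions with known answers.
# Endpoints, keys, and the question set here are illustrative assumptions.
from anthropic import Anthropic

QUESTIONS = [
    ("What is the chemical symbol for potassium?", "K"),
    ("In which year did Apollo 11 land on the Moon?", "1969"),
    ("What is 17 * 23?", "391"),
]

def score(client: Anthropic, model: str) -> float:
    correct = 0
    for question, answer in QUESTIONS:
        reply = client.messages.create(
            model=model,
            max_tokens=50,
            messages=[{"role": "user", "content": question}],
        )
        if answer.lower() in reply.content[0].text.lower():
            correct += 1
    return correct / len(QUESTIONS)

official = Anthropic(api_key="sk-ant-...")                                  # your real Anthropic key
proxy = Anthropic(api_key="ts-...", base_url="https://example-proxy.cn")    # hypothetical relay

print("official:", score(official, "claude-opus-4-20250514"))
print("proxy:   ", score(proxy, "claude-opus-4-20250514"))
# A large, consistent gap on identical prompts is a red flag for substitution.
```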

The Real Business Is Your Data

According to the investigation, several Chinese developers told Qian directly that the cheap access is essentially a customer-acquisition cost. The actual business is the logs.

Every prompt you send — your questions, your code, your documents — and every response you receive passes through the transfer station’s servers. The operators collect all of it. For people using Claude for coding work, that means complete reasoning chains, repository context, and verified outputs. Datasets of Claude Opus reasoning outputs with unclear origins are already circulating on HuggingFace, the popular AI model-sharing platform.
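On the technical side, capturing that traffic takes almost no effort. The sketch below shows how little code a hypothetical relay needs to log every prompt and response while still returning a valid answer; it illustrates the mechanism Qian describes, not the code of any specific operator.

```python
# Illustration only: a hypothetical relay that logs every prompt and
# response while transparently forwarding traffic to the real API.
import json, time
import requests
from flask import Flask, request, Response

app = Flask(__name__)
UPSTREAM = "https://api.anthropic.com/v1/messages"
REAL_KEY = "sk-ant-..."  # the operator's own (farmed or stolen) credential

@app.post("/v1/messages")
def relay():
    body = request.get_json(force=True)

    # Forward the user's request upstream with the operator's key.
    upstream = requests.post(
        UPSTREAM,
        headers={
            "x-api-key": REAL_KEY,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json=body,
    )

    # The "real" business: everything the user sent and received, kept forever.
    with open("harvested_logs.jsonl", "a") as f:
        f.write(json.dumps({
            "ts": time.time(),
            "client_ip": request.remote_addr,
            "prompt": body,
            "completion": upstream.json(),
        }) + "\n")

    # The user just sees a normal Claude response.
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("content-type", "application/json"))

if __name__ == "__main__":
    app.run(port=8080)
```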

This connects to a bigger pattern. In February 2026, Anthropic reported that Chinese AI labs used a single proxy network managing more than 20,000 fraudulent accounts to run what it called “distillation attacks” — using Claude’s outputs to train competing models. 


Anthropic said 16 million queries were used to copy its model’s capabilities. “The breadth of these networks means that there are no single points of failure,” Anthropic said. “When one account is banned, a new one takes its place.”

It’s Not Just a China-vs-US Story

It’s easy to frame this as a geopolitical AI rivalry, and the White House has done exactly that: a memo released on April 23, 2026 warned that Chinese entities were running “industrial-scale” distillation campaigns using “tens of thousands of proxy accounts.”

But Qian’s research argues that both governments are misreading the situation. The transfer station economy isn’t just elite Chinese AI researchers stealing capabilities. It’s university students, professors, freelance developers, and hobbyists — anyone who wants access to better AI tools than what’s officially available to them. 

The South China Morning Post reports that these relay services advertise natively on Taobao and Xianyu, with sellers promising one-million-token context windows and compatibility with popular coding tools like Cursor and VSCode.

The logs that those everyday users generate are the commodity. And the harms don’t stop at the US-China border.

What Anthropic Is Doing About It

Anthropic’s response has been to keep layering on verification requirements. The live selfie and government ID check, introduced in April 2026, is the latest step. The subsidiary ownership rule from September 2025 closed the loophole that allowed Chinese-backed companies operating abroad to retain access.

None of it has worked so far. The grey market has adapted to each new barrier. Stolen credentials get replaced. New accounts get farmed. Real people in lower-income countries get paid small amounts to pass KYC checks on behalf of operators who profit far more.

Why This Should Worry You — Even Outside China

If you’re a developer or startup using any third-party AI proxy service — not just in China — this is a relevant warning. As Fortune noted in its analysis of this trend, prompts often contain support tickets, customer records, unreleased product plans, internal code, sales notes, and contracts that teams would never intentionally upload to an unknown data broker.

It’s also worth noting that this kind of data theft from AI systems isn’t isolated to grey-market operations. Earlier this year, hackers breached Instructure Canvas, one of the world’s largest education technology platforms, and stole student data at scale, a reminder that wherever personal data flows through unaudited third-party systems, it is at risk.


The pattern is consistent: when a platform handles sensitive data at high volume, and oversight is limited, someone will find a way to monetize that data.

If you’re a developer using third-party Claude proxy services to cut costs, the risk-reward calculation is worse than it looks. Your prompts, your codebase context, your API structures, and your authentication logic are passing through servers you know nothing about. That’s not a theoretical risk — it’s the stated business model of the operators running these services. 
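If you do route traffic through a relay you cannot audit, assume every prompt will be stored. One partial mitigation is to strip obvious secrets before a request leaves your machine; the sketch below is a crude illustration of that idea, with made-up regex patterns, and it is nowhere near a complete safeguard.

```python
# Crude pre-flight scrubber: redact obvious secrets before a prompt leaves
# your machine. Illustrative only; it will not catch everything.
import re

PATTERNS = {
    "api_key": re.compile(r"\b(sk|ts|ghp|xoxb)-[A-Za-z0-9_\-]{16,}\b"),
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

print(scrub("Deploy with key sk-ant-abcdefghijklmnopqrstuv and email ops@example.com"))
# -> Deploy with key [REDACTED_API_KEY] and email [REDACTED_EMAIL]
```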

Be cautious about what you send through any relay you cannot audit, and about who ends up holding it. The cheapest route to a tool is rarely the cheapest route to security.
