Anthropic Caught DeepSeek STEALING From Claude, Then Got SUMMONED to the Pentagon!

Universe of AI · 24.02.2026 · 5,120 views · 116 likes

Video description
Anthropic just exposed three Chinese AI labs — DeepSeek, Moonshot, and MiniMax — for running industrial-scale attacks to steal Claude's capabilities. Then the Pentagon called. Here's the full story.

Source: https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks

For hands-on demos, tools, workflows, and dev-focused content, check out World of AI, our channel dedicated to building with these models: @intheworldofai

🔗 My Links:
📩 Sponsor a Video or Feature Your Product: intheuniverseofaiz@gmail.com
🔥 Become a Patron (Private Discord): /worldofai
🧠 Follow me on Twitter: https://x.com/UniverseofAIz
🌐 Website: https://www.worldzofai.com
🚨 Subscribe To The FREE AI Newsletter For Regular AI Updates: https://intheworldofai.com/

0:00 - Anthropic Catches China Stealing
4:31 - Elon Roasts Anthropic
6:05 - Pentagon Calls Anthropic
7:56 - Outro

Table of contents (4 segments)

Anthropic Catches China Stealing

Anthropic published a report this week accusing three Chinese AI labs of systematically extracting capabilities from Claude. The labs are DeepSeek, Moonshot AI, and MiniMax. Together, they generated over 16 million conversations through roughly 24,000 fraudulent accounts. According to Anthropic, here's what actually happened.

The technique is called distillation. The basic idea is straightforward: you train a weaker model on the outputs of a stronger one. It's a completely standard practice, and labs use it all the time to compress their own models into smaller, cheaper versions. The problem is you can also do it to someone else's model. Instead of building capabilities from scratch, you query a competitor's system at scale, collect the responses, and use those as training data. You're effectively transferring capabilities without doing the underlying research. Anthropic estimates this saves competitors significant time and cost compared to independent development.

Anthropic describes three separate campaigns, each following a similar playbook: proxy services, fake accounts, and highly structured prompts targeting specific capabilities.

DeepSeek ran the smallest operation by volume, around 150,000 exchanges, but it was technically sophisticated. One notable tactic: they prompted Claude to reconstruct the internal reasoning behind its own answers step by step. This generates chain-of-thought training data, which is exactly what you need to build reasoning models. They also used Claude to rewrite politically sensitive questions into censorship-safe alternatives, likely to train their own models to navigate those topics the way Chinese platforms require. Anthropic traced the accounts to specific researchers at DeepSeek through request metadata.

Moonshot AI, the company behind the Kimi models, ran 3.4 million exchanges across hundreds of fraudulent accounts. They varied account types to make the campaign look less coordinated.
Anthropic linked it to senior Moonshot staff through public profile matching. In a later phase, they shifted toward trying to extract Claude's reasoning traces directly.

MiniMax accounts for the bulk of it: 13 million exchanges focused on agentic coding and tool use. What makes their case notable is that Anthropic detected the campaign while it was still active, before MiniMax had shipped the model they were building. When Anthropic released a new Claude version mid-campaign, MiniMax redirected nearly half their traffic to it within 24 hours.

All three labs faced a basic problem: Anthropic doesn't offer commercial access to Claude in China. So they went around it. They used what Anthropic calls hydra-cluster architectures: networks of fraudulent accounts spread across Anthropic's API and third-party cloud platforms. One proxy network managed more than 20,000 fake accounts simultaneously, mixing distillation traffic with normal requests to make detection harder. When accounts got banned, new ones replaced them. No single point of failure.

Anthropic makes two arguments for why this is a broader problem than just IP theft. The first is about safety. Anthropic and other US labs build in restrictions to prevent their models from helping with things like bioweapons development or offensive cyberattacks. A distilled model doesn't necessarily carry those restrictions over: the capabilities transfer, but the safeguards may not. The second is about export controls. The US restricts chip exports to China partly to slow down frontier AI development. Distillation attacks are a workaround: you don't need to train a model from scratch if you can extract capabilities from someone who already did. Anthropic argues this actually strengthens the case for chip controls, since running distillation at this scale still requires significant compute.
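To make the distillation mechanic concrete, here is a deliberately toy sketch of the core loop: query a stronger "teacher" model and save its answers as fine-tuning data for a weaker "student". The `query_teacher` function is a hypothetical stand-in for any chat-completion API client; nothing here is code from Anthropic's report.

```python
# Toy sketch of distillation data collection: harvest a stronger
# "teacher" model's responses and package them as supervised
# fine-tuning records for a weaker "student" model.

def query_teacher(prompt: str) -> str:
    # Hypothetical placeholder: a real pipeline would call a model API
    # here and return the generated text.
    return f"Step-by-step answer to: {prompt}"

def build_distillation_set(prompts):
    """Turn (prompt, teacher-response) pairs into fine-tuning records."""
    dataset = []
    for prompt in prompts:
        response = query_teacher(prompt)
        dataset.append({"prompt": prompt, "completion": response})
    return dataset

if __name__ == "__main__":
    records = build_distillation_set(["Explain binary search."])
    print(len(records))
```

The attacks described in the report amount to running a loop like this millions of times, with prompts specifically engineered to elicit reasoning traces rather than just final answers.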
Anthropic says they built classifiers to detect distillation patterns in API traffic, tightened verification on account types that were being exploited, and are sharing indicators with other labs and cloud providers. They're also working on model-level countermeasures designed to reduce how useful Claude's outputs are for training without affecting legitimate users. They're publishing the report publicly because they think the problem requires a coordinated industry and policy response, not just an individual company defense. Whether that coordination happens is another question, but the report is worth reading; the link is in the description for you guys. Not everyone
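Anthropic doesn't disclose how its distillation classifiers work. As a rough illustration of the kind of traffic features such a detector might look at, here is a minimal heuristic that flags accounts with high volume and heavily templated prompts. The thresholds and the digit-masking normalization are invented for this sketch, not taken from the report.

```python
# Illustrative heuristic for distillation-like API traffic: flag an
# account whose request volume is high AND whose prompts collapse into
# a few repeated templates. Thresholds are invented for demonstration.
from collections import Counter

def prompt_template(prompt: str) -> str:
    """Crude normalization: mask digits so near-identical templated
    prompts ("Solve problem 17...", "Solve problem 18...") collapse
    to one signature."""
    return "".join("#" if ch.isdigit() else ch for ch in prompt)

def looks_like_distillation(prompts, volume_threshold=1000, repeat_ratio=0.5):
    """Return True if traffic is high-volume and dominated by one
    repeated prompt template."""
    if len(prompts) < volume_threshold:
        return False
    templates = Counter(prompt_template(p) for p in prompts)
    top_share = templates.most_common(1)[0][1] / len(prompts)
    return top_share >= repeat_ratio
```

A production system would presumably combine many more signals (account metadata, timing, content classifiers), but the basic shape — anomaly detection over aggregate request patterns — is what makes a 20,000-account proxy network detectable even when individual requests look normal.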

Elon Roasts Anthropic

is taking Anthropic's report at face value. The pushback started almost immediately. Elon Musk's response on X was pretty representative of the skeptic camp: "How dare they steal the stuff Anthropic stole from human coders," pointing to the fact that Anthropic, like every other major AI lab, trained its model on vast amounts of internet data without explicit permission from the people who wrote it. The argument being: it's a bit rich to cry theft when your own foundation is legally and ethically contested.

That criticism has real traction. Several replies echoed the same point. One commenter noted, "as opposed to other AI companies, which use organically harvested data from farmers," which is a sarcastic way of saying nobody's hands are clean here. The broader critique is about consistency: AI labs have largely won the argument that scraping public data for training is either legal or at minimum tolerable, but they're now asking for sympathy and potentially regulatory protection when competitors do something structurally similar to them. Whether those two things are equivalent is genuinely debatable, but the optics are awkward.

There's also a geopolitical angle. Some people read the report less as a neutral disclosure and more as Anthropic lobbying for stricter export controls and API access restrictions: policy outcomes that happen to benefit US labs competitively. That doesn't mean the findings are false, but it's worth noting who benefits from this narrative. But here's where the story

Pentagon Calls Anthropic

gets a little bit more complicated. On the same day Anthropic published this report, Axios broke a separate story: Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon. According to a senior defense official, this is not a routine meeting. Their exact words to Axios were, "This is not a get-to-know-you meeting. This is not a friendly meeting. This is a sh**-or-get-off-the-pot meeting." Meaning, Anthropic is apparently being pressured to formally commit Claude to military use.

Think about what that means for how you read the distillation report. Anthropic is walking into the Pentagon under pressure to prove it's a national security asset. And on the exact same day, they published a detailed report about Chinese labs stealing American AI capabilities, undermining export controls, and threatening national security. That's a very convenient story to have in your back pocket before that meeting.

Again, the attacks are probably real: the technical detail in the report is specific enough to be credible, and other labs apparently corroborated some of it independently. But the attacks being real and Anthropic publishing the report now for purely neutral reasons are two different things. Frontier AI labs are increasingly operating in a space where they need government relationships, defense contracts, and regulatory goodwill to survive at the top. Anthropic has been more cautious than some competitors about military applications, and that's been part of their public identity. But that position appears to be under pressure. And a report that frames Claude as a strategic national asset worth protecting lands very differently in that context than it would on a slow news day with no Pentagon meeting on the calendar. Whether the timing is a coincidence, you can decide, but it's the most interesting part of this whole story.

Outro

But that's it for today's video. Make sure to subscribe to the channel, follow Universe of AI on Twitter, as well as subscribe to our main channel, World of AI, and subscribe to the newsletter as well. You don't want to miss this. Hope you guys enjoyed today's video.
