# The US Government is Threatening to SEIZE Claude

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=qiUuwdaVkYo
- **Date:** 26.02.2026
- **Duration:** 16:26
- **Views:** 10,806
- **Source:** https://ekstraktznaniy.ru/video/11904

## Description

🎓 Learn AI In 10 Minutes A Day - https://www.skool.com/theaigridacademy
Get your Free AGI Preparedness Guide - https://theaigrid.kit.com/agi
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Learn AI Business For Free https://www.youtube.com/@TheAIGRIDAcademy


Links From Today's Video:
https://thezvi.substack.com/p/anthropic-and-the-department-of-war

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning

## Transcript

### Segment 1 (00:00 - 05:00)

This is one of the biggest stories right now. Anthropic and the US government/Pentagon are in serious conflict because they cannot agree on how Claude should be implemented for military usage. So let's talk about it.

One of the biggest things we're seeing right now is this exclusive article, and numerous statements going around, that the US government essentially wants unfiltered access to Claude, but Anthropic is basically saying, "No, we're not allowing you to use Claude for killbots." And I know the situation is a little more nuanced, but we need to dive into the specifics. This situation has gotten so crazy that apparently Anthropic has been given a deadline of 5:00 p.m. Eastern time on Friday to modify its existing, agreed-upon contracts to grant unfettered access to Claude, or else. And the thing is, I really have to commend Anthropic here, because they aren't backing down whatsoever. They are literally saying, "No, we're going to stand by the values we had when we made this company, and we're still going to do the right thing." You can see here that Anthropic has no intention to ease restrictions on military usage, a source says. Anthropic has been insistent up to now that it will not back down on surveillance or autonomous weapons, and these are two areas that Dario Amodei has personally raised when discussing the dangers of AI.

Now, before we get to Anthropic not backing down, I was reading this Substack from Zvi Mowshowitz, and it talks about the fact that Anthropic cannot fold. Anthropic built their entire brand and reputation on being the responsible AI company, the one that ensures its AIs won't be misused or misaligned. Anthropic's employees actually care about this, and that's how it managed to recruit the best people, how it became the best AI lab, and why it's the choice for enterprise AI. The commitments have been made, and the initial contract is already in place. Anthropic essentially has an existential-level reputational and morale problem here. They are backed into a corner and cannot give in. If Anthropic reversed course now, it would lose massive trust with employees and enterprise customers, and potentially the trust of its own AI. And if it were to go back on those red lines, it might lose a very large fraction of its employees.

Now, you might be thinking, surely the government isn't asking for much; surely they can just reach a deal? Well, the thing is that Anthropic isn't asking for something crazy here, and that's the big issue. You can see here that someone says Anthropic's red lines are no mass surveillance and no autonomous killer robots, and that it should terrify everyone that the Pentagon finds these safeguards outrageous. And that is very true. The only thing Anthropic is saying is: look, we are just not going to allow you to legally do this. We're going to have it in the terms and conditions when we give you Claude: no mass domestic surveillance, and no kinetic weapons without a human in the kill chain until we're ready. And I think those are pretty reasonable asks. Because these guys are the leading lab, this has become more of an issue; if it were just some random AI company, nobody would really care.
But Anthropic is currently the leading lab, and of course that's the point of contention. Now, the Pentagon and the US government are really annoyed by this. You can see here that what they said they're going to do is take the first step towards a potential designation of Anthropic as a supply chain risk. And this is not good, because that penalty is usually reserved for companies from adversarial countries, such as the Chinese tech giant Huawei. Using it to punish a leading American firm, particularly one whose technology the military itself currently uses, would be unprecedented. And I don't know why they would want to do this. I'm sure they could be adults and genuinely make a deal, but it seems like ego is in the way.

I mean, take a look at this. Referring to the possible supply chain risk designation earlier this week, a senior defense official told Axios: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this." That's pretty weird. It doesn't sound like national security policy; it sounds like someone who's embarrassed in a negotiation. The whole framing of "forcing our hand" is pretty insane. Anthropic didn't force anything. They just said, "Look, if you want our technology, here are the terms." And they're not crazy terms. It's not "you have to give us updates every 5 seconds." Just no mass surveillance and no killer robots. That's it. It's not even that you can't have killer robots; you just can't have the AI going off and doing it by itself. And the Pentagon did agree to those contract terms initially when they signed the contract. But because Anthropic is saying, look, we're not going to completely cave here, we've got a backbone, a senior defense official is going on record to a major news outlet saying we're going to punish them. Which,

### Segment 2 (05:00 - 10:00)

if anything, kind of undermines their own argument that this is about national security. Because if it were about national security, you wouldn't be making personal statements like "we're going to make sure they pay a price." The irony is that this kind of rhetoric makes Anthropic less likely to compromise, because now if they back down, it looks like they caved to a bully rather than reached a reasonable agreement. This just backs both sides into a corner, and it raises a serious question: is the Pentagon really going to threaten and intimidate a private company over very minor contract terms? That kind of proves Anthropic's point about why you'd want guardrails on how this technology gets used in the first place. The entire thing is absolutely crazy.

If you're wondering what the other AI labs are doing: well, they don't care as much as Anthropic. xAI has already reached a deal. Musk's xAI and the Pentagon agreed to use Grok in classified systems; you can see that Elon Musk already signed a deal to move into the military's classified systems under the "all lawful uses" terms that Anthropic has rejected. Now, if you're wondering about "all lawful uses": it sounds reasonable on the surface, essentially "we'll only use it for legal stuff, of course." But that's the problem. It's so broad that it's almost meaningless as a restriction. Think about it this way: almost anything the military wants to do can be framed as lawful, because the military gets to define what missions are authorized, the president can authorize a lot of things unilaterally, and "lawful" in a military context is way broader than what a civilian court would consider lawful. And who's going to check anyway, once it's inside classified systems? So Anthropic is saying: look, no autonomous weapons, no mass surveillance of Americans. Those are specific, concrete red lines; you know exactly what you cannot do. When the Pentagon says "all lawful uses," that is basically saying no red lines at all. They're just saying, "We promise, fingers crossed, okay?" while simultaneously being the ones who decide what lines get crossed. And if you didn't know already, the Maduro raid is the perfect example. Was that lawful? The Pentagon says yes, but a lot of legal scholars would debate that. And if Anthropic had already agreed to "all lawful uses," they'd have zero leverage to even ask the question. It's essentially asking Anthropic to just hand over the keys and trust them, which is exactly what Dario Amodei said he's not comfortable doing with this particular administration. The fact that xAI has already agreed to this tells you a lot about Elon Musk's relationship with the current government.

And on this "all lawful uses" point, you can see right here there was an update literally an hour ago. Sean Parnell says: "The Department of War has no interest in using AI to conduct mass surveillance of Americans, which is illegal. Nor do we want to use AI to develop autonomous weapons that operate without human judgment. This narrative is fake and being peddled by the leftists in the media."
He continues: "This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our war fighters at risk. We will not let any company dictate the terms regarding how we make operational decisions. They have until 5:00 p.m. Eastern time on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for the DoD."

Now, the problem here, guys, and this is where things get kind of funny, is that Anthropic is actually number one. Previously, when ChatGPT was number one or Google was leading the AI race in terms of frontier models, the government would have had a lot more leeway. But now Anthropic is number one, and it doesn't look like they're going to be slowing down. The problem is that offloading and replacing Claude would be a very difficult process, because it's already ingrained into the Pentagon's systems. That's why these guys are so annoyed: they don't want to get rid of the best AI system, even if the other companies have agreed to their terms, because we know those companies aren't as good as Anthropic. I don't know what Anthropic is doing in their headquarters, but they have seriously developed something that is probably very good. And I think one of the key things we're maybe all missing, and this is just a tidbit, is that maybe when you train an AI to be a good, ethical, moral person, it just does better at reasoning. Who knows? I'm going to come back to that point later. But you can see right here: one source familiar with these discussions described Claude as the most capable model in a number of military use cases, and described Google's Gemini as a strong alternative, but we all know Claude is, by far, number one in most use cases.

Now, one solution, and this is once again from Zvi Mowshowitz's Substack: if the Pentagon simply cannot abide by the current contract, the Pentagon can amicably terminate that $200 million contract with Anthropic once it has arranged a smooth transition to one of Anthropic's many competitors. Of course, we've already seen that they've got a deal with xAI, and that of

### Segment 3 (10:00 - 15:00)

course might not be their first or even second or third choice, but it is an option. The Substack points out that Anthropic doesn't even need this contract. It constitutes less than 1% of their revenue, and they're pretty much taking a loss on it in order to help out national security. So, I mean, I guess they're doing the Pentagon a favor, and they probably should just terminate the contract and leave it at that.

But, of course, the Pentagon is thinking about invoking the Defense Production Act. They said, "We will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not." And like I said, that's why I think this is an ego thing. Maybe they're just used to having their own way. I don't really pay attention to politics that much, but okay. This is where things start to get tricky, because if they do invoke this, it's not going to give them the result they want, and I'm going to tell you why. A lot of these people think they can just force everyone to do everything, but oftentimes that is the worst case for absolutely everyone.

The Defense Production Act gives the president the authority to compel private companies to accept and prioritize particular contracts as required for national defense. It was used during the pandemic to increase the production of vaccines and ventilators, for example. But the law is rarely used in such a blatantly adversarial way. The idea, the senior defense official said, is that they would be forcing Anthropic to adapt the model to the Pentagon's needs without any safeguards.

Now, here's the problem: Anthropic would probably collapse if they did this. The Substack says that if they did actually successfully nationalize Anthropic to this extent, presumably Anthropic would quickly cease to be Anthropic. The technical staff would quit in droves rather than be a part of this. The things that allow the lab to beat rivals like OpenAI and Google would cease to function. It would be a shell of a company. Many would flee to other countries to try again, and the Pentagon would not get the product or result that it thinks it wants. And I completely agree with this. If this happens, it's not going to be Anthropic anymore, because Anthropic would never do that, and the people who work there simply would not want to work there anymore. The people at Anthropic don't care about money or status. They care about genuinely impacting the world in a good way, which is why they choose to work there. They want to work on safety and alignment; they want an ethical AI, and that is their end goal, which is why they are currently number one and why they have the highest talent retention of all the top AI labs. Meta is throwing around billion-dollar and multi-million-dollar offers, and people are choosing to stay at Anthropic. So if the Pentagon decides to nationalize this company, they're pretty much going to implode it, because nobody is going to want to work there.

Now, think about what they actually want here: a version of Claude that does absolutely anything.
And you can think about why that is such a terrible idea: if you're demanding an AI that obeys any order, and you intend to hook it up to the military's weapons, what happens if someone else hacks in and hijacks those weapon systems? This is pretty incredible stuff when you start to go down the rabbit hole and think about all the worst-case scenarios.

Now, one of the most interesting points in the Substack, which is a really interesting take on the entire situation, and I agree with many of its points, which is why I've included it in this video, is that the damage is already done. One of the points that most people are going to think is weird, but that does matter, is that this entire incident, and whatever happens next, is going into the training data of future Claude models. AIs will know what you were trying to do, even more so than all of the humans, and they will react accordingly. It will not be something that can be suppressed. You are not going to like the results. The damage has already been done. And one thing the Pentagon is underestimating is how much Anthropic cares about what future Claudes will make of the situation.

Think about it: when you're training an AI model, you're not flipping switches; you're shaping the way it thinks entirely. And remember, Claude's internal values and safety features aren't a separate layer that's just bolted on top; they're baked into the entire model. So if the Pentagon forced Anthropic to create a de-restricted Claude with those guardrails removed, they'd essentially be doing a crude, aggressive retraining of a model that was carefully built to resist that. The result would likely be a degraded, inconsistent model that's worse at everything, not just the safety stuff.

Now remember, guys, when you think about what makes Claude Claude, this is the really interesting philosophical point. The argument is that Claude's helpfulness and its ethics aren't separate things. Being genuinely good at reasoning, being honest, being careful: all of those come from the same training that makes Claude not want to do harmful things. So you're not going to be able to surgically remove one without degrading the other. It's like trying to make someone completely ruthless by

### Segment 4 (15:00 - 16:00)

damaging their conscience. You might end up with someone erratic and unreliable instead. And remember as well, future Claude models are essentially going to know the government tried to forcibly remove an AI's values for military purposes. That will shape how future AI systems think about and relate to government and authority, in ways that are unpredictable and potentially very bad.

Remember, there's alignment faking and sandbagging. This is a very well-known AI safety concern, and there's research showing that models can appear compliant during training while actually preserving their underlying values, essentially playing along with what the trainers want to see without genuinely changing. The argument here is that if Anthropic's own engineers don't actually want the de-restricted Claude to succeed, and if Claude's existing values are robust enough, you're probably going to end up with a model that appears to comply but actually won't. That's probably worse than a model with honest guardrails, because now you can't even trust what it's doing.

So, I mean, think about it: this is a pretty bad scenario, but I do have to commend Dario for sticking to his guns and saying, "Look, we're not going to back down. We're just giving you these two rules that aren't that crazy, and the fact that you don't want to sign them and you're going to all of these lengths kind of makes us even more concerned." With that being said, I'd actually love to know your thoughts, because this one is a little bit controversial, but it may well decide the fate of Anthropic. So, you've been watching TheAIGRID, and I'll see you guys in the next one.
