# OpenAI & Google Just JOINED FORCES - Staff Demand “No Killer AI”

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=SDKr_qh30lo
- **Date:** 28.02.2026
- **Duration:** 12:34
- **Views:** 10,498
- **Source:** https://ekstraktznaniy.ru/video/11884

## Description

🎓 Learn AI In 10 Minutes A Day - https://www.skool.com/theaigridacademy
Get your Free AGI Preparedness Guide - https://theaigrid.kit.com/agi
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Learn AI Business For Free AI https://www.youtube.com/@TheAIGRIDAcademy



Links From Today's Video:
https://notdivided.org/

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Transcript

### Segment 1 (00:00 - 05:00)

So, employees at Google and OpenAI recently signed a letter, and we have to talk about it. It seems like other companies are starting to agree with Anthropic and see just how extreme the United States government's asks are in terms of military usage of AI. There is an open letter that was recently published, and it's been going viral on Twitter. Well, not that viral, but I'm making a video on it because I think it's rather important. In this entire situation, if you haven't heard, Anthropic and the United States government are essentially at odds, because the government wants to use Claude for military purposes, and Anthropic is saying: you can do it, but there are certain things we're just not going to allow you to do.

Now, this open letter is from employees of Google and OpenAI. You can see it says: "We are the employees of Google and OpenAI, two of the top AI companies in the world. We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permissions to use our models for mass domestic surveillance and autonomously killing people without human oversight." So remember, this is coming from two of the most powerful AI companies in the world, with the top minds, top researchers, and top talent. And currently you can see the signatories: Google has 209, all of whom are current employees, and OpenAI has 64, all of whom are current employees. And that's just at the time of recording this video. It's quite likely that as time goes on, more and more individuals will sign and side with this position, because I think morally you can understand right from wrong, and if the tech isn't there yet, it doesn't make sense to continue. Now, this is pretty surprising, because there were talks ongoing with Google and OpenAI.
But of course, you have the United States government, the Department of War, threatening to invoke the Defense Production Act to force Anthropic to serve its model to the military and tailor that model to the military's needs, and to label the company a supply chain risk, all in retaliation for Anthropic sticking to its red lines: not allowing its models to be used for domestic mass surveillance or for autonomously killing people without human oversight. That is pretty crazy.

Now remember, if you want to look at where these companies are in terms of the negotiations, OpenAI is currently still in talks, and it's clear there's probably going to be some kind of power struggle. You can see here it says that Grok, Google's Gemini, and OpenAI's ChatGPT are all available on the military's unclassified systems, and Google and OpenAI have also been in talks to move into the classified space. But it says the Pentagon is moving up these negotiations, as it is probably going to sever its relationship with Anthropic. One source said the Pentagon had reached out to OpenAI to reignite talks with a new sense of urgency, although the parties were still not close to a deal due to the complex issues at play. A second source confirmed the Pentagon's outreach had been accelerating.

So it's pretty clear the Pentagon is starting to talk to OpenAI and these other companies more, because their position is: look, we do want something good here technology-wise, but if Anthropic is not going to do that for us, then we're going to have to seek other companies. Now, of course, Google is there as well. And what's interesting is it also mentioned that Google is closer to a deal than OpenAI. A defense official disputed the characterization of Google as much closer than OpenAI to a deal, saying talks were ongoing with both and the department believes both will sign the agreements.
However, administration officials insist both OpenAI and Google will have to agree to the "all lawful uses" purpose criteria, and that is of course the point of contention, because Anthropic is not going to sign that, and that "all lawful uses" language is where things start to get murky. One of the sources said it was not clear if OpenAI would agree to that standard, but they're talking. So it seems Google is more likely to sign, but at OpenAI, some of those researchers may be thinking: look, maybe we really should hold off on this, because Anthropic has already set the gold standard.

Now, the "all lawful uses" criteria basically means the government can essentially decide what counts as a lawful use. But that becomes very murky because things change: they could decide that something which isn't really lawful is lawful, and then those AI researchers, moral people, will have contributed to something pretty immoral, which is of course not what they want. So it's going to be tricky to see exactly what occurs.

Now, you can see the open letter talks about the fact that while the government is still negotiating with Google and OpenAI, it's trying to divide each company with the fear that the other one will give in. The letter says that strategy only works if none of us know where the others stand, and that this letter serves to create a shared understanding and solidarity in the face of pressure

### Segment 2 (05:00 - 10:00)

from the Department of War. So you can see here, they're basically saying: look, we know what's going to happen. The government is going to come to us and say, these other companies are signing on, they're getting these huge military budgets and all of these benefits, so why aren't you going to sign with us? And the employees' point is that a lot of the researchers have worked at all of these companies, a lot of these people know each other, and they're saying: okay, you can try to divide and conquer us, but that simply isn't going to work, because we are going to band together, put our foot down, and say the tech isn't there yet, and we simply cannot offer you what you want, because it would be reckless and pretty much irresponsible.

So when you actually take a look at what is going on here, I think it's pretty remarkable that these companies have decided to band together on this, because you could argue that sometimes people wouldn't care; they would just put this on the back burner. But I think we have to really respect Anthropic, because if Anthropic hadn't held to the value of refusing to back down here, other AI companies might have felt that maybe this is okay. I think it is so important that Anthropic has led the charge, because now you've got other AI companies also saying: no, we're not going to do this, we don't have to back down. And even if the worst-case scenario happens, maybe the government will realize that it should actually work with these companies rather than against them.
Now, what's interesting here, and this is one of the most interesting things, and I don't know how people are going to view this, or whether people are so pro-Elon Musk that if he makes a controversial decision they don't really care: you can see it says Musk's xAI and the Pentagon have actually reached a deal to use Grok in classified systems. Now, this is not completely confirmed, but it looks like Grok and Elon Musk are going to sign off on that deal regardless and allow all lawful uses. Of course, some people are going to have controversial opinions, and it's completely up for grabs what you're going to say here, but I think it's a little concerning that xAI, which so many people have left, has Elon Musk signing this deal with the government, especially with all of this contention going around. It would have been great if these companies were all sticking together, but it seems xAI may not be in that same room. So it will be interesting to see how xAI fares.

But I think this is probably one of the most important letters, because at first, when it was just Anthropic, the government could reason: okay, it's just one company, we can use Gemini or Google instead. But now, when you have two of the top AI companies in the world saying, look, we are not going to stand by and just let you do this, we have to stick by these values, I think this is actually going to make the Department of War reconsider what it is requesting. Because if they do this draconian thing where they label Anthropic a security risk, impose those restrictions, and force Anthropic to hand over its systems, or maybe even say, Google and OpenAI, we're going to force you to do it too, that is going to be a pretty horrific scenario.
And one of the key things I spoke about in the previous video was that even if they do decide to invoke that act, which forces these companies to hand things over, it will not go well, because the individuals working at those companies may not want to work there anymore. And the AI systems themselves, we already know, may sandbag those operations internally. AI systems are extremely complex, and it's going to be really interesting to see how this entire situation pans out.

So genuinely, this is probably one of the most interesting times, because either Anthropic gets forced into a situation where it has to hand over Claude, and I don't know how Anthropic would deal with that. They would probably have to go to court; it's going to be a hot mess, and it would probably slow down the rate of production, which of course wouldn't be good. Or maybe all three of these companies stand together and say: look, none of us is going to budge on this, and the government will have to say: maybe your requests aren't that crazy, maybe we can just work together to figure out how to get these systems to a point where they're good enough not to create real issues where innocent lives are lost. I don't think that's that crazy a scenario. The situation just seems to have been blown out of proportion because the government is not backing down.

Now, something else I found pretty interesting: Jeff Dean, who is chief scientist at Google DeepMind, was on Twitter saying that he also signed a letter in 2018: "I signed this letter in 2018. It's about autonomous weapons, and I think you should read it again, because my position hasn't changed." And you can see the letter here. It says: "Lethal autonomous weapons pledge. Artificial intelligence is poised to play an increasing role in military systems."

### Segment 3 (10:00 - 12:00)

"There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI. In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine." And you can see this had 5,000 signatures, and of course it was signed by people from Google, DeepMind, and all of these other AI companies. I'm not sure where some of these companies stand now, but the point I'm trying to make is that when the entire AI space says one huge thing together, it is going to be very interesting to see what happens.

The letter talks about the fact that thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. And of course, that's exactly what's going on in today's world. It says that lethal autonomous weapons have characteristics quite different from nuclear, chemical, and biological weapons, and that the unilateral actions of a single group could easily spark an arms race that the international community lacks the tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.
So remember, this is what was being said in 2018, and literally eight years later, here we are again, with Anthropic saying: look, we are not about to sign off on this. When you start to realize that, I don't think it's that much of a crazy ask. You can see it says that the signatories hold themselves to a high standard: "we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons," and they ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join them in this pledge.

So here is the clear statement. I think it's clear that all of these researchers and companies have been talking about this for quite some time, so it isn't some new phenomenon. And I think it's going to be interesting to see how the discourse goes, because on one side, the only legitimate claim the government has is that foreign adversaries may just blazingly move ahead and deploy these autonomous weapon systems and mass surveillance for whatever reasons, and maybe the United States gets left behind. But of course, you do have to get to a point where those systems are genuinely good enough that human lives are never at stake. I would love to know your thoughts about this. You're watching TheAIGRID, and I'll see you guys in the next one.
