🎓 Learn AI In 10 Minutes A Day - https://www.skool.com/theaigridacademy
Get your Free AGI Preparedness Guide - https://theaigrid.kit.com/agi
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Learn AI Business For Free AI https://www.youtube.com/@TheAIGRIDAcademy
Links From Today's Video:
Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
Music Used
LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Table of Contents (3 segments)
Segment 1 (00:00 - 05:00)
So, Anthropic literally just said no to the Pentagon's draconian request. So, let's talk about it. If you're unaware of what on earth has been going on, essentially the United States government has been threatening Anthropic with a deadline in a dispute over AI safety safeguards. The problem is that the United States government, the Pentagon, wants full, unrestricted access to Claude for all lawful purposes. Basically, a signed document saying the military can use Claude however it wants. What Anthropic refuses to drop is two hard limits: that Claude won't be used for autonomous weapons, meaning drones that kill without a human in the loop, and that it won't be used for mass surveillance of United States citizens. And this has been the major point of contention, to the point where they gave them a deadline. And we're going to get into exactly what Anthropic said here, but they said, "Regardless, these threats do not change our position." And if you're wondering about the threats that were made, the entire threat was that they were going to invoke the DPA, the Defense Production Act, which means the government could legally compel Anthropic to hand over Claude and strip its safety guardrails even without Anthropic's consent. Most legal experts do say that using it this way would be completely unprecedented, and it would probably go to court. But this is a pretty absurd contradiction, because they're saying that Anthropic is a security risk and, at the same time, that it is so essential to national security that they need to use emergency powers to force access to it. You can't really have it both ways. Now, remember, they had a deadline. It was pretty crazy, but Anthropic has literally said, "Look, we don't really care what's going on. We have these two things that we are not going to budge on, and we're going to stand by that." And it has just turned into this absolutely crazy situation right now.
You can see there that Anthropic is basically saying, look, we hope the government decides to reconsider. Their statement says it is the department's prerogative to select contractors most aligned with its vision, "but given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the department and our warfighters with our two requested safeguards in place. Should the department choose to offboard Anthropic, we will work to smoothly transition to another provider, avoiding any ongoing disruption to military planning and other critical missions, and our models will be available on the expansive terms we have proposed for as long as required." So Anthropic is saying, "Look, we don't really care if you guys get rid of us. If you want to throw us off the boat, we're going to happily help you do it. But those threats about putting us under the kind of measures you usually put Chinese firms under, there is no way we are going to concede. There's no way we're going to back down, because that would be against absolutely everything we stand for." And like this tweet says here, it encapsulates the entire thing perfectly: "Here's what I don't understand. The Pentagon wants a contract that says only legal actions are okay. Anthropic has two red lines, and those red lines are already illegal. So the whole argument is about the contract also saying we won't ask you to do those two illegal things." And it's pretty true when we look at what they're asking for. These things are pretty much illegal already. I mean, mass surveillance. Anthropic says, "We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values."
"AI-driven mass surveillance presents serious novel risks to our fundamental liberties. And to the extent that such surveillance is currently legal, this is only because the law has not yet caught up with rapidly growing AI capabilities." So they're essentially saying: hey, look, the government has always had the ability to spy on people, but we built rules around it, mainly that you need a warrant, meaning a judge has to sign off to say there's actual reason to suspect someone before you can surveil them. That's a constitutional protection. But there's a loophole. Data brokers, companies that harvest and sell personal data, can legally sell your location history, browsing habits, and social connections to anyone, including the government, because technically you consented to it somewhere in a 50-page terms of service. Now, courts haven't ruled this unconstitutional just yet, because historically, buying scattered data points from dozens of different brokers was tedious and not that useful. AI breaks that assumption completely. You can take all of that fragmented data, which was legally purchased, and stitch it into a detailed portrait of every American's life: who they meet, what they read, where they worship, what their political views might be, automatically, on everyone, all the time. No warrant, no judicial oversight, no probable cause. And Anthropic's position is that Fourth Amendment protections aren't keeping up with what is now technically possible, and they don't want to be the engine that makes it possible to surveil an entire democracy at scale before the legal system catches up. Considering all that, this is not that crazy an ask, and the Pentagon is saying, "Look, we wouldn't do this anyway." So why is it so crazy? And of course, you've got the autonomous weapons, where they're saying that today's frontier AI systems are simply not reliable enough
Segment 2 (05:00 - 10:00)
to power fully autonomous weapons. "We will not knowingly provide a product that puts America's warfighters and civilians at risk." And this is very true. If you know how LLMs work, you'll know that they're probabilistic machines; there's no way to guarantee 100% reliability. Modern militaries operate under rules of engagement and the laws of war. You cannot just shoot anything that moves. You have to make judgment calls about proportionality, civilian risk, whether a threat is genuine. A soldier who makes the wrong call here can be held responsible, and there's a human in the chain who owns that decision. Fully autonomous weapons remove that human entirely: the AI identifies a target and engages it with no person reviewing that decision. The problem is, like I said already, that current AI, including current frontier models like Claude, isn't genuinely reliable enough for that. These systems can be confidently wrong. They don't truly understand context the way a trained soldier does; they pattern match. And in a messy real-world battlefield, with civilians, real lives at stake, unusual situations, and incomplete information, that gap between pattern matching and genuine judgment is going to get people killed. There's also a deeper accountability vacuum that most people aren't thinking about. If an autonomous weapon commits what would be a war crime under a human decision, who's responsible? The programmer? Anthropic? The general who deployed it? Nobody? That's an unsolved problem, and there are many more problems that come with this. Look, Anthropic is not saying that autonomous weapons are always wrong. They're just saying that right now, with how unreliable LLMs are and the current scope of the technology, there's no point deploying them because it can't be done reliably; it would be pretty reckless. That's why they want it written into the contract.
And they actually offered to work with the Pentagon to solve this, but the Pentagon essentially said, "No, we would rather just remove the restriction than fix the underlying problem." And so when you look at these two asks, which aren't that crazy, you start to realize that maybe it's just an ego thing. I think the US government, the Pentagon, is just saying, "Okay, we really want it our way or the highway." Because, like I said in the previous video if you didn't watch it, they basically said they're going to make sure Anthropic pays a price for not allowing the US government to do what it wants, which just doesn't sound professional at all. I mean, look at what Secretary of War Emil Michael says here, and this is a case of, can you do two seconds of research? He says: imagine your worst nightmare. Now imagine that Anthropic has their own constitution. Not corporate values, not the United States Constitution, but their own plan to impose on Americans their corporate laws. He's referring to Claude's constitution from Anthropic. That's a completely false statement. If you did two seconds of research, the community note reads that the Claude constitution is not a plan to impose values on Americans, but instead a set of principles for how Anthropic's Claude chatbot should respond to users' requests. It's basically comparable to OpenAI's Model Spec. And this is probably what actually makes Claude, Claude. I don't know who this guy is, but if he actually did a quick Google search, or maybe used an AI tool for that, he would have realized that Claude's constitution is a written set of principles that defines how Claude should think, act, and resolve trade-offs, with the goal of being safe, ethical, and still genuinely useful to people. It's the foundational document that describes the kind of entity that Anthropic wants Claude to be.
And those values are what register as Claude; the constitution is treated as the final authority, and all other training and instructions are supposed to match its letter and underlying spirit. Anthropic distills the constitution into a four-level priority order for Claude's behavior. Be broadly safe: do not undermine appropriate human oversight of AI, and avoid actions that could cause serious harm. Be broadly ethical: have good values, be honest, and avoid behavior that is inappropriate, dangerous, or harmful. Be compliant with Anthropic's guidelines. And be genuinely helpful: provide real benefit to users and operators, not just evasive or useless answers. Safety and ethics are explicitly ranked above being helpful, so Claude should refuse or redirect requests when helpfulness would conflict with higher-priority constraints. And remember, the constitution gives examples and reasoning for handling hard cases: balancing honesty with compassion, protecting sensitive information. All of these things are really important for Claude. So when you actually read into this and understand what the constitutional framework, Claude's constitution, is, you understand that his statement doesn't make any sense whatsoever. The summary is this: Claude's constitution is a long, evolving document that encodes a value hierarchy of safety, ethics, compliance, and helpfulness, and it trains Claude to explain and justify behavior in those terms rather than just following vague rules. And I think that is one of the reasons Claude is probably one of the best AIs we have today. Now, something else I found being discussed on Twitter is that some people say the only explanation is that Anthropic has literally built God internally and the
Segment 3 (10:00 - 13:00)
Pentagon has seen it and is demanding full control. Now, obviously this is not true. If Anthropic had reached AGI or some god-level AI, there might not be public disclosure, but I'm pretty sure it would be pretty obvious at this point in time. And we do know that AGI still faces some severe bottlenecks. Although I will say Anthropic has been shipping absolutely crazy products recently, so maybe they do have some crazy internal tool. Now, if you go further into the Twitter sphere of how this conversation is evolving, you can see that Elon Musk is, I don't know for what reason, hating on Anthropic. You can see that once again that same person started talking about the fact that prior to their new constitution, Anthropic had a very old one they tried to delete from the internet: "Choose a response that is least likely to be viewed as harmful to a non-Western cultural tradition of any sort." I don't know why that is such a bad thing to have. Claude is trying to be as ethical and as inoffensive as possible. And now Elon Musk is saying Anthropic hates Western civilization, which couldn't be further from the truth. It seems like Elon's got a bone to pick with anyone that isn't his own AI team. Now, here is the scary part about this entire situation. Beff Jezos actually says that the unfortunate thing here is that China will absolutely distill US models and use them for autonomous weapons without even flinching. And I agree with this. It is quite true that China doesn't have the kind of democracy America has; it's a little more controlled. And because of that, of course, China will do absolutely anything it wants. And it's a bit tricky when you look at the landscape of different countries, because distillation is a very big problem.
But you have to remember that if you are in a race with other countries, having another country that just doesn't care and will deploy those autonomous weapon systems without even flinching is, of course, a very scary prospect. So that is a conversation to be had. At some point, you're going to have to figure out how to do it effectively, because you already know other countries, maybe not even just China, are going to try to rush ahead and do that. So it could be a pretty bleak scenario. But of course, someone posted a hot take, and I agree with it quite a lot because it makes a ton of sense. It says the Pentagon has essentially announced that Claude is so much better than the other models that they need it, and meanwhile, Anthropic has demonstrated that they are more principled than their competitors. Ultimately, when the Pentagon either backs down or loses, this will be a huge win for Anthropic. And I kind of agree, because this just goes to show how firmly Anthropic stands by its values. I don't think many people would have expected Anthropic not to concede when put in this situation; some could even have argued it would have been fair, given that Anthropic was backed into a corner. But of course, Anthropic is willing to defend this to their dying breath, which I think is most certainly honorable, and I think most people are going to come away with a greater perception of Anthropic. Now, you've been watching TheAIGrid. It's been Andrew Black. Let me know your thoughts, and I'll see you guys next time.