# USA'S  NEW 'AI POWERED Fighter Jet  Takes the Industry By STORM! (NOW ANNOUNCED!)

## Метаданные

- **Канал:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=HDB2S3XMFVw
- **Дата:** 17.06.2023
- **Длительность:** 13:16
- **Просмотры:** 12,075

## Описание

Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
#IntelligentSystems
#Automation
#TechInnovation

## Содержание

### [0:00](https://www.youtube.com/watch?v=HDB2S3XMFVw) Intro

with rapid advancements in AI it's no surprise when a new things constantly gets announced and one thing that many people don't focus on when it comes to the AI advancements is Warfare now Warfare is not something that people do want to focus on but it is a part of technology that is rapidly improving and

### [0:16](https://www.youtube.com/watch?v=HDB2S3XMFVw&t=16s) AI Powered Fighter Jet

on Monday the US Department of defenses research agency DARPA announced that its AI algorithms can now control an actual F-16 in flight so if you don't know what an F-16 is it's a fighter jet and it was first introduced in 1978 and it's now seemingly involved into an autonomous play so it states that in early December 2022 Ace algorithm developers uploaded their AI software into a specifically modified F-16 test aircraft known as the x62a or Vista variable in-flight simulator test aircraft at the air force test pilot school at Edwards Air Force Base California and flew multiple flights over several days the flights demonstrated that AI agents can control a full-scale fight digit and provide invaluable live flight data the upper's air combat Evolution program began in 2019 when the agency began to work on a human machine collaboration in dogfighting it began testing out ai-powered flights in 2020 when the organization had what was called the alpha dogfight trials a competition between different companies to see who could create the most advanced algorithm for an ai-powered aircraft so this Ace programmed the air combat evolution is one of more than 600 Department of Defense projects that are incorporating artificial intelligence into the nation's defense programs in 2018 the government committed to spending up to 2 billion on AI investment in the next five years and spent 2. 58 billion on research and development in 2022 alone this article continues to state that other AI defense projects include making robots and wearable technology and intelligence gathering other articles also state that this program will continue testing flights through 2023 with hopes of developing a working prototype by the end of the year now this honestly comes as no surprise as AI continues to evolve rapidly we are going to see large scale changes across every single industry including Warfare now this wasn't the only autonomous thing that we did see in Warfare which was strikingly similar to what we just saw and honestly some of these game-changing events have taken place under the radar now although AI is going to be affecting everything like we just stated in Warfare it is a bit tricky you see Warfare is something that largely involves human lives now the use of AI in Warfare is to a be more efficient and to be of course reduce the number of human casualties if there are robots on the field then this reduces the amount of humans that have to be there and thus reduces human casualties now what is very interesting was a recent test which was also done with the US Air Force in which they actually deny running a simulation in which an AI drone killed an operator so according to this article by the guardian it goes into details on how an AI drone that was deployed used highly unexpected strategies to achieve its goal so the article starts out by saying that the US Air Force has denied its conduct acted an AI simulation in which a drone decided to kill its operator to prevent it from interfering with efforts to achieve its Mission call Tucker Cinco Hamilton described a simulated test in which a drone powered by artificial intelligence was used was advised to destroy an enemy air defensive systems and ultimately attacked anyone who interfered with that order the system started realizing that while they did identify the threat at times the human operator would tell it not to kill a threat but it got points for killing that threat so what did it do it killed the operator because that person was keeping it from accomplishing its objective so it goes on to say that we train the system hey don't kill the operator that's bad you're gonna lose points if you do that so what does it start doing it starts destroying the communication tower that the operator uses to communicate with the Drone to stop it from killing its Target now of course no real person was harmed and Hamilton who was an experimental fighter pilot has warned against relying too much on AI and said the test showed you can't have a conversation about artificial intelligence and intelligence machine learning and autonomy if you're not going to talk about ethics and AI now the article continues to state that the US Air Force spokesperson denied any such simulation had taken place now whether or not these statements are true it is definitely interesting that AI is going to be taking a part of warfare whether or not we like it so one company that is revolutionizing the way we interact with large language models when it comes to what they're used for is

### [4:23](https://www.youtube.com/watch?v=HDB2S3XMFVw&t=263s) Large Language Model

talented now this company essentially uses large language models to strategically organize strategies on the battlefield now currently what you're seeing is the large language model that is developed by them which kind of looks like something that is similar to chat TBT except this is specifically for Warfare situations so you can see that it starts out with an alert that says anomalous military activity detected then what you can do is you can then prompt this AI assistant to say give me more details you can see using the multimodal capabilities it gives you a summary of what actually happened it shows you five different pieces of military equipment in the range and then you can ask it and prompt it different kinds of thing it shows you exactly what kind of attack is probably going to happen and then it gives us a different Source now of course you can then ask the imagery in the location and ask it for what kind of potential defenses do we currently have it presents two different tasking options right here you can see it says that there's an MQ-9 in the region that can provide full motion video with a full resolution of one meter and then of course it says we've got another satellite in the area which can also give you then you can see right here that the user actually goes ahead and says toss the MQ-9 to capture video of this location now the MQ-9 is obviously a UAV and then of course you can essentially set a tasking request to hand it off to the other commanders and you can see right here it does get approved by this person on the team then we can see here that as the Drone decides to move across following our request we can then see the enemy tank in the area then once we've identified that this is actually real we can then generate three courses of action to Target this enemy equipment so now what we can do is we can then use this AI to generate three different courses of action that will engage in these enemy combatants now what's also interesting about this is that we can then instantly send these off to different members of our team and then you can effectively strategize how to do this effective you can also see right here that they're sending these three options to the commander for review then what's also interesting is that through this you can also see that this large language model has tons of different real-time live private data and what's also interesting is that different members of the group for example different commanders different soldiers different units all have access to different pieces of data which respect the Privacy that these people are working on I do think that this was something that I did get surprised by because I didn't really expect large language models to be the driving force of what would be part of a military's large defense now what I thought was also interesting as well is that this large language model that they do have for military does have some multimodal capabilities for example they do say analyze the battlefield considering a striker vehicle and a platoon-sized unit then the large language model moves on and decides to analyze the terrain and see what is the best course of action where you should Traverse to how you should move and

### [7:03](https://www.youtube.com/watch?v=HDB2S3XMFVw&t=423s) Boston Dynamics

what's going to be the most efficient mode of travel now another thing that people do honestly mention is Boston Dynamics now there's two things that we do want to mention about when it comes to Boston Dynamics and the autonomous AI race because I do think that there is a lot of misinformation out there the first thing is we're going to tackle is the misinformation regarding Boston Dynamics so one of the clips that has been circulating on the internet was this video by Corridor Digital so it looks like a Boston Dynamics robot which we know as spot actually having some kind of weapons and then going I guess you could say Rogue after being constantly provoked now although the clip is definitely fun and it does spark a lot of fear because this is definitely what we are afraid of as humans who have seen many different movies in which the robots always do seem to go against the humans eventually it's not real this was a VFX shot as the corridor digital team are a team of Highly qualified VFX artists that constantly produce videos like this that engage in our nature to share viral content now I've got to be honest the video is definitely very interesting and it showcases potentially what could happen if AI robots do go Rogue but it isn't real the reality of the situation is that Boston Dynamics has actually had robots before that have been used by the military and this was called LS3 now LS3 was a legged Squad support system which was developed by button Dynamics with funding from DARPA and the US Marine Corps it was designed to carry 400 pounds of payload and travel 20 miles without refueling so the S3 has sensors that let it follow a human leader while avoiding obstacles in the terrain and essentially this was just a military support unit now although this video was 10 years ago it's definitely interesting to see that there are no real updates to this kind of software my best guess is that currently although there are many different updates regarding large language models and AI systems Warfare is something that is very different you see when a company or a country produces something that can be used in Warfare It's not advantageous to share that with the world superpowers like the United States spend billions of dollars every

### [8:57](https://www.youtube.com/watch?v=HDB2S3XMFVw&t=537s) Future of Life Institute

single year on Military research and whatever they do find out whatever groundbreaking research they do have it's likely never to be shared with us so whatever AI announcement that we do get from whether it be Boston Dynamics or other companies it's likely that we're going to get these advancements well after they've been either used or deployed now something that is very important and something that I guarantee many people also didn't know about is the future of Life Institute now what you likely did know about was this Paul's giant AI experiment an open letter so essentially if you haven't heard about what this letter was it was essentially a large letter in which many notable figures actually signed Elon Musk had some of the founders of Apple just to name a few now why I'm bringing this up right now is because this same Institute the one which essentially said we want to pause intelligence development because we do believe that any system more powerful than gpd4 is likely to lead to unforeseen circumstances in 2015 this same institution actually did a letter in regards to autonomous weapons and AI so the letter goes autonomous weapons open letter Ai and Robotics researchers autonomous weapons select and engage targets without human intervention artificial intelligence technology has reached a point where the deployment of such systems is practically if not legally feasible within years not decades and the stakes are high autonomous weapons have been described as the third revolution in Warfare after gunpowder and nuclear arms now what's interesting about this was that this was published on February the 9th 2016. now I don't know about you guys but as the artificial intelligence race has definitely grown since then we haven't really seen any major advancements in AI robotics when it comes to autonomous weapons so this goes to show what were they really afraid of back then in 2016 whereas we know now ai is advancing at an Ever advancing rate so the article continues to state that many arguments have been made for and against autonomous weapons for example that replacing human Soldiers by machines is good by reducing casualties for the owner but bad thereby lowering the threshold for going to battle the key question for Humanity today is whether to start the global AI arms race or to prevent it from starting if any major military power pushes ahead with area weapon development a global arms race is virtually inevitable and the end point of this technological trajectory is obvious autonomous weapons will become The Clash of tomorrow unlike nuclear weapons they require no costly or hard to obtain raw materials so they will become ubiquitous and cheap for all significant military powers to mass-produce it will only be a matter of time until they appear on the black market or in the hands of terrorists dictators wishing to better control the populist Warlords wishing to perpetrate certain crimes autonomous weapons are ideal for tasks such as assassinations and many other additional crimes we therefore believe that military air arms would not be beneficial for Humanity there are many ways in which AI can make Battlefield safer for humans especially civilians without creating new tools that could harm human life just as most chemists and biologists have no interest in building chemical or biological weapons AI researchers have no interest in building AI weapons and do not want to tarnish their field by doing so potentially creating a major public backlash against AI that curtails its future societal benefits in summary we believe that AI has the great potential to benefit Humanity in many ways and that the goal of the field should be to do so starting a military AI arms race is a bad idea and should be prevented by a ban on offensive autonomous weapons Beyond meaningful human control the last statement that I will Usher into this video is one of caution you see one thing that you might not have known is that on December the 26 1983 the Soviet early warning system indicated that the United States had launched several ballistic missiles towards the Soviet Union Petrov however doubted the accuracy of the system's information because it only showed a handful of missiles rather than a full-scale attack despite the enormous pressure to report the incident and launch a retaliation strike Petrov made the courageous decision to declare it a false alarm and this actually was later claimed by the additional radar data so essentially someone made a decision prevented an entire nuclear war simply because there was an error in the system and if we did have an autonomous AI that perhaps let's say it made a mistake and it decided to retaliate immediately it would have ended in absolute so I do think that humans are important and although AI systems are increasingly more powerful and increasingly more smarter than humans in some scenarios human oversight is always necessary

---
*Источник: https://ekstraktznaniy.ru/video/14811*