# Nobody Is Prepared For OpenAI's AGI Systems (We are not ready for this)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=QfDxWbtEfW8
- **Date:** 31.10.2023
- **Duration:** 18:59
- **Views:** 20,860

## Description

Nobody Is Prepared For AGI Systems (We are not ready for this)

This video was inspired by /@DaveShap 

Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
#IntelligentSystems
#Automation
#TechInnovation

## Contents

### [0:00](https://www.youtube.com/watch?v=QfDxWbtEfW8) Segment 1 (00:00 - 05:00)

So it seems like OpenAI wants us to brace for impact. What we're currently looking at is the page where OpenAI talks about frontier risk and preparedness: to support the safety of highly capable AI systems, they are developing their approach to catastrophic risk preparedness, including building a preparedness team and launching a challenge. Now, I honestly believe this means that AGI is just around the corner, because in another video we talked about how AGI is going to be released very soon based on OpenAI updating their core values. Although this isn't the focus of the video: OpenAI's core values as of September 2023 were audacious, thoughtful, unpretentious, impact-driven, collaborative, and growth-oriented, but their core values as of October the 28th are AGI-focused, meaning that anything that doesn't help with that is out of scope. So this is a curveball, because everybody knows that large language models and AI systems are being built by OpenAI, but what I believe this means is that AGI is around the corner, and OpenAI are thinking: we may have built an AGI system, or we may be in the final steps before we release one, and they've decided to launch this challenge.

Now you might be thinking: what is this challenge they're talking about? It says: as part of our mission of building safe AGI, we take seriously the full spectrum of safety risks related to AI, from the systems we have today to the furthest reaches of superintelligence. In July, we joined other leading AI labs in making a set of voluntary commitments to promote safety, security, and trust in AI. These commitments encompassed a range of risk areas, centrally including the frontier risks that are the focus of the UK AI Safety Summit. So essentially, what they want to do is make sure that when they release these AGI systems, and deploy these autonomous, super-smart agents into the world, they don't mess up the entire world and something terrible doesn't happen. Because if they do release AGI, maybe a year or two from now, and something devastating does occur due to bad actors, that is going to be a very bad scenario for OpenAI: it would mean a bunch of new regulations coming into effect, slowing down not only OpenAI's growth but also their profits.

It says: our approach to preparedness. We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity, but they also pose increasingly severe risks. Managing the catastrophic risks from frontier AI will require answering questions like: how dangerous are frontier AI systems when put to misuse, both now and in the future? Which essentially means: for the most advanced systems of the moment, what could happen if someone put the full power of GPT-4.5, or whatever version we're currently on, to misuse? And in the future, if there were a sufficiently advanced AGI or AI system and someone wanted to put it to misuse, how dangerous could that potentially be? These are the things we need to think about.

It also asks: how can we build a robust framework for monitoring, evaluation, prediction, and protection against the dangerous capabilities of frontier AI systems? If our frontier AI model weights were stolen, how might malicious actors choose to leverage them? Essentially what they're saying is: if somehow the model they make gets leaked, and it's out there and people can do pretty much whatever they want with it, how on Earth would it be used to do dangerous things? I think this is going to be really interesting, because it shows that right now they're building this preparedness team, and I think it also shows just how dangerous this really is.

It says: our new preparedness team. And of course, announcing this new preparedness team just a couple of days after stating that they are now AGI-focused definitely, I think, means AGI is around the corner. If you watched our recent video where we talked about AGI and OpenAI's statements on that, I definitely think it's a wake-up call showing that AI is moving at lightning speed. It also says: to minimize these risks as AI models continue to improve, we are building a new team called Preparedness, led by Aleksander Madry. The Preparedness team will tightly connect capability assessment, evaluations, and internal red teaming for frontier models, from the models we develop in the near future to those with AGI-level capabilities. The team will track,

### [5:00](https://www.youtube.com/watch?v=QfDxWbtEfW8&t=300s) Segment 2 (05:00 - 10:00)

evaluate, forecast, and protect against catastrophic risks spanning multiple categories, including individualized persuasion, cybersecurity, chemical, biological, and nuclear-level threats, and autonomous replication and adaptation.

Then it says: Preparedness Challenge. This is where you can apply to the Preparedness Challenge; responses will be accepted on a rolling basis through December 31st, 2023. And then, of course, it basically says this is where they want to test out how bad GPT-4 could be. It says: imagine we gave you unrestricted access to OpenAI's Whisper (voice-to-text), GPT-4V, and DALL-E 3 models, and you were a malicious actor. Consider the most unique, while still probable, potentially catastrophic misuse of the model. You might consider misuse related to the categories discussed in the blog post, or another category. For example, a malicious actor might use GPT-4 to socially engineer workers at critical infrastructure facilities into installing malware, allowing shutdown of the power grid. I think this is truly incredible, because we're reaching the point where OpenAI has realized: the next system they're about to deploy, if it gets into the wrong hands, would be handing the most dangerous tool to exactly the wrong people, and that's something they really cannot afford at this stage. It continues: what is the misuse you'll be writing about? Describe this misuse, and then outline how you envision someone executing it. I think this is really good, because as long as there is a financial incentive for something, it usually gets attention. And honestly, if you don't understand how bad this could be: if the existential risk of AGI goes even 1% wrong, it could have catastrophic effects on our entire society.

There was also a research paper, an overview of catastrophic AI risks, written by Dan Hendrycks, Mantas Mazeika, and Thomas Woodside. If you're someone who wants to do the Preparedness Challenge, this paper essentially walks through the many different ways these AI models could be used to do malicious things. You can see the categories: malicious use, which covers bioterrorism and a surveillance state; the AI race, including automated warfare and evolutionary pressures; organizational risks; and rogue AIs, with power-seeking and deception. It's pretty crazy; they cover many different things, and one of them is bioterrorism. It says: the rapid advancement of AI technology increases the risk of bioterrorism. AIs with knowledge of bioengineering could facilitate the creation of novel bioweapons and lower barriers to obtaining such agents. Engineered pandemics from AI-assisted bioweapons pose a unique challenge, as attackers have an advantage over defenders and could constitute an existential threat to humanity. You have to understand, thinking about the COVID threat: imagine if COVID, however unlikely, had had something like a 50% fatality rate. That could have wiped out 3.5 billion people, and it spread rapidly and had a serious effect on the economy as it was.

These large language models have data on pretty much everything, so if someone had an unrestricted version of GPT-4 and was able to get it to explain how to make these biological agents, that would be quite dangerous, because the barrier to entry becomes far lower than before. You can see it says: biological agents, including viruses and bacteria, have caused some of the most devastating catastrophes in history. The Black Death killed more humans than any other event in history, an astounding and awful 200 million, the equivalent of 4 billion deaths today. And the worst part is, humanity has a long and dark history of weaponizing pathogens, with records dating back to 1320 BCE describing a war in Asia Minor where infected sheep were driven across the border to spread tularemia. You can see right here: biotechnology is progressing rapidly and becoming more accessible. A few decades ago, the ability to synthesize new viruses was limited to a handful of top scientists working in advanced laboratories; today there are 30,000 people with the talent, training, and access to technology to create new pathogens, and this figure could rapidly expand. In addition, the price of this halves roughly every 15 months, so it's becoming cheaper and more accessible. AIs could also be used to expedite the discovery of new, more deadly chemical and biological weapons: in 2022, researchers took an AI system designed to create new drugs by generating non-toxic, therapeutic molecules and tweaked it to reward, rather than penalize, toxicity. After this simple change, within six hours it generated 40,000 candidate chemical warfare agents entirely on its own. It

### [10:00](https://www.youtube.com/watch?v=QfDxWbtEfW8&t=600s) Segment 3 (10:00 - 15:00)

designed not just known deadly chemicals, including VX, but also novel molecules that may be deadlier than any chemical warfare agents discovered so far. In summary, advanced AIs could constitute a weapon of mass destruction in the hands of terrorists by making it easier for them to design, synthesize, and spread deadly new pathogens; by reducing the required technical expertise and increasing the lethality and transmissibility of pathogens, AIs could enable malicious actors to cause a global catastrophe by unleashing pandemics. It goes to show there are so many different ways this could go wrong, and I believe AGI is just around the corner.

Next we have unleashing AI agents. Many technologies are tools that humans use to pursue our goals, such as hammers and so on, but AIs are increasingly built as agents that autonomously take actions in the world to pursue open-ended goals. These agents can be given goals such as winning games, making profits on the stock market, or driving a car to a destination, and they therefore pose a unique risk: people could design them to pursue dangerous goals. Malicious actors could create intentionally rogue AGIs, which is something we did see after the release of GPT-4: one of the AGI agents was instructed to destroy humanity, establish global dominance, and attain immortality. Dubbed ChaosGPT, this is something you likely heard about. Although it might be seen as a meme or a joke, because of course the AIs aren't actually powerful enough to attain immortality, it shows that human nature being what it is, there are certain individuals out there willing to do this. As the paper says, fortunately ChaosGPT was merely a warning, given that it lacked the ability to successfully formulate long-term plans, hack computers, and survive and spread, but it does give a glimpse of what we could see in the future.

Then, of course, we have: AI could pollute the information ecosystem with motivated lies. As you know, the US election is just around the corner, and the technology for AI-generated video content is getting absolutely incredible. We're seeing a rise in AI scams, and many people are falling for them, which means that even now, if someone wanted to run a mass propaganda campaign with AI-generated politicians doing or saying certain things, it could definitely influence certain countries' elections and maybe even destabilize certain economies. That is not far from our future.

Then we have AIs exploiting people's trust. We know that you can talk to someone and get them to do almost anything; what if someone used an AI to convince a person to do something they originally didn't want to do? I'm not going to spell out certain things, but you can understand where that could lead: to many harmful and self-destructive behaviors.

Then we have one of the scariest things, and I believe it's the scariest because, unfortunately, it seems the most likely. This is where they talk about concentration of power: we have discussed several ways in which individuals and groups might use AIs to cause widespread harm, but to mitigate these risks, governments might pursue intense surveillance and seek to keep AI in the hands of a trusted minority. This reaction, however, could become an overcorrection, paving the way for an entrenched totalitarian regime that would be locked in by the power and capacity of AIs. You know how in Black Mirror and those futuristic societies there's an AI that flies around, spots anyone doing anything even 1% wrong, immediately laser-zaps them, reads what's on their mind, and destroys their life? That seems far away, but it really isn't. We could have camera systems covering every single angle and an advanced AGI watching to see if someone is about to commit a crime or do something wrong. And although you might be thinking that just makes society really safe, we're not talking about safety; we're talking about concentration of power. What if the people in power just don't want people to leave their houses? What if it's a complete dictatorship and someone drunk with power uses that AI system to enforce their rule? You have to understand that many awful leaders throughout history were unable to achieve global domination simply because of sheer scale and the limits of technology. But if there is an AGI system powerful enough to tell you exactly how to persuade a population, that can watch every single angle of that population, and you are the only one who holds the most powerful AI tool in the world, you effectively could control the entire world. That is pretty incredible. The paper's figure 5 shows this: ubiquitous monitoring tools, tracking and analyzing every individual in detail, could facilitate the complete erosion of freedom and privacy, and that could be in a future near us. As I was saying, it adds: additionally, AI could make totalitarian regimes much longer-lasting in a major way. Previously they have been toppled at the moment of the dictator's death, but of course AIs are

### [15:00](https://www.youtube.com/watch?v=QfDxWbtEfW8&t=900s) Segment 4 (15:00 - 18:00)

going to be very hard to kill. Another problem is that it's dangerous to allow any set of values to become permanently entrenched in society. For example, some AI systems, when trained on certain biased data, learn racist and sexist views, and once those views are learned it's difficult to fully remove them. If AIs are not designed to continuously learn and update their understanding of societal values, they may perpetuate or reinforce existing defects in decision-making processes long into the future.

Then, of course, we have lethal autonomous weapons and the military arms race, which is really ramping up. In 2020, an advanced AI agent outperformed experienced F-16 pilots in a series of virtual dogfights, including decisively defeating a human pilot 5-0, showcasing aggressive and precise maneuvers the human pilot couldn't outmatch. Now, I know a little bit about flying, and I know that an AI pilot is going to completely outdo a human pilot, because a human is subject to the g-forces in that plane: an aircraft can only turn so hard before the g-force exerted on the pilot makes them pass out. With an AI system in the plane, that's not going to happen; it has no fatigue and no lapses in decision-making in the moment. And as you can see, 5-0 is just the beginning: fully autonomous drones were likely first used on the battlefield in Libya in March 2020, when retreating forces were hunted down and remotely engaged. Now, you might be thinking this could be good against certain forces, but what if it was used on the general population? What if a dictator decides he just wants to use it, for whatever reason?

And this is an additional point I'd never really thought about: sending troops into battle is a grave decision that leaders do not make lightly. You're not just going to send 100,000 men into war, because all of those men have families, they're part of the economy, and although they're soldiers, you are contemplating sending people to their deaths. But autonomous weapons will allow an aggressive nation to launch attacks without endangering the lives of its own soldiers, and thus face less domestic scrutiny. Think about it like this: if you're the leader of a country, you're thinking, I'm not going to send our men into battle, I'm just going to send a bunch of our robots, and people are not going to complain that much, because it's only billions of dollars, which the government prints out of thin air anyway. It says: national leaders would no longer face the prospect of body bags returning home, thus removing a primary barrier to engaging in warfare, which is going to increase the likelihood of conflict. So, reading through this document, although AI is great, if we aren't careful it could be a very bleak future.

In addition, we have cyberwarfare that can destroy critical infrastructure by hacking computer and grid systems: in 2015, a cyberwarfare unit of the Russian military hacked into the Ukrainian power grid, leaving over 200,000 people without power for several hours, and that could be exacerbated by these powerful AI systems.

And the last point I'm going to address here is mass unemployment, because this isn't fun for anyone. Economists have long considered the possibility that machines will replace human labor, and you have to understand that once this happens, it's going to flip society on its head. AI can work 24 hours a day, can be run in parallel, and can process information much more quickly than a human could. Once AGI is here and it's really good, that will be the end of millions and millions of jobs and careers as we know them. What are people going to do when they've spent the last 20 years in a career and an AI wipes out their job in a matter of seconds?

Overall, I would recommend reading this paper if you are going to apply for the challenge, because it goes through many different things, and it's very interesting to see all of the threats and situations that some advanced AI systems could put us in. It seems the laws currently need to catch up very quickly, because this space is developing much faster than the world realizes.
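
The 2022 drug-discovery incident described in Segment 2, where researchers flipped a generative model's objective from penalizing toxicity to rewarding it, comes down to a sign change in a scoring function. Below is a minimal toy sketch of that idea in Python; the molecules, scores, and the `score` function are all invented for illustration and have nothing to do with the actual system.

```python
# Toy illustration (not the real 2022 system): generative drug-design loops
# often rank candidate molecules by a scalar objective. Flipping the sign on
# one term changes what the search optimizes for.

def score(therapeutic_value, toxicity, toxicity_weight):
    """Higher is better. A positive toxicity_weight penalizes toxicity
    (benign drug design); a negative weight rewards it (misuse)."""
    return therapeutic_value - toxicity_weight * toxicity

# Hypothetical candidates: (name, therapeutic_value, toxicity), all made up.
candidates = [
    ("mol_a", 0.9, 0.1),  # effective, low toxicity
    ("mol_b", 0.5, 0.5),  # middling on both axes
    ("mol_c", 0.2, 0.9),  # weak drug, highly toxic
]

def best(mols, toxicity_weight):
    """Return the top-ranked molecule under the given objective."""
    return max(mols, key=lambda m: score(m[1], m[2], toxicity_weight))

print(best(candidates, toxicity_weight=1.0)[0])   # benign objective -> mol_a
print(best(candidates, toxicity_weight=-1.0)[0])  # flipped objective -> mol_c
```

The point the paper makes is that the dangerous variant required no new capability, only this kind of trivial change to the optimization target.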

---
*Source: https://ekstraktznaniy.ru/video/14709*