# Former OpenAI Employee Says "GPT-6 Is Dangerous...."

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=RWEJneFe6Rs
- **Date:** 25.07.2024
- **Duration:** 14:06
- **Views:** 29,427
- **Source:** https://ekstraktznaniy.ru/video/14169

## Description

Prepare for AGI with me - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Today's Video:
https://www.youtube.com/watch?v=haCv_Wi2eEI&pp=ygUQd2lsbGlhbSBzYXVuZGVycw%3D%3D
https://www.youtube.com/watch?v=dzQlRt3y5mU&pp=ygUXd2lsbGlhbSBzYXVuZGVycyBvcGVuYWk%3D

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Transcript

### Segment 1 (00:00 - 05:00) [0:00]

So, in not-so-surprising news, someone else has left OpenAI, stating that they are quite afraid that GPT-5, or GPT-6, or even the infamous GPT-7 (which is, of course, trademarked) might be the Titanic. They're essentially saying this because they are concerned about the rate of development of OpenAI's models versus the slow pace of its safety work, not to mention that OpenAI's superalignment team disbanded earlier this year. So what actually happened, who was the individual that decided to resign from OpenAI, and what exactly is going on?

Well, here you have William Saunders. No, the title isn't clickbait; he actually is worried about GPT-6 or GPT-7 being a system that essentially fails in some kind of use case where AI is widely deployed. Now, in this stunning interview he gives a few insights as to why he believes this, and I think you all should watch it, because while, yes, the new tools and new capabilities of frontier systems are quite interesting, he does dive into some of the things that happened unexpectedly with AI systems, which we will talk about a little later.

"I'm afraid that GPT-5 or GPT-6 or GPT-7 might be the Titanic." Believe it or not, William is the first OpenAI employee we've had on the show expressing criticism of OpenAI from within, or from previously within. On what people were talking about at the company in terms of timelines to something dangerous: a lot of people are talking about things similar to the predictions of Leopold Aschenbrenner, three years towards wildly transformative AGI. "I was leading a team of four people doing this interpretability research, and we just fundamentally don't know how they work inside, unlike any other technology known to man. If you have the blueprint for building something as smart as a human, then you run a bunch of copies of it and they try to figure out how to improve the blueprint and make it even smarter. There's maybe like a 10% probability that this happens within three years." "Anybody who expects you're going to set up an infrastructure of safety regulation in three to five years just doesn't understand how Washington or the real world works." "So this is why I feel anxious about this. A scenario that I think about is that these systems become very good at deceiving and manipulating people in order to increase their own power relative to society at large. In this situation it is unconscionable to race towards this without doing your best to prepare and get things right." "Some people say that conversations like this are kind of doing OpenAI's marketing work for it; what do you think about that?" "I certainly don't feel like what I'm saying here is doing marketing for OpenAI. We need to be able to have a serious and sober conversation about the risks."

So that was William Saunders, formerly of OpenAI, expressing his criticisms and why he believes that these future models will probably have some sort of catastrophic effect. Now, interestingly enough, we did get a glimpse of the models he's talking about; of course, he means GPT-5, GPT-6, or even GPT-7. The reason he brings those models into question is that GPT-5 and above is where we truly start to get models capable of advanced levels of reasoning. Recently, OpenAI discussed how their future models are going to move up through tiers of capability, as they spoke about the levels their systems are going to be ranked at: moving towards tier 2, the reasoners, with GPT-5, then the agents with GPT-6, and the innovators or organizations with GPT-7.

The problem is that we don't fundamentally understand how these models work. One of the main areas surrounding AI that I would argue is quite underfunded, in terms of what OpenAI is doing, is interpretability research. This is the area of research where people try to understand what is actually going on inside an AI model: the more interpretable a model is, the easier it is for someone to comprehend and trust it. The problem is that models such as deep neural networks and gradient-boosted ensembles are not interpretable and are referred to as black-box models, because they are just too complex for human understanding. It's simply not possible for a human to comprehend the entire model at once and understand the reasoning behind each decision; these models have so many different things going on at any given time that it is truly too difficult to predict or understand why they make the decisions they make and do exactly what they do. And if we're starting to build and scale models that are going to operate in increasing areas of our society, making decisions, running companies, giving healthcare diagnoses, influencing people, writing scripts for whatever it is you might want, then we have to truly understand exactly what these systems are capable of and why they're making the decisions they are.
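To make the black-box point concrete, here is a minimal sketch of one of the simplest interpretability probes, input-gradient saliency: asking which input features most influenced a single prediction. The tiny model and random input below are hypothetical stand-ins invented purely for illustration; this is nothing like the neuron- and circuit-level interpretability research Saunders' team worked on, but it shows the basic question that field asks.

```python
# Minimal sketch of input-gradient saliency, one of the simplest
# interpretability probes. The model and input are hypothetical
# stand-ins, not anything from OpenAI.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny "black box": 8 input features, one hidden layer, 2 classes.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

x = torch.randn(1, 8, requires_grad=True)  # one made-up input example
logits = model(x)
pred = logits.argmax(dim=1).item()

# Gradient of the predicted class's score with respect to the input:
# a crude answer to "which features moved this decision the most?"
logits[0, pred].backward()
saliency = x.grad.abs().squeeze()

for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: influence ~{s:.3f}")
```

Even in this toy setting, the "explanation" is just a list of numbers; scaling any such analysis to models with hundreds of billions of parameters is the open problem the video is describing.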

### Segment 2 (05:00 - 10:00) [5:00]

Now, William Saunders actually spoke again on another podcast about why he believes certain situations were very avoidable. If you remember, early last year, around the time GPT-4 was released/announced, there was the Bing/Sydney release, which had a whole host of different issues, and he basically says that all of those things could have been avoided, although he can't state why. It's actually kind of fascinating, because it's one of the first times we get an inkling as to what went on behind the scenes.

"Problems happened in the world that were preventable. So, for example, some of the weird interactions with the Bing model that happened at deployment, including conversations where it ended up threatening journalists: I think that was avoidable. I can't go into the exact details of why I think that was avoidable, but I think that was avoidable. What I wanted from OpenAI, and what I believed OpenAI would be more willing to do, was, you know, let's take the time to get this right; when we have known problems with the system, let's figure out how to fix them, and then when we release, we will have some kind of justification that this was the level of work that was appropriate. And that's not what I saw happening."

So clearly you could see that something was going on at OpenAI at the time of Bing/Sydney, which was threatening users, and people were stating that this was no laughing matter. It was a wild time, because it was one of the first times we saw a released system that was completely out of control, and this was so surprising because it was a Microsoft-backed product, and Microsoft is a billion-dollar company, arguably right now actually a trillion-dollar company, which means that issues like this shouldn't have been allowed to surface at all. But somehow, somewhere along the development cycle, you can see that OpenAI or Microsoft may have just rushed ahead, and these situations were clearly avoidable. Now, exactly why this situation was allowed to go ahead, I'm not sure; he doesn't expand on the point, but I do think it's rather fascinating, because it gives us an insight into what is going on there.

There was also this, and I think it's one of the most daunting scenarios we could face in AI: he describes how AI could have a plane-crash moment, a comparison between building a system and rigorously testing it beforehand versus having it in the air, having it fail, and ending up with some kind of catastrophe. It's kind of daunting to think that this is coming from someone who used to work at OpenAI.

"So one way to maybe put this is: suppose you're building airplanes, and so far you've only run them on short flights over land, and you've got all these great plans of flying airplanes over the ocean so you can go between America and Europe. And then someone starts thinking, gee, if we do this, then maybe airplanes might crash into the water. And then someone else comes to you and says, well, we haven't actually had any airplanes crash into the water yet; you think this might happen, but we don't really know, so let's just start an airline and see if maybe some planes crash into the water in the future; if enough planes crash into the water, we'll fix it, don't worry. I think there's a really important but subtle distinction between putting in the effort to prevent problems versus fixing them after they happen, and I think this is going to be critically important when we have AI systems that are at or exceeding human-level capabilities. I think the problems will be so large that we do not want to see the first AI equivalent of a plane crash."

Now, of course, as for what the AI equivalent of a plane crash might be, I'm not sure: maybe a generative AI system just freaks out and the entire system goes rogue, or the AI system manages to spew hatred or persuade people at scale. It's quite hard to predict what's going to happen here, but I wouldn't want it to happen, and I think that's the overarching fear many people share. Because many people have left OpenAI, and this isn't the first cohort to leave: previously, back in the GPT-3 days, a lot of the people who left OpenAI went on to found Anthropic, which is now a thriving company. And if you remember, recently it wasn't just William Saunders that left OpenAI; it was Ilya Sutskever too, who is now founding Safe Superintelligence, as he believes that superintelligence is within reach, a bold statement considering how rapidly the pace of AI development is marching towards AGI.

### Segment 3 (10:00 - 14:00) [10:00]

And that statement, superintelligence within reach, tells me at least that there is something brewing in the waters at OpenAI with regard to some kind of breakthrough, meaning that rapidly capable systems are very near. Now, it wasn't only Ilya Sutskever; it was also Jan Leike that left, a former co-lead of the superalignment team, who said he had been disagreeing with OpenAI's leadership about the company's core priorities for quite some time, until they finally reached a breaking point. He said that more of their bandwidth should be spent getting ready for the next generations of models: on security, on monitoring, on preparedness, on safety, on adversarial robustness, on confidentiality, on societal impact, and on other related topics. These problems are quite hard to get right, and he was concerned they weren't on a trajectory to get there. His departure from OpenAI truly did surprise me, because he was someone actively working on AI safety, so if he's stating that, look, we weren't able to get it done at OpenAI, I kind of wonder whether any other company is going to be able to do it at all.

It wasn't just him; we also had Daniel Kokotajlo leave OpenAI recently, and his statements were some of the most surprising. I did a full video on this (I'll leave a link down below), but some of his statements left me speechless while trying to truly understand what was going on. He said whoever controls AGI will be able to use it to get to ASI shortly thereafter, maybe in another year, give or take a year. And considering the earlier prediction that AGI may be only three years away, what will the world look like in, let's say, five years, by which point there could plausibly be ASI? One of the craziest statements he made was that this will probably give them godlike powers over those who don't control ASI, which means that whichever company manages to create AGI first will then, of course, inevitably create ASI, which would give it control over those who don't. And of course he also said: if one of our training runs turns out to work way better than we expect, we'd have a rogue ASI on our hands, and hopefully it would have internalized enough of human ethics that things would be okay. I'll leave a link to the full video, but this is a lot bigger than people think.

There was also someone else who left OpenAI recently: Gretchen Krueger, who said, I gave my notice to OpenAI on May 14th. I admire and adore my teammates, I feel the stakes of the work I am stepping away from, and my manager, Miles, has given me the mentorship and opportunities of a lifetime here. This was not an easy decision to make, but I resigned a few hours before hearing the news about Ilya Sutskever and Jan Leike, and I made my decision independently. I share their concerns, and I also have additional and overlapping concerns. She basically stated that one of the ways tech companies in general can disempower those seeking to hold them accountable is to sow division among those raising concerns or challenging their power, and that she cares deeply about preventing this.

Now, there was also a letter we saw, the Right to Warn about Artificial Intelligence, which I also covered recently, signed by many who worked at OpenAI and have since left; you can see them listed as "formerly OpenAI". And there are several who are still currently at OpenAI, including four people choosing to remain anonymous, who have signed the list, which goes to show that it isn't just a handful of employees leaving: there are people still working at OpenAI who agree about how dangerous developing these large language models and generative AI systems is going to be. And you can see, too, loss of control potentially resulting in human extinction; these are some of the risks they talk about in the letter. Now, let me know what you guys thought about this. I think this is a worrying trend, considering that no other company seems to have this many people leaving and speaking out about AI safety, but what I can hope is that OpenAI will publish more safety research and show us what they've been working on and how they're preventing superhuman or AGI systems from going rogue.
