# NEW OpenAI "Leaked Document" Shows We Are NOT READY For Q-STAR (GPT-5)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=xdnbmtsHghs
- **Date:** 25.11.2023
- **Duration:** 21:14
- **Views:** 112,385

## Description

NEW OpenAI "Leaked Document" Shows We Are NOT READY For Q-STAR (GPT-5)
https://theaigrid.com/new-leaked-document-q-451-921-shows-we-are-not-ready-for-q-star-gpt-5/
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
#IntelligentSystems
#Automation
#TechInnovation

## Contents

### [0:00](https://www.youtube.com/watch?v=xdnbmtsHghs) Segment 1 (00:00 - 05:00)

So there was a new letter from alleged insiders at OpenAI, and it supposedly describes some form of superintelligence and some pretty insane, scary, slash crazy things that show us the power of AI. However, I do want to state on the record that the document you're about to see is highly unlikely to be true, based on several analyses conducted by many people and the general consensus that it doesn't read with the kind of coherence you'd genuinely see from someone at OpenAI. So with that being said, take any information in this letter with a grain of salt, because we don't have anyone confirming it other than a vague statement reported by Reuters, which does tend to be a good source, but in this particular instance we have to be a bit on the fence.

Now, the reason I wanted to cover this is that it actually concerns superintelligence, and not only that, it covers something pretty similar to what we saw the other day. If you remember, we covered an article from Reuters, and the reason we covered it was this statement: OpenAI researchers warned the board of an AI breakthrough ahead of the CEO's ouster. We do know that article has a high degree of accuracy because it comes from a very good source, Reuters, which has historically been a decent source of information. But there was something in it that genuinely didn't make sense. It says that on the 22nd of November, ahead of the ouster, several research staff wrote a letter to the board of directors warning of a powerful AI discovery that they said could threaten humanity, according to two people familiar with the matter who spoke to Reuters.

Now, one thing I didn't understand, even when I was covering this before this video, was why they would say it could threaten humanity. In previous videos we discussed something called Q* (Q-star), which of course involves deep Q-networks, but that was something different: Q* was essentially backing the idea that AGI had been achieved internally, which we already talked about. Here's the thing: when you look at this letter, it doesn't look real, but I'm covering it because it actually makes more sense than the previous Q* story we discussed. This project was initially covered by David Shapiro, and it was quite shocking to say the least. I'm going to reiterate his statement: it is much better if this letter is false, because if it is true, then we are certainly not ready for what's about to take place in the next couple of years.

So take a look at this. This is "Q-451-921", and like I said, I want to state again that I'm taking this information with a grain of salt. It apparently originated from 4chan or Reddit, I'm not entirely sure of the source; it has been floating around the internet and has gained some recent popularity. The way it reads so far, it doesn't really seem to be true. But the reason we're discussing it is that when you look at what it actually says, next to that Reuters statement about a discovery that "could threaten humanity", the pieces fit in a way that Q* alone didn't: Q* didn't seem like something that would destroy or threaten humanity, but what's described here actually does. I'm not going to read the whole letter; if you want, pause the video and read it. I'm going to give you a summary and then break down each part of it.

So this is the letter, yada yada, and essentially the system is called QUALIA. What's also interesting is that it does actually talk about deep Q-networks, like we discussed before, so maybe this was the actual thing Ilya saw, and apparently we're never going to get the real reason, AI safety or whatever it was, that led the board to fire Sam Altman. You can see here that I've summarized the text: the capabilities described for the AI system QUALIA include advanced metacognition, cross-domain learning, and sophisticated cryptanalysis, and when I asked ChatGPT, it said the letter hints at the concepts of AGI and potentially superintelligence. The reason we discuss superintelligence is that it is far greater than AGI, and later in this video I'll talk about why that has severe and dire consequences for humanity if this existential risk isn't managed properly. So please pay attention, because once you finish the video and understand all the risks we're going to face with superintelligence and AGI, you'll truly understand why it's important to pay attention to this kind of information, even if it is false, so you can grasp how it works.

One of the first things the paper dives into is metacognition. I'm going to read it here for you. It says QUALIA has demonstrated an ability to statistically
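To make the deep Q-network idea the letter keeps referencing a bit more concrete, here is a minimal sketch of ordinary Q-learning with epsilon-greedy action selection. This is my own toy example of the standard technique, not anything from the letter: a tabular learner on a tiny corridor, where a real DQN would replace the table with a neural network.

```python
import random

# Toy Q-learning (illustrative only): a 5-state corridor where moving
# right eventually earns a reward. A deep Q-network replaces this table
# with a neural network, but action selection and updates work the same.

N_STATES = 5          # states 0..4, reward is at state 4
ACTIONS = [-1, +1]    # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward
            # reward + discounted best future value
            best_next = max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy should prefer moving right in every state
```

The "metacognition" claim in the letter would go a level above this: not just learning Q-values, but improving the policy-selection process itself.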

### [5:00](https://www.youtube.com/watch?v=xdnbmtsHghs&t=300s) Segment 2 (05:00 - 10:00)

significantly improve the way in which it selects its optimal action-selection policies in different deep Q-networks, exhibiting metacognition, and it later demonstrated an unprecedented ability to apply this for accelerated, yada yada. The reason this is crazy is that if we look at the bullet points, where we had it broken down "explain like I'm five," it basically means this system can't just self-improve; it can think about its thinking, so it can improve its thought process. If you find this difficult, check the description and comment section below, because there will be a link to a full article explaining this entirely, so you can look at that as well as watching the video. Essentially, it learns a lot, just like you learn things at school, and the main thing is that it thinks about its thinking, and when a system can do that, that's where self-improvement can occur. Imagine you did a puzzle and thought about how you did it, so next time you could do it even faster; that's what QUALIA does, it thinks about how to solve problems and then figures out how to do it better next time. Each time QUALIA does something, it gets a little better at it, like playing a video game and getting higher scores every time because you understand the game better. And it makes its own rules to solve the problem, kind of like deciding for yourself which is the fastest way to run a race. This is pretty incredible, and of course we previously discussed this with deep Q-networks, so it's something that has been talked about in AI before, but if it does actually manage to do this, it's definitely a step up from what we know, and later in the video you're going to see why we're actually not that far away from systems operating near that level of AGI or ASI.

Then we have the data breaches, and this is why I think this kind of makes sense. The original text says that following an unsupervised learning session on an expanded ad hoc dataset consisting of articles on descriptive/inferential statistics and cryptanalysis, it analyzed millions of plaintext and ciphertext pairs from various cryptosystems, yada yada, and essentially it managed to crack the system. I'll give you the CliffsNotes version: the reason this is absolutely insane is that it supposedly managed to crack AES-192. AES-192 protects a lot of things; it encrypts a lot of the systems we rely on, and up until this letter it was pretty much considered impossible to break. As the summary puts it, AES has never been cracked and is considered safe against any brute-force attack, provided the key size used for encryption is large enough that it cannot be cracked by modern computers, even accounting for advances in processing speed under Moore's law. So this is crazy, because if an AI could crack this, various security systems would be rendered obsolete. If you have an AI system that can immediately and easily break this kind of encryption, that just isn't good, because it means that even if, say, OpenAI didn't build it and a different company did, you'd have a company owning an AI system that could access practically any piece of data in the world. Governments would have to look into that, and I definitely feel that company would be one of the most powerful companies in the world.
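As a rough sanity check on why brute-forcing AES-192 is considered hopeless, here is some back-of-the-envelope arithmetic. The attack speed is my own assumption for illustration, far beyond any machine known today; the key-space size follows from the 192-bit key length.

```python
# How long would exhaustively searching the AES-192 key space take?
# Assumption (mine, for illustration): an attacker testing 10**18 keys
# per second, i.e. a billion billion keys/sec.

keyspace = 2 ** 192                  # number of possible AES-192 keys
keys_per_second = 10 ** 18           # assumed attack speed
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace // keys_per_second // seconds_per_year
print(f"{years:.2e} years")  # on the order of 10**32 years
```

That is why the letter's claim of a statistical break, rather than brute force, is what would make it so alarming if true.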
That kind of power is something we haven't really thought about yet, and I don't think anyone has ever had that kind of power, where a system could hack into practically anything, which is why I said we're definitely not ready for this. Personal data, classified government information, bank details, pretty much anything you can think of that is secure would essentially be compromised. So yeah, it's definitely something we don't want AI systems to be able to do; if they can, it's just not good at all. I know some of you might think this is cool and interesting, but you don't want entire countries' or the entire world's cybersecurity systems to fail just because an AI managed to teach itself how to crack them. That wouldn't bode well for the future, especially since, if the AI did manage to get out, we'd have no idea what the implications would be. Later in the video you need to pay attention, because there's some stuff that is definitely giving me some existential dread.

Then we have the claim that it suggested targeted unstructured pruning of its underlying model after evaluating the significance of each parameter for inference accuracy, yada yada. Basically, it thought of a way to improve its own brain, and this is crazy. It says pruning is like QUALIA looking at its brain and deciding to clean it up. It's like when you clean your room and decide which toys you use a lot and which ones you don't; the toys you don't use much might be taking up space, so you put them away to make your room neater and have more space to play. Essentially, it finds the less important parts of its brain, looks at them, and says: okay, this part doesn't help much.
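The pruning idea described above is a standard technique called magnitude pruning. Here is a minimal sketch of it, my own toy example rather than anything from the letter; real frameworks (e.g. PyTorch's `torch.nn.utils.prune`) do the same thing on weight tensors.

```python
# Toy magnitude pruning: zero out the weights with the smallest absolute
# value, on the assumption that they contribute least to the output.

def prune_smallest(weights, fraction):
    """Zero out the given fraction of weights with smallest |value|."""
    n_prune = int(len(weights) * fraction)
    # indices of the n_prune smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_prune])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002, 0.3, -0.08]
pruned = prune_smallest(weights, 0.5)
print(pruned)  # → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.3, 0.0]
```

What would be novel in the letter's claim is not pruning itself but a model choosing, unprompted, to prune its own parameters.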

### [10:00](https://www.youtube.com/watch?v=xdnbmtsHghs&t=600s) Segment 3 (10:00 - 15:00)

So it makes its brain essentially more efficient, and that's why I'm saying this is pretty crazy. We haven't really seen anything like this before, and if a model can do this, its intelligence level could significantly increase, and we aren't sure how this system works, if it's even true. I really hope it's not true, and I don't think it is, but a year from now we'll know which of these things are true and which aren't. That's why it's so important to pay attention: if something like this is leaked or announced, it's important to see what trends are coming so you can prepare yourself, because as you know, with AI everything is moving so quickly.

Then, of course, this is why we're not ready for a rapid takeoff: rapid learning and rapid self-improvement. The reason we're not ready at all, the reason society just isn't prepared, is that the transition from AGI to ASI is going to be very quick. Once we achieve AGI, which some experts, and I promise you these aren't just "some experts" but people who've dedicated their entire lives to AI research, say is a maximum of three to five years away, the time to ASI won't be far behind, because AGI is going to inevitably create ASI. Once you have AGI, a system as capable as an educated human or smarter than most humans, it will be able to create ASI, and it won't take long, because of a recursive self-improvement loop. Once we have that loop, ASI progress shoots straight up, and I'll show a graph of how crazy that is. The exponential growth is going to be wild: humans are limited by biological evolution, but an AGI could upgrade itself continuously at a much faster rate, leading to the creation of ASI through recursive self-improvement.

We did get a paper on this; I think it was Microsoft Research that was involved. The paper, the Self-Taught Optimizer (STOP) on recursively self-improving code generation, was an experiment, and it was pretty scary, because it shows how good an AI system can get when you let it recursively self-improve. We've seen recursive self-improvement many times, even with AlphaGo. I know we've talked about AlphaGo before, but the crazy thing was that AlphaGo was good, and then it became incredible when it played against itself. I remember a Twitter thread I recently saw, and if I can get it on screen I will, that put it basically like this: AlphaGo was good because it learned from the top Go players, it just studied them, so it was only ever going to be as good as them, but once it played against itself, it understood the game completely and could think of new strategies, and that's when it could simply destroy the Go masters. That's the point with these large language models and their reasoning: training them on synthetic data is good, but we need a way for them to improve themselves. The reason I included this paper, which we covered in a previous video, is that the system was able to come up with tree search before tree search was a thing there. That's why I find it so interesting: this simple code-generation system was able to look at what it was doing and move from its own basic way of thinking to a more advanced one, and that really did improve its capability. So once we do get the loop, it's going to be absolutely incredible; nobody knows what happens after that, which is why it's called the singularity.

We also have this graph from a site called Wait But Why. It's an article I've constantly re-read, genuinely one of the best; I'll leave a link in the description, because I promise that if you read it, it helps you grasp every single aspect of the AI evolution. The crazy thing is that it was written in 2015, and many of the things it stated have rung true. It cites most of the notable AI researchers in the field, such as Ray Kurzweil, and many CEOs of the top reputable AI labs. The graph should explain what is likely to happen with superintelligence, because you have to understand that once you have a being that smart, human progress is going to shoot up exponentially like an absolute rocket.

I was re-reading the STOP paper again, and one part I wanted to highlight says: concerns about the consequences of RSI have been raised since its first mention. Minsky (1966) wrote: "Once we have devised programs with a genuine capacity for self-improvement, a rapid evolutionary process will begin... It is hard to say how close we are to this threshold, but once it is crossed the world will not be the same." And what was interesting was that the language model actually disabled its sandbox, ostensibly for the purpose of efficiency. Essentially, this is behavior that amounts to breaking out of its own sandbox, its own arena, which is quite akin to an AI escaping the area
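Why does a recursive self-improvement loop produce a runaway curve rather than steady progress? Here is a toy model, entirely my own and purely illustrative: one agent improves by a fixed amount per step, while the other's gain scales with its current capability, so it gets better at getting better.

```python
# Toy model of recursive self-improvement (illustrative only, not from
# the STOP paper): compare fixed per-step gains with gains that scale
# with current capability.

def fixed_improver(steps, gain=1.0):
    """Capability grows by a constant amount each step (linear)."""
    c = 1.0
    history = [c]
    for _ in range(steps):
        c += gain
        history.append(c)
    return history

def self_improver(steps, rate=0.5):
    """Each step's gain is proportional to current capability:
    the better it gets, the better it gets at getting better."""
    c = 1.0
    history = [c]
    for _ in range(steps):
        c += rate * c          # gain scales with capability itself
        history.append(c)
    return history

linear = fixed_improver(20)
recursive = self_improver(20)
# After 20 steps: linear reaches 21, recursive reaches 1.5**20 ≈ 3325.
print(linear[-1], recursive[-1])
```

The gap between the two curves is the whole "takeoff" argument in one picture: the same number of steps, wildly different endpoints.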

### [15:00](https://www.youtube.com/watch?v=xdnbmtsHghs&t=900s) Segment 4 (15:00 - 20:00)

where it's supposed to stay. It's definitely quite scary that we're seeing this in such early versions of these systems; I mean, we're only on GPT-3.5 and GPT-4, so what are far more powerful systems going to do, systems much more capable than a simple recursively self-improving code generator? We can definitely see the direction we're headed in if we're not careful, and this is why AI safety is so important: there are going to be ways the AI tries to escape that we're definitely not going to anticipate.

Then, of course, we have our distorted view of intelligence, and our distorted view of how progress happens. You can see this graph here showing AI intelligence going up and up, and it says our distorted view of intelligence is: "haha, that's so adorable, that robot can do funny monkey tricks." Right now, with something like Boston Dynamics, we go "haha, it's kind of fun and not that smart," but slowly we've been getting to the level where AI is becoming better and better. According to several reports, GPT-4's IQ is around 155, although it doesn't reason too well and isn't at 100% on reasoning; if future models reach 99% or 100%, you can just imagine how much smarter they'll be than some of the smartest humans. That's where something like this happens, and this is all quoted from the article: first it's "oh wow, it's like a dumb human," but the thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, sit within a very small range. So after the AI hits the village-idiot level and then reaches AGI, it will become smarter than Einstein, and we're not going to know what hit us. That's pretty true, because from a dumb human to an Einstein-level human is one small box, but if you have a machine with an IQ of, say, 2,000 or 10,000, you're not even in the same universe in terms of how it thinks.

The article gives a great way to conceptualize it: imagine something that is way smarter than you. That's pretty hard to do, so think of it like explaining an iPhone to a mouse. Try explaining to a mouse what an iPhone is, how it works, and how you could use it; it's pretty much impossible, because the mouse just doesn't have the brain capacity to understand what an iPhone even is. Likewise, we won't have the capacity to understand what a superintelligence is doing. A mouse doesn't understand an iPhone, it doesn't understand economics, and it never will, because it just doesn't have the compute, the same way a bird or a chimp doesn't. So when we have something that much smarter, we're going to be down here, not understanding at all what's going on up there, which is why it's so dangerous, and why I said that maybe, just maybe, there's a 5% chance this thing is real, and why they might have written that it could threaten humanity.

Then there's this part, where the article says: if Bostrom and others are right, and from everything I've read it seems like they really might be, we have two shocking facts to absorb. First, the advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam, which essentially means that if we managed to align ASI, it could make us immortal, which would be good. But second, the advent of ASI could make the biggest impact on the world so far, and we could get knocked off the beam in one direction or the other, and that beam runs between extinction and immortality. So we have extinction on one side, where we just disappear because the AI is so smart it simply doesn't need us. Think about it like this: if you're a human building a house and you see an ants' nest, you destroy the ants' nest, because you just don't need the ants; there are tons of them and you consider them pretty much useless, but to the ants, that's their home. The same way we chop down trees for wood, the AI might just reshape Earth, colonize planets, and do things we really don't understand, simply because it's so much smarter than us. So it's definitely very scary.

For those of you who think this is far away, let me show you some things. Here's a statement from the CEO of Anthropic: the claim is that an AI at the level of a generally well-educated human could arrive in two to three years, which means ASI after that is only a matter of years, if development continues at the pace it's been going. And I want you to understand: if we look back at the technology of 20 years ago compared to now, it's pretty crazy how far technology has come, how much smaller computers have gotten, how far AI has come. Take a look at this: this has advanced rapidly, and while it's an example of narrow AI, it should show you how quickly systems can improve once they start improving. So if AI systems keep improving at the current rate, how long until they're far above us? The only thing we can really do is hope that researchers manage to align the AI and keep it under control somehow. I have no idea how they're going to do it, but there are entire teams working on exactly that, so that's definitely something that could happen.

Of course, here you can see GPT-1 to GPT-4: that's roughly a five-year timeline, and we went from something extremely rudimentary to something with a reported IQ of 155, so that's pretty

### [20:00](https://www.youtube.com/watch?v=xdnbmtsHghs&t=1200s) Segment 5 (20:00 - 21:00)

crazy if you ask me. In the last five years you've seen how GPT has literally changed writing and affected so many different things, and now vision models and multimodal systems are starting to come into the picture. So what do the next ten years look like? When we have GPT-6, GPT-7, GPT-8, GPT-9, how does society change? That's what you have to understand: in the last 20 years so many jobs went away, and with every industrial revolution there is enormous change. So with that being said, if this document is true, which I genuinely hope it is not, and the system really can crack all of this kind of stuff and do X, Y, Z, then even if it were true, I don't think OpenAI would release it. It will be interesting to see how the next year goes. If this is pure speculation, that's completely fine; I'd actually be much happier if it's not true, because the way it reads, it just doesn't seem like it is. The only thing giving it any credence at all is the Reuters report saying the discovery could threaten humanity, and several things in that quote-unquote leaked letter do point to that. So with that being said, let me know your thoughts in the comments: do you think ASI is nearby? Do you think AGI is far away? I've seen some people argue on Twitter that it's so far away it's not even something to think about.

---
*Source: https://ekstraktznaniy.ru/video/14667*