# BREAKING: OpenAI Reveals COMPLETE TRUTH About AGI With LEAKED EMAILS (Elon Musk Lawsuit)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=J_QPOwMBA0I
- **Date:** 06.03.2024
- **Duration:** 19:22
- **Views:** 34,569

## Description

✉️ Join My Weekly Newsletter - https://mailchi.mp/6cff54ad7e2e/theaigrid
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Check out my website - https://theaigrid.com/

Links From Today's Video:


Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries) contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=J_QPOwMBA0I) Segment 1 (00:00 - 05:00)

There was actually a recent blog post by OpenAI, just around 3 to 4 hours ago, where they talk about the lawsuit that Elon Musk filed and why it's a pretty baseless claim. In this video I'm going to show you all of the secret emails they've released, which showcase why this is pretty incredible, plus some key things that many people did miss, so make sure you watch to the end, because there's a lot to uncover, and it does seem like we are moving into a very crazy time.

One of the things they start by stating is, of course, that they want the claims dismissed. They say the mission of OpenAI is to ensure AGI benefits all of humanity, which means both building safe and beneficial AGI and helping create broadly distributed benefits: "We are now sharing what we've learned about achieving our mission, and some facts about our relationship with Elon. We intend to move to dismiss all of Elon's claims." This comes as no surprise, because if you file a lawsuit against a company like OpenAI, you can expect them to try to have all of those claims dismissed.

Now, the next slide was one that many people glossed over, but it's one I think you need to pay attention to. They state: "We realized building AGI will require far more resources than we'd initially imagined." It says here that Elon said they should announce an initial $1 billion funding commitment to OpenAI, and that in total the non-profit has raised less than $45 million from Elon and more than $90 million from other donors. The point is that what they wanted to build, which is of course AGI, which many expect in the next 5 to 10 years, isn't as easy as it sounds, because it requires vast amounts of compute. Compute is the GPUs, the data infrastructure, pretty much all of the systems that help the AI run effectively.

Essentially, in late 2015 Greg Brockman and Sam Altman had initially planned to raise $100 million, but Elon Musk said, "We need to go with a much bigger number than $100 million to avoid sounding hopeless. I think we should say that we're starting with a $1 billion funding commitment, and I will cover whatever anyone else doesn't provide." So at that time, in late 2015, they realized they were definitely going to need a lot of money to start this company. But you can see that by 2017, "we came to the realization that building AGI will require vast amounts of compute," and this is something they've constantly reiterated. So not only does OpenAI clearly state here that AGI will require vast amounts of compute, we also know from their previous research that this is something they want, because now, with Sam Altman reportedly raising $7 trillion, some could argue that number might even relate to superintelligence, a system above artificial general intelligence. And if you haven't seen the video I made yesterday, it's definitely one you should take a look at, because it's a 40-minute video on how OpenAI plans to build a human-brain-like system that could exceed all capabilities in all arenas, which would definitely shock the world if it comes to fruition.

So it says: "We began calculating how much compute an AGI system might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission: billions of dollars per year, which was far more than any of us, especially Elon, thought we'd be able to raise as the non-profit." Essentially, what they're stating is: we were going to build this system, we knew we needed a lot more money, so they decided to transition from a non-profit to a for-profit. You can see right here they state that they "recognized a for-profit entity would be necessary to acquire those resources." You know how previously everyone was giving OpenAI flak for turning into a for-profit entity, with many people saying it was about greed? They're basically stating: if we're actually going to get to AGI, a for-profit entity is necessary. It's not something OpenAI wanted to do, but it was necessary to acquire those resources. So they said: "We discussed a for-profit structure in order to further the mission, but Elon wanted us to merge with Tesla or he wanted full control." And then of course Elon Musk left OpenAI, saying that a relevant competitor to Google DeepMind was needed and that he was going to build it himself, and that he'd be supportive of them finding their own path.

So essentially, Elon Musk decided to leave OpenAI because he realized he was competing against the behemoth that was Google at the time. You have to remember that Google was leading the way in AI research, and DeepMind was combined with that huge company (though I'm not sure exactly when Google acquired them). The point is that these labs were leading the charge on what AI was going to do. When Elon Musk left OpenAI at that stage, he basically felt he had to, because they needed more money and couldn't figure out how to get it or come to any kind of agreement. One of the key takeaways is that Elon said: "Even raising several hundred million won't be enough. This needs billions per year immediately or forget it." So Elon was saying, look, we need billions of dollars per year or just completely forget it, and OpenAI were

### [5:00](https://www.youtube.com/watch?v=J_QPOwMBA0I&t=300s) Segment 2 (05:00 - 10:00)

saying, look, we can't give you complete control of the company. So I'm guessing they simply decided to part ways, with Elon saying that if the company wouldn't agree to his terms, its chances of success were pretty much zero.

Now, here was something that most people didn't actually look at: OpenAI stated on this page that Elon understood the mission did not imply open-sourcing AGI. AGI is mentioned several times in this article, and we can all understand that this is going to be a big question; I will link to an article later in the video about why this is such a contentious discussion, because the future is about to get very strange, whether for good or bad. But you can see right here it says Elon understood the mission did not imply open-sourcing AGI, and, "as Ilya Sutskever told Elon: as we get closer to building AI, it will make sense to start being less open." I've seen so many people argue about this entire quote, because some are literally saying this was OpenAI's biggest mistake and that OpenAI is now an untrustworthy company. I think you can argue many things, but with big companies, given the complexity they have, that's a hard case to make cleanly.

Essentially, what Sutskever is stating is that as we get closer to building AI, it will make sense to start being less open: "The Open in OpenAI means that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science," to which Elon replied, "Yup." If you want to take a look at that email, I'm going to show it to you right now, because its implications are genuinely profound. This is the email from Ilya Sutskever to Elon Musk. He starts by congratulating him on Falcon 9 (SpaceX's rocket), and then essentially says: you've been doing a lot of interviews recently extolling the virtues of open-sourcing AI, but I presume you realize that this is not some sort of panacea that will somehow magically solve the safety problem.

Now, if you don't know what the safety problem is: AI is a black box. It's akin to growing some kind of alien in a black box without knowing exactly how it functions, and then just hoping it will follow your commands, especially on a timescale where this thing is likely to be smarter than pretty much every human that's ever existed. That is a giant problem we simply haven't solved, because even the early AI systems we currently have don't follow our commands the way we want, and with emergent capabilities there are sometimes behaviors we literally cannot predict. When they happen, they catch us by surprise, and they can have unintended consequences.

So Sutskever states: "There are many good arguments as to why the approach you are taking is actually very dangerous and may increase the risk to the world. Some of the more obvious points are well articulated in this blog post, which I'm sure you've seen, but there are also other important considerations." We're going to go through that blog post now, because it's one you need to see if you are in the AI space. Whilst it's good to be excited about AI technology, not understanding the true risks of superintelligence and AGI is a gap everyone should close, because understanding them allows you to grapple with the reality of what we're actually dealing with.

Essentially, the blog post discusses a new organization dedicated to developing artificial intelligence in a way that benefits humanity, which is of course OpenAI. It starts by discussing the Manhattan Project: if you don't know what that was, the United States wanted to build nuclear weapons because they knew other countries would be working on them, so they decided to work on them themselves, and they got there first. The question the post then asks is: should AI be open? One of the points discussed is that making AI research open could allow irresponsible actors to develop powerful AI without proper safety measures, and that if AI progresses rapidly from subhuman to superhuman intelligence (a "hard takeoff"), it may be difficult to control and could pose an existential threat to humanity. So essentially the blog post argues that the hard takeoff problem could be accelerated by a company like OpenAI, and the problem is that if there is a hard takeoff, things get dangerous very quickly and the future becomes rapidly more uncertain than it is now. To picture a hard takeoff: say in January we have AGI, and then by February we have an AI system that proves the Riemann hypothesis, and then things get even crazier from there. This article talks a lot about that, and I know most of you are thinking: why are you talking about "should AI be open?" Guys, the entire crux of the OpenAI debate right now is how they've changed over the years, and this is what many people are debating, whether or not OpenAI should have done it. I've got to be honest with you, it's a very hard call, because on one side you have the argument that, yes, Open

### [10:00](https://www.youtube.com/watch?v=J_QPOwMBA0I&t=600s) Segment 3 (10:00 - 15:00)

AI's initial mission statement was to open up AI completely, so that everybody could use it and so that the larger companies couldn't hold a monopoly. But of course, over time they've essentially become that monopoly in AI, and they've changed their entire mission statement. The problem is this: given the dangers of artificial intelligence, do the potential benefits of open-source AI, such as preventing a single entity from monopolizing the technology, justify the risks? Many people would argue they don't, because if everyone has access to an AGI-level system, we'll rapidly get more bad actors than good ones, and existential risk becomes unavoidable.

Now, I'd say I'm 50/50 on this. On one side I want to side with Elon, because this company was supposed to be open; that was their mission statement, to give it out to everyone. But as we've come to understand the field more, we understand that open-sourcing this technology might not be good, because if we hand this stuff to bad actors, it's definitely not a good scenario. For example, take the recent technology we got from OpenAI, Sora. Everybody knows how good Sora is, but suppose they open-sourced it and anybody could build a Sora-type system. That would be incredible for the world, which would be good, but there would be so many bad actors out there making fake videos to scam people and running ridiculous misinformation campaigns. I think we do owe it to some of these larger companies to keep some of this technology out of the wrong hands.

Now, here's what most people are arguing about. This is something Gary Marcus tweeted, and I've got to be honest: Gary Marcus's opinions on AI are usually quite negative, so take the tweet with a grain of salt, because he tweets very negative things about AI all the time, and I'm genuinely not sure why. But he does have a point here. I'm not going to say they are a mendacious company to the very core, but you can see the dates here, and this has been floating around on Twitter quite a lot. The 2015 announcement says: "We're hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We'll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies." Then of course they changed, and this is the email being referenced: "As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science... even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes."

So you can see that this is where some people felt there was deception by OpenAI. They're basically saying that since OpenAI itself released emails stating that sharing everything was the right strategy in the short term just for recruitment purposes, OpenAI is going to get a lot of flak for this and come under public scrutiny for breaking the original understanding of what they were meant to do: externally telling researchers "you will be strongly encouraged to publish your work," while internally that openness was just for recruitment purposes. It's a very hard thing to discuss, because it doesn't have a simple resolution.

This article is definitely worth a read. I'll summarize some of it here, but it's really striking, because it talks about how intelligence moves along a scale, and once you understand that the scale of intelligence dictates how things move on Earth, you start to see why the arguments against open-sourcing the technology are actually quite good, and why the arguments for it are also quite good. It states: "If you were to invent a sort of objective zoological IQ based on the amount of evolutionary work required to reach a certain level of brain-structure complexity, you might put nematodes at 1, cows at 90, chimps at 99, Homo erectus at 99.1, and modern humans at 100. The difference between 99.1 and 100 is the difference between 'frequently eaten by lions' and 'has to pass anti-poaching laws to prevent all lions from being wiped out.'" The point is that each step up this scale makes such a vast difference that you have no idea what the next form of intelligence will look like. As the post puts it, almost all of the practically interesting differences in intelligence occur within a tiny window that you could blink and miss.

The post also addresses the hard takeoff I discussed earlier. It's happened before: it took evolution 21 million years to go from cows with sharp horns to hominids with sharp spears, but only a few tens of thousands of years to go from hominids with sharp spears to moderns with nuclear weapons. In other words, the timeline is accelerating, and over the longer timescale things keep getting faster. Remember, it took all of human history, from Mesopotamia to 19th-century Britain, to invent a vehicle that could go as fast as a human, but after that vehicle was made, it only took another four years to build one that

### [15:00](https://www.youtube.com/watch?v=J_QPOwMBA0I&t=900s) Segment 4 (15:00 - 19:00)

could go twice as fast as a human. If there's a hard takeoff, OpenAI's strategy stops being useful: there's no point in ensuring that everyone has their own AIs, because there's not much time between the first useful one and the point where things get too confusing to model and nobody "has" the AIs at all. And of course the problem with these systems is, remember the classic programmers' complaint: computers always do what you tell them to do instead of what you meant for them to do. Computer programs rarely do what you want them to do the first time you test them. This is why the post frames the decision to make AI research open source as a trade-off between benefits and risks. The risk: in a world with hard takeoffs and difficult control problems, you get superhuman AIs that hurl everybody off cliffs. The benefit: in a world with slow takeoffs and no control problems, nobody can use sole possession of the only existing AI to garner too much power.

And this is a very interesting quote: "Elon Musk famously said that AIs are 'potentially more dangerous than nukes.' He's right, so AI probably shouldn't be more open-source than nukes should be." I think that is an important point: if we're dealing with the most advanced and dangerous technology that will ever exist, why would it be more open-source than previous technologies that were just as dangerous? Elon Musk actually responded on Twitter recently: when OpenAI tweeted about being dedicated to the mission and so on, he replied, "Change your name."

Now, Yann LeCun actually discussed this email. He said that "if you have a certain combination of naivety and self-delusion, you might think that superhuman AIs are just around the corner. It wasn't true in 2016 and it's still not true today. And if you have a bit of a superiority complex, you might think that you'll be the one producing superhuman AI and everyone else is too stupid to handle it safely. You'd also be wrong." I somewhat disagree that superhuman AI isn't just around the corner; I do think AGI isn't far away, as long as the compute is there, and the compute does exist, and these companies get the funding they need from whoever is investing in them. I think AI is going to surprise everyone. The reason I disagree is that if AI has already surprised most of the people in the field, I think even Yann LeCun will probably be surprised when certain systems and breakthroughs arrive, because with technological advances there is Ray Kurzweil's law of accelerating returns. And with that said, Kurzweil's notably accurate track record of predictions puts AGI at 2029, and it could even arrive earlier. The problem is that it leads to a singularity, which means we don't know what happens after that point, as we've just discussed with the hard takeoff problem and all the other problems that come with it.

Whichever side you're on, another company is already developing open-source AGI. Meta's chief executive, Mark Zuckerberg, has said the company will attempt to build an artificial general intelligence system and make it open source, meaning accessible to developers and everyone outside the company; the system "should be made as widely available as we responsibly can." That needs to be taken into account here. Everyone is debating whether OpenAI is truly open, yada yada, but Meta is allegedly building Llama 3, said to approach GPT-4's capabilities, and we know they intend to open-source their systems. Are Meta doing more damage than OpenAI, and are they on the right side of the future?

I think this is one of the most interesting times to be part of the AI community, because this is the point where things are starting to rapidly accelerate. With open-source technologies becoming more and more capable, we're going to see, maybe a year from now, two years from now, three years from now, whenever an AGI system is developed and open-sourced, what actually happens, because I can guarantee there will be at least one incident where an AI system does something that prompts a wider discussion. For now, the arguments between OpenAI and Elon Musk seem fair on both sides. Of course OpenAI should potentially stick to the mission of open-sourcing the majority of what can continue to benefit humanity, but AGI is a very delicate and very dangerous system, and if its potential dangers exceed those of nukes, I'm not sure open-sourcing it at certain stages would be best for everyone on Earth. But let me know what you think, because we do live in a world where bad actors will do bad things; that's just part of society, and such people will always exist. I have no idea where this is going, because both sides of the argument are important, but I do think the future will be determined by a few key events.

---
*Source: https://ekstraktznaniy.ru/video/14481*