# Sam Altman's SHOCKING New STATEMENT: "NO AGI IN 2024" (OpenAI)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=V9_uPASaptQ
- **Date:** 28.12.2023
- **Duration:** 22:40
- **Views:** 54,990

## Description

Sam Altman's SHOCKING New STATEMENT: "NO AGI IN 2024" (OpenAI)

💬 Access GPT-4, Claude-2 and more - chat.forefront.ai/?ref=theaigrid
🎤 Use the best AI Voice Creator - elevenlabs.io/?from=partnerscott3908
✉️ Join Our Weekly Newsletter - https://mailchi.mp/6cff54ad7e2e/theaigrid
🐤 Follow us on Twitter https://twitter.com/TheAiGrid
🌐 Checkout Our website - https://theaigrid.com/


Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=V9_uPASaptQ) Intro

Sam Altman recently made a statement on Twitter that has some people confused and has sparked a real conversation about AGI. Essentially, Sam Altman stated clearly that AGI is not coming in 2024, and this is quite the statement considering the company's past moves and some of the previous developments that have happened. So let's take a look at the statement, dissect it, look at some of the things Sam Altman himself has actually said, and consider why this is one of the most interesting statements we've seen so far.

### [0:34](https://www.youtube.com/watch?v=V9_uPASaptQ&t=34s) Statement

You can see here that Sam Altman was actually following up on a Twitter thread, where he said: "Wow, way more requests in the first two minutes for AGI than expected. I'm sorry to disappoint, but I do not think we can deliver that in 2024." This was in response to a thread where he spoke about some of the things he's going to be focusing on at OpenAI in 2024, some of which we discussed in a previous video we released yesterday: things like better GPTs, voice mode, and of course future models like GPT-5. You can see that many people were in fact requesting AGI, but he says here that it is not going to be delivered in 2024.

Now, although I don't think this is that big of a deal, because some of the other stuff on the list is definitely crazy, one of the things we do need to look into is what exactly AGI is. The video I'm about to show you is one in which Sam Altman actually talks about what he thinks the definition of AGI is. The reason I'm showing you this video is that if you've been around the AI space long enough, you'll know that not even some of the most esteemed AI researchers and figures can agree exactly on what AGI is; that's something we'll get to later in the video. For now, take a look at what Sam Altman says, because it's a very interesting position he holds, and how he defines AGI will determine how it is built:

"How would you define AGI, and how do you think you'll know?" "I should have defined that earlier, it's a great point. I think there are a lot of valid definitions of this, but for me, AGI is basically the equivalent of a median human that you could hire as a coworker. They could do anything that you'd be happy with a remote coworker doing, just behind a computer."

Although that is a very simple definition, I think it is one we need to take into account. He described AGI as essentially a system that can do anything a baseline human can do: not much smarter than the average human, but good enough that it could, I guess, pass most benchmarks. That is rather fascinating, because one thing we've seen time and time again whenever we talk about AI

### [2:50](https://www.youtube.com/watch?v=V9_uPASaptQ&t=170s) Contrasting Opinions

is very interesting: every time we move forward onto a new frontier of AI, a new benchmark is constantly proposed, saying "this isn't that good" and "this isn't AGI". I'll show you exactly where Sam Altman talks further about this.

Something else I wanted to address is whether there are contrasting opinions, because the other day one OpenAI employee actually tweeted "brace yourselves, AGI is coming". This was in response to Jan Leike, the head of superalignment at OpenAI, talking about their new preparedness framework for superintelligence, essentially how they're going to prepare for any dangerous AI system. We did a deep dive on this; it's very intensive and super interesting. But this tweet, "brace yourselves, AGI is coming", from an OpenAI employee, set against Sam Altman saying "sorry, we can't deliver AGI in 2024", is quite an interesting contrast.

At the same time, just because we're not getting it in 2024 doesn't mean we're not going to get it in 2025, and that is why I always say you need to pay attention to exactly what is said. Remember, we actually got GPT-4 in 2023, but they had GPT-4 in 2022; that's when it finished training, and they then spent a couple of months doing reinforcement learning. That means that in 2024, GPT-4 is going to be around two years old: not two years from release, but two years since they've had that system. So could we be getting AGI in 2025? Before you think "whoa, that's too much hype, AGI is 10 years away", take a look at some of the statements made by other people, so you can see what the general consensus is. Of course, Sam Altman did also say thanks a lot for some of these requests, and then, on AGI: "a little patience, please". You'll want to take a look at one of

### [4:52](https://www.youtube.com/watch?v=V9_uPASaptQ&t=292s) When AGI Will Arrive

the CEOs of Anthropic, which is another AI startup, the one behind the models Claude 2.1 and Claude 2. This person, Dario Amodei, actually left OpenAI to start the company because he wanted to focus on AI safety, and he talks about exactly when AGI will arrive. Take a look at his best prediction:

"I don't know, I think we might end up in a weirder world than we expect." "When you add all this together, your estimate of when we get something kind of human-level, what does that look like?" "Again, it depends on the thresholds. In terms of someone looking at the model, where even if you talk to it for an hour or so, it's basically like a generally well-educated human, that could be not very far away at all. I think that could happen in two or three years. The main thing that would stop it would be if we hit certain safety thresholds, and we have internal tests for safety thresholds and things like that. So if a company or the industry decides to slow down, or we're able to get the government to institute restrictions that moderate the rate of progress for safety reasons, that would be the main reason it wouldn't happen. But if you just look at the logistical and economic ability to scale, I don't think we're very far at all from that. Now, that may not be the threshold where the models are existentially dangerous; in fact, I suspect it's not quite there yet. It may not be the threshold where the models can take over most AI research, and it may not be the threshold where the models seriously change how the economy works. I think it gets a little murky after that, and all those thresholds may happen at various times afterwards. But in terms of the base technical capability of something that sounds like a reasonably well-educated human across the board, I don't think we're far."

I think that first clip was really interesting, because he describes AGI as around two to three years away, and that was around six to eight months ago. Remember, AGI here is just a base system that can do any task on a computer as well as any standard well-educated human, so this is fascinating because it goes to show that AGI might be closer than some of you think. And although AGI might not be coming in 2025, they didn't rule out 2026 or 2027, which means those should be some very surprising years. What's also interesting is that AI is definitely moving at a fast pace, and we know that OpenAI works on tons of different things; if Q* has even a hint of legitimacy to it, we could definitely be seeing further AI developments that maybe aren't AGI but are definitely profound.

Then there was also something very fascinating discussed on the Dwarkesh Patel podcast that I figured you all needed to take a look at:

"Is security good enough for AGI, or is it simply that nobody has tried hard enough?" "It would be hard for me to speak to current tech company practices, and of course there may be many attacks that we don't know about, where things are stolen and then silently used. I think an indication is that when someone really cares about attacking someone, the attacks often happen. Recently we saw that some fairly high officials of the US government had their email accounts hacked; Microsoft was providing the email accounts. Presumably that related to information that was of great interest to foreign adversaries. So it seems to me at least that the evidence is more consistent with this: when something is of high enough value, someone asks and it's stolen. My worry is that with AGI we'll get to a world where the value is seen as incredibly high, where it'll be like stealing nuclear missiles or something. You can't be too careful on this stuff, and at every place that I've worked I've pushed for the cybersecurity to be better."

Like I said, that was a fascinating piece from the Dwarkesh Patel podcast, because it goes to show that AGI will possess a huge amount of economic value, so much that people may try to steal it. I do think that stealing AGI is quite improbable, but it goes to show how valuable this technology will be once it is released. AGI arriving is of course going to be a monumental moment, and whichever company manages to deliver it will be valuable for the next decade and beyond. But I do think you all should take a look at this

### [9:50](https://www.youtube.com/watch?v=V9_uPASaptQ&t=590s) Sam Altman's Statement

clip from the Lex Fridman podcast, where Lex was discussing the benchmarks and how some people don't realize just how good GPT-4 is, with some people even stating that Sam Altman shipped an AGI and people don't even realize it. The reason I'm saying that is that if GPT-4 had been released, say, five or ten years ago, some people would instantly have called it an AGI-level system. Later in the video I will get into some of the actual benchmarks you can use to tell what kind of AGI a system is, but take a look at this clip, because Sam Altman's statement is quite something:

"Someone said to me over the weekend, 'you shipped an AGI, and I'm somehow just going about my daily life and I'm not that impressed.' I obviously don't think we shipped an AGI, but I get the point, and the world is continuing on." "When you build, or somebody builds, an artificial general intelligence, would that be fast or slow? Would we know what's happening or not? Would we go about our day on the weekend or not?" "I'll come back to the 'would we go about our day or not' question; I think there are a bunch of interesting lessons from COVID and the UFO videos and a whole bunch of other stuff we can talk about there." "On the takeoff question: if we imagine a 2x2 matrix of short versus long timelines until the AGI takeoff starts, and slow versus fast takeoff, do you have an instinct on what the safest quadrant would be?" "So the different options are that the takeoff period starts next year or in 20 years, and then takes one year or 10 years?" "You can even say one year or five years, whatever you want, for the takeoff." "I feel like now is safer, so I'm in the slow takeoff, short timelines camp. That's the most likely good world, and we optimize the company to have maximum impact in that world, to try to push for that kind of world, and the decisions we make have their probability mass weighted towards it. I'm very afraid of the fast takeoffs, and in the longer timelines it's harder to have a slow takeoff; there are a bunch of other problems too, but that's what we're trying to do." "Do you think GPT-4 is an AGI?" "I think if it is, then just like with the UFO videos, we wouldn't know immediately. It's actually hard to know." "I've been playing with GPT-4 and thinking, how would I know if it's an AGI or not? To put it a different way: how much of AGI is the interface I have with the thing, and how much is the actual wisdom inside of it? Part of me thinks you can have a model that's capable of superintelligence and it just hasn't quite been unlocked. What I saw with ChatGPT, just doing that little bit of RL with human feedback, makes the thing so much more impressive and so much more usable. So maybe with a few more tricks, like you said there are hundreds of tricks inside OpenAI, a few more tricks and all of a sudden, holy..."

That clip goes to show that AGI isn't as far away as we think, and even when AGI does arrive, some of us might not realize it's here, because we are so used to technological advances now. One of the things I also find interesting about AGI and future technologies is what they will actually mean for everyone. There is of course the discussion of the singularity, a point in time at which technology is so advanced that we don't know what happens beyond it, and some have theorized that this is going to take place in the year 2045, which is around 21 or 22 years away. The thing we need to focus on next, which is going to be the next step in AI, is what AGI actually means for us. There are several different economic theories and possibilities that people have put forward, but Sam Altman has given his insights into what he thinks AGI is

### [13:38](https://www.youtube.com/watch?v=V9_uPASaptQ&t=818s) What Does AGI Mean for Us

actually going to be like for the vast majority of us. Take a look at this clip, because it shows what the future of AGI means for everyone, and I do hope that once AGI is released it is actually good for everyone. Like I said, I believe AGI holds both sides of the spectrum in its hands: on one side you could have mass job displacement, but on the other you could have mass economic prosperity and advanced technology that is available and accessible to everyone.

"One of the challenges I think we've had in talking about the work that you've done and that OpenAI is doing is helping people understand your vision of what artificial general intelligence means for our future. Can you help this room understand how their lives will be changed? You've said you can't predict the future, but as we move forward, what will AGI mean for all of us?" "There are many important forces shaping the future, but I think the two most important ones are artificial intelligence and energy. If we can make abundant intelligence for the world, and if we can create abundant energy, then our ability to have amazing ideas, for our children to teach themselves more than ever before, for people to be more productive, to offer better healthcare, to uplift the economy, and to actually put those things into action with energy: I think those are two massive things. Now, they come with downsides, and so it's on us to figure out how to make this safe and how to responsibly put it in the hands of people. I think we see a path now where the world gets much more abundant and much better every year, and people have the ability to do way more than we can possibly imagine today. I think 2023 was the year we started to see that, 2024 will see way more of it, and by the time the end of this decade rolls around, I think the world is going to be in an unbelievably better place. It sounds like silly sci-fi optimism to say this, but think about how different the world can be. Today every person has, you know, ChatGPT, and it's not very good. Next they'll have the world's best chief of staff, then after that every person has a company of 20 or 50 experts that can work super well together, and after that everybody has a company of 10,000 experts in every field that can work super well together. If someone wants to focus on curing disease, they can do that, and if someone wants to focus on making great art, they can do that. If you think about the cost of intelligence and the quality of intelligence, the cost falling and the quality increasing by a lot, and what people can do with that, it's a very different world. It's the world that sci-fi has promised us for a long time, and for the first time I think we get to start to see what that's going to look like."

So yes, it could be one of the most successful and happiest generations, filled with a lot of prosperity for everyone involved, but like I said, there are always other things that could happen. Now, one of the people that has predicted technological advancements with really good accuracy is of course

### [16:31](https://www.youtube.com/watch?v=V9_uPASaptQ&t=991s) Ray Kurzweil

Ray Kurzweil. Ray Kurzweil is Google's director of engineering and a well-known futurist with a strong track record of accurate predictions: of his 147 predictions made since the 1990s, he claims an 86% accuracy rate. He has made another prediction about the technological singularity happening within the next 30 years. This is exactly what he said: "2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the singularity, which is when we will multiply our effective intelligence a billionfold by merging with the intelligence we have created."

That is a startling prediction, because 2029 might seem far away, but if you look back just 20 or even 30 years at the technology we had, it looks completely outdated. With the law of accelerating returns and exponential growth, and with things like Moore's law, under which the number of transistors on a chip doubles roughly every two years, the level of technological advancement by 2029, let alone 2045, could be extremely significant, which means the future gets harder and harder to predict.

Now, something you should note about AGI is the definition itself, because AGI is really hard to define. Google DeepMind did come out with a paper that sets out a framework for assessing what skill level an AGI system has, so when quote-unquote AGI does arrive, maybe in 2025, 2026, or even 2029, we'll at least know where it stands on the scale. The paper is called "Levels of AGI: Operationalizing Progress on the Path to AGI", and it says: "We propose a framework for classifying the capabilities and behavior of artificial general intelligence models and their precursors." So if we actually take a look at some of these
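The two-year doubling mentioned above can be sanity-checked with a quick back-of-the-envelope sketch. This is a rough model only: real transistor counts have never tracked a clean two-year doubling exactly, and the function names here are my own, not anything from Kurzweil or the video.

```python
def doublings(start_year: int, end_year: int, period_years: float = 2.0) -> float:
    """Number of doubling periods elapsed between two years."""
    return (end_year - start_year) / period_years

def growth_factor(start_year: int, end_year: int, period_years: float = 2.0) -> float:
    """Multiplicative growth, assuming one doubling per period."""
    return 2 ** doublings(start_year, end_year, period_years)

# 2023 -> 2029: three two-year doublings, so roughly an 8x increase.
print(growth_factor(2023, 2029))   # 8.0
# 2023 -> 2045 (Kurzweil's singularity date): eleven doublings, ~2048x.
print(growth_factor(2023, 2045))   # 2048.0
```

Under the same assumption, any 30-year window implies a factor of about 32,000, which is one way to see why long-range forecasts get so murky.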

### [18:28](https://www.youtube.com/watch?v=V9_uPASaptQ&t=1108s) Current AI Levels

levels, you can see exactly where we are on this chart. There are three columns: performance, narrow AI, and general AI. Narrow AI we have completely done; we have excelled, going from level 1 all the way up to superhuman, which covers AlphaFold, AlphaZero, and Stockfish, systems that outperform 100% of humans. Where we are in the general column is emerging AGI, which means equal to or somewhat better than an unskilled human; emerging AGI covers Llama 2, Google's Bard, and ChatGPT. We are just about to reach the level of competent AGI: general AI across a wide range of non-physical tasks, including metacognitive abilities like learning new skills, where the system performs at least at the 50th percentile of skilled adults. This is not yet achieved. After that would come expert AGI, at least the 90th percentile of skilled adults, then virtuoso AGI, at least the 99th percentile, then superhuman AGI, which would actually be artificial superintelligence, also not yet achieved.

So far it looks like the next step, competent AGI, might arrive very soon, and this is good, because although some people might not agree on the exact definition of AGI, at least we have definitions, so we know where we are on the chart. Currently we're at the emerging AGI stage, and until a system manages to reach at least the 50th percentile of skilled adults, it will be interesting to see the kinds of AGI systems developed. Then of course I'd
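The performance tiers just described can be sketched as a small lookup table. The thresholds are paraphrased from the DeepMind paper as percentile-of-skilled-adults cutoffs; the code structure itself is my own illustration, not anything the paper specifies.

```python
# Performance levels from the "Levels of AGI" framework, expressed as the
# minimum percentile of skilled adults a system must match or beat.
LEVELS = [
    ("Level 1: Emerging", 0),      # equal to or somewhat better than an unskilled human
    ("Level 2: Competent", 50),
    ("Level 3: Expert", 90),
    ("Level 4: Virtuoso", 99),
    ("Level 5: Superhuman", 100),  # outperforms 100% of humans
]

def classify(percentile: float) -> str:
    """Return the highest level whose percentile threshold is met."""
    best = "Level 0: No AI"
    for name, threshold in LEVELS:
        if percentile >= threshold:
            best = name
    return best

print(classify(10))   # Level 1: Emerging  (roughly where current chatbots sit)
print(classify(55))   # Level 2: Competent (not yet achieved for general AI)
```

Seen this way, the video's claim is simply that general-purpose systems are still stuck at the first rung while narrow systems like Stockfish already sit at the last.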

### [20:06](https://www.youtube.com/watch?v=V9_uPASaptQ&t=1206s) OpenAI AGI

like to talk about this video, in which we discussed how some people were claiming that OpenAI had achieved AGI. This comes from statements by Sam Altman himself, where he actually joked about AGI being achieved internally. Whether or not there have been hiccups on that journey, or some slight changes due to OpenAI's board shakeup and other things happening, I do think some clips from that video (I will leave a link to the full one) are worth a second watch, because the next 5 to 10 years are going to be some of the most important for our generation. There's even the matter of Jimmy Apples, who, if

### [20:43](https://www.youtube.com/watch?v=V9_uPASaptQ&t=1243s) Jimmy Apples

you didn't know, is essentially an OpenAI leaker. We aren't sure who he is or what he does, but we do know that he always has early information about GPT-4 or whatever OpenAI is currently working on. If you're wondering about any skepticism regarding his statements: his previous statements about OpenAI have come out with 100% accuracy. For example, he tweeted the release date of GPT-4 a week before it was even announced, meaning he definitely has some kind of inside information. Many sources speculate that this Jimmy Apples guy is someone who spies on people near the OpenAI headquarters. There's a 10-minute video that goes into this, where they talk about how Jimmy Apples said, "let's pick a random date, say March 14th", and then GPT-4 was announced on that exact date. Whoever this person is, they definitely have some inside information.

Now, back to the statement about AGI. He said that AGI had been achieved internally on September 18th, 2023. What's crazy is that a week later the CEO, Sam Altman, commented on a certain Reddit post that AGI has been achieved internally, and then, what was crazier, he edited the comment to say: "obviously this is just memeing; when AGI is achieved, it will be announced, and it won't be announced with a Reddit comment". So it seems that Sam Altman is aware of this OpenAI leaker and understands that whatever is going on behind the scenes, Jimmy Apples is likely to know a decent amount about it. At the same time, I find it super interesting that Jimmy Apples goes ahead and says AGI has been achieved internally, and then a little over a month later OpenAI starts to talk about AGI, stating that AGI is simply their main focus and core value and that anything that doesn't help with it is out of scope, which leads me to believe that Sam Altman and his team at OpenAI are much closer to AGI than we initially thought.

---
*Source: https://ekstraktznaniy.ru/video/14614*