Get your Free AGI Preparedness Guide - https://theaigrid.kit.com/agi
🎓 Learn AI In 10 Minutes A Day - https://www.skool.com/theaigridacademy
🌐 Want to learn even more AI? https://www.youtube.com/@TheAIGRIDAcademy
Links From Today's Video:
Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
Music Used
LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Table of Contents (5 segments)
Segment 1 (00:00 - 05:00)
So Sam Altman recently made a statement at the AI summit in India where he said that there are probably two years left. Let's talk about it. The statement "2 years left" is arguably pretty controversial, considering that the AI industry is split: I would guess you could say 75% of people believe that AGI is right around the corner, and the other 25% are purely skeptical. In this video, I'm actually going to give you guys a nuanced opinion, because while statements like these might be a little bit controversial, the more time goes on, the more data there is to support the argument. So, let's first take a look at this clip, and then we're going to dive into more of the information. — On our current trajectory, we believe we may be only a couple of years away from early versions of true superintelligence. If we are right, by the end of 2028, more of the world's intellectual capacity could reside inside of data centers than outside of them. This is an extraordinary statement to make. And of course, we could be wrong, but I think it really bears serious consideration. A superintelligence at some point on its development curve would be capable of doing a better job being the CEO of a major company than any executive, certainly me, or doing better research than our best scientists. — One of the things I do want to talk about is really crazy, and as I explain myself here, you're going to start to realize what I'm talking about. So Sam Altman, he's in India, I think, at the moment, and he's doing a lot of these tours. And like I said, we're going to get into the debate of whether or not this is just standard AI hype-bro stuff, because of course, as you may know, some people probably won't buy the fact that AGI is right around the corner. And some people on Twitter, which we can see right here, someone says, "Please, we're so close. Just a little bit more money, maybe a couple more billion dollars.
" And I guess yes, you could say that the incentive is there for these billion-dollar companies to try and raise the hype up. But still, if we actually look at things with the nuanced opinion, AI is really advancing at a rapid rate. And so in this clip, what Samman actually talks about, and like I said, after this, I'm going to dive into the exact data. He talks about the fact that we keep pushing the lines backwards. goalpost forward and forward and forward. Take a listen to this. And I might just play the audio. I'm not going to show if I will show the full video clip. But as I was diving into this, it's actually really important to see the pace of progress. — Um, so AGI, how far — I mean AGI feels pretty close at this point. — Okay. — Like if you had asked me I think most people 6 years ago, what would you think if we had systems that could do new research on their own? What would you think if we had systems that could make an entire complex computer program on their own that could do pretty sophisticated knowledge work in all these different fields? You know, you could have one system that could act as an AI doctor, lawyer, computer scientist. We would say, "Okay, that sounds pretty general and pretty intelligent. " Um, we get used to whatever we have. Uh, but just watching how much the technology we already have is accelerating us internally, I would say it's pretty close. Um, and given what I now expect to be a faster takeoff, I think Super Intelligence is not that far off. — I don't know if you guys remember, but less than two years ago, it was Sora that was released, I think this time in February, and it literally took the world by storm, everyone was wondering how on earth had OpenAI done this. But less than 2 years after that, okay, if you were to use Sorret now, most people would be like, what are you even using it for? That is arguably terrible. And I mean, if we take a look at what is going on now, you can see with Sea Dance 2. 
0, which is literally state-of-the-art. And it is pretty crazy what state-of-the-art is compared to 2 years ago. Now, the only reason I've included this is because I want to show you guys just how fast we're progressing. Not just video. Think about images. Images have come a long way since DALL·E Mini. Think as well about audio. Audio has come a long way. Text, with reasoning models, has also come a very, very long way. I mean, look at how incredible software and AI agents have become in terms of actual quality. So, when you think about all of these things, just how quickly we've actually moved, I mean, in every single domain in which AI currently exists, there has been a rapid rate of improvement. And because of that, Sam Altman is basically saying: look, if you were to go back 2 years and show people the technology that we have now, most people would be incredibly surprised by what they see. However, in today's day and age, it doesn't seem like people are as amazed by the tech that they currently have. And I think that is a key point. And the point I want to drive home to you guys in this video is the fact that when AGI does arrive, or when
Segment 2 (05:00 - 10:00)
AI starts to become even smarter than it is now, we will probably be desensitized to how smart AI is, because we're gradually going up the slope. It's not like AGI is going to pop out one day and just be able to do everything. The way AI is being deployed at the moment, it's gradually rolled out to the public, so there isn't that much of a crazy reaction. But I think this is something to keep in mind, because most people won't realize it when it does occur. Now, Sam Altman also had a talk at Stanford, and I think this is one of the most difficult things to answer, because, yes, AGI may come in 2 to 3 years, depending of course on different timelines, which we'll talk about in a moment. He was essentially asked the question of, you know, what's going to happen to the people that graduate, and he said that if you're a sophomore, you'll graduate into a world with AGI in it. So, take a listen to this, and then I'm going to explain exactly what he means in further detail. — So, I mean, we're all sophomores here. Um, there's a lot that's going to change in the next few years. I mean, before we graduate, I think, do you think AGI will be... — Yeah, I think if you are a sophomore now, you will graduate to a world with AGI in it. Uh, and in some ways it'll, like, be the same. You'll go move and get a job and think about having a family and whatever else. Like, you never bet against, sort of, the human drives, and what we care about staying the same. But in some other sense, it's just going to be wildly different. Uh, you know, I think we'll have AI, uh, improving at an incredibly rapid clip, um, society will be coming along, uh, at a slower clip but still very fast, science will be getting automated, uh, what it means to start a startup will be totally different, what it means to go work at a company will be totally different. There's going to be a lot of change in the next couple of years.
Uh, you are lucky to be a student right now, because you can go do kind of whatever, uh, and you can try a lot of things quickly. Um, and you are also lucky in that this is probably, like, the most interesting time to be starting off adult life or a career. Um, but I think a lot of the traditional advice is not quite going to work. — And so essentially what he means here is that by the time today's 15- or 16-year-olds finish school or university, AGI will likely exist. And remember the implications of that. Think about it. It's going to be pretty difficult to prepare and plan for that future. You need to be able to navigate the fact that the world you're preparing for today might not exist 2 to 3 years from now. And of course, you don't want to be someone who is behind because, you know, you're preparing for a future that may not even exist. So think about it: if AGI is real, let's say for a moment, okay, let's just put a pin in it and say for a moment AGI is real, those implications are profound. It's going to change jobs, education, businesses, creativity, pretty much everything. So even if you don't realize it, I think it's most certainly something that people should consider, because it's like training to become a taxi driver right before Uber, a DVD rental manager right before Netflix, a film camera technician right before iPhones. But this shift is probably going to be much bigger. And when these changes do occur, of course, you do want to be prepared for them. And funnily enough, one of the most profound statements, and this statement kind of jolted me and really stuck with me, because I also saw a tweet that echoed the same claim, came from Sam Altman talking with someone on stage at the same talk we just saw. Essentially, what he said is that the world is not prepared. And I think that statement is true.
Regardless of what you want to say about the AI bubble, whether or not he needs billions of dollars, I do agree with his statement that the world isn't prepared for AGI or AGI-level software. Because when we actually take a look at how the world is currently set up, most people, if you just go across the board and look at AI usage, 80% of people only know what ChatGPT is. I would argue that watching a video like mine probably puts you in the 1% of individuals who are the early adopters of AI, who actually know what's going on, and who are probably going to be in that early majority, which is of course a good thing. And if you guys think that is just, you know, a quick tidbit or a throwaway statement, you have to understand that I saw a tweet yesterday, and I'm actually going to bring it up because I really want to put things into perspective. David Breier says, "A reminder that you're in a bubble: more people Google WordPress than Claude Code." And it just goes to show how much of a bubble there is in terms of AI people. And it's not like the current AI stock market bubble that people think there may or may not be. This is the kind of bubble where it's like you're in your own world. Anyone in the AI space right now is in
Segment 3 (10:00 - 15:00)
their own sphere. And when you actually take a look outside of that sphere, you jump out of that bubble for a minute and you go into the real world, you'll realize that AI just isn't present. Okay, and I know it might seem wild, but I don't think AI has entered mainstream consciousness yet in the way it should if people understood the implications of this technology. And I'm not saying that people don't know about AI. I'm just saying it doesn't seem like the world is acting in accordance with the fact that AGI could be 2 to 4 years away. If it were, I think the world would be a dramatically different place. We'd see certain laws in place. We would see a lot of different things. But the thing is, that just isn't happening. The point I'm trying to make here is that the world is moving quicker than most people are prepared for. — The inside view at the companies, of looking at what's going to happen: like, the world is not prepared. We're going to have extremely capable models soon. It's going to be a faster takeoff than I originally thought, and that is stressful and anxiety-inducing. — And so in that last part, you can see that Sam Altman said that the rate of change is borderline anxiety-inducing. And I do agree with him. I remember that when I started this channel, it was pretty fun. And then there came a certain point where it wasn't as fun; it was actually anxiety-inducing, because when you're diving through the headlines every single day and you realize the tech is about to come and change the world, most people aren't prepared, you aren't prepared, those thoughts tend to keep you up at night. And I remember what I actually did at that time was make a guide to AGI for myself, because at the time I just couldn't contend with the fact that there was no, you know, fully established framework that people were adopting so that they could at least prepare in advance for AGI.
I mean, I think the worst situation you can be in is if you know something's coming but you can't do anything about it. If you know something is going to happen, the best thing that you can do is prepare, at the very least. That way, you're more prepared than the next person. And essentially, all I did was create a free PDF guide to prepare yourself for AGI. I mean, it's super useful. I will leave a link to it in the description. Completely free; all you need to do is put your email in and then you get the download. And the reason I did this was because I realized that most people don't know the type of questions they should be asking, because most people just haven't considered that AGI will eventually be here. And it's something that I've considered. I've changed my lifestyle, and I was like, well, I'm fortunate enough to be in a space where I probably know more than the average person about how the economy may change in the next 5 to 10 years. And I think it would be wise for me to create some kind of framework that I continually look at, so that when it does happen, I'm not completely thrown off and surprised like everyone else will be. And so that's why I created that. And as I was creating this video, it just reminded me that I do have this resource, and I wanted to share it with you guys for free. Now, if we actually dive into some of the data and opinions, before I get into some of the counterarguments, one of the things that I want to look at here is, of course, SimpleBench. Now, most people don't know what this is. This is a benchmark that isn't like the other standard test-generation ones. SimpleBench is a benchmark that judges an AI's ability to reason like a human. So an example question would be something like: if I put some ice on this table and I come back in an hour and I pick up the ice again, what would happen in that situation?
A smart reasoning engine would realize that, dude, the ice would probably have melted. Okay, so for example, you could say: if I put five ice cubes on a fire and I come back in 5 minutes, how many ice cubes are left? The smartest AIs would reason that if you put ice cubes on a fire, then none would be left. But prior models wouldn't have made this reasoning step. They would have just said you haven't moved any, so five are left, not realizing that fire creates heat, which melts them into water, meaning that your ice cubes would disappear. And I've actually focused on this benchmark because it lets you compare models and see which are best at implicit understanding. And you might be thinking, well, what does that have to do with AGI? Well, I think it's one of those key benchmarks that doesn't get talked about, but it's one that people will natively understand, because it tests whether the AI understands the deeper level of reasoning the human is trying to get at. And all I'm trying to say here is that this benchmark has been consistently improving, and the rate of improvement has been pretty crazy, because I remember when scores were around, you know, 10 to 15% just a year and a half ago, and now they're rapidly approaching the human baseline. So it's kind of surprising to me to see that this benchmark, which was pretty decent, is probably going to be saturated sometime soon. And it's not the only benchmark that is going to be saturated. And I know people will say that, okay, well, AI benchmarks aren't real, there are massive issues with the benchmarks; last year you had that paper from Apple which argued that LLMs aren't reasoning, and that if you change certain questions, change the names and the values, then the measured performance drops. However, benchmarks like ARC-AGI are changing things, because those are
Segment 4 (15:00 - 20:00)
benchmarks designed to, once again, test human reasoning. And that is why I included ARC-AGI-2, and I also included SimpleBench, because those are the two benchmarks that I would say test human-like reasoning, which is arguably what humans deal with on a day-to-day basis. And those benchmarks are the ones where you're seeing massive jumps. As you can see, scores went from 30% on Gemini 3 Pro all the way to 84% on Gemini 3 Deep Think, which is pretty incredible. I mean, the ARC-AGI benchmark is specifically designed to test how you can solve problems that you've never seen before. It's not, you know, pattern matching from training data. It's basically one of the harder benchmarks, because it requires genuine abstract reasoning. And 84% in such a short space of time is pretty crazy. And this was, you know, a benchmark that was meant to measure progress towards AGI. So the fact that they're now above humans, you have to think about it in the sense that, guys, this is a lot faster than we thought. I mean, I would not have guessed that we'd be at 84% by this date. I would have said, yeah, maybe it's at 60%, maybe it's at 50%, but 84% is pretty crazy. And once again, if you look at LLM time-horizon tasks, this is the METR time-horizon benchmark, which measures how long an AI model can work autonomously on real software-engineering tasks before it fails or needs human help. Think of it as: how many hours can you give an AI a coding task, walk away, and it will still get it done? Now, the key takeaway here is the exponential curve. You can see that things look fairly flat, and then in 2024 and 2025, the curve shoots straight up. Claude Opus 4.5 hit around 5 hours, and GPT-5.2 hit around 6 to 7 hours. And now Claude Opus 4.6 is pushing 14 hours of autonomous work. That is essentially doubling in roughly 2 months, which is the point that most people are missing. Okay, most people will say that AI progress has stalled.
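The doubling arithmetic the narration quotes here is easy to sanity-check yourself. The sketch below is purely illustrative: the ~14-hour horizon and the ~2-month doubling period are the figures cited in the video, not verified METR data, and `months_to_reach` is a hypothetical helper name.

```python
import math

def months_to_reach(current_hours: float, target_hours: float,
                    doubling_months: float) -> float:
    """Months until an exponentially doubling time horizon hits a target.

    Assumes a constant doubling period, which is the video's premise,
    not a guaranteed trend.
    """
    doublings_needed = math.log2(target_hours / current_hours)
    return doublings_needed * doubling_months

# Using the video's numbers: ~14 h of autonomous work today, doubling
# roughly every 2 months, a full 40 h work week of autonomy would be
# only about 3 months out.
print(round(months_to_reach(14, 40, 2), 1))  # ~3.0 months
```

On those assumptions the jump from 30 minutes to 14 hours is about 4.8 doublings, which is why even small changes in the assumed doubling period move the projected dates so dramatically.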
But if you look beyond the traditional, surface-level benchmarks and look underneath, at how these models are reasoning, progress doesn't seem to be slowing down. When you look at all of this data, okay, image, video, audio, reasoning, agentic work, all of these things are improving, you start to understand Sam Altman's 2-year timeline. The jump from 30 minutes of useful work to 14 hours happened in about a year; think about it: full-workday autonomy isn't far off. Now, if you're looking at other people's timelines, we have Dario Amodei, who said that, look, the world needs to wake up to the risks of AI. He had a 19,000-word essay titled "The Adolescence of Technology," and he described the arrival of highly powerful AI systems as potentially imminent. He wrote: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species." And if you're wondering about Elon Musk's prediction, he basically says the end of next year, but I do want to say that with Elon Musk predictions, you have to factor in Elon time, because he's, let's just say, a little bit more on the optimistic side. — I don't know what's going to happen in 10 years, but at the rate at which AI is progressing, uh, I think we might have AI that is smarter than any human by the end of this year. Um, and I would say no later than next year. — Wow. — Um, and then probably by 2030 or 2031, call it 5 years from now, uh, AI will be smarter than, uh, all of humanity collectively. — Now, I want to show you guys some other clips that are super interesting, such as Yann LeCun, because some individuals believe that AGI is, you know, not as close as these people make it seem. Yann LeCun is probably the most vocal when it comes to arguing for AGI timelines that are further away than people think.
And he actually says that there's no way in hell that AGI will be here by around 2027 to 2028. And we have to remember what he's talking about here. He's talking about a generally intelligent system, a system that is as smart as a human, and he says, no matter what you hear from his colleagues, there is no way in hell this will happen. Now, he has some fair enough reasons, and it's going to be interesting to see who will be proven right. — Absolutely no way. Um, and whatever you can hear from some of my, uh, more adventurous colleagues, uh, it's not going to happen within the next two years. There's absolutely no way in hell, to, you know, pardon my French. Um, the, you know, the idea that we're going to have, you know, a country of geniuses in a data center, that's complete BS, right? There's absolutely no way. What we're going to have, maybe, is systems that are trained on sufficiently large amounts of data that any question that any reasonable person may ask will find an answer through those systems. And it
Segment 5 (20:00 - 21:00)
would feel like you have, you know, a PhD sitting next to you. But it's not a PhD you have next to you. It's, you know, a system with a gigantic, uh, memory and retrieval ability, not a system that can invent solutions to new problems, um, which is really what a PhD is. Okay, this is actually, it's, you know, connected to this post that, uh, tomul made: inventing new things requires a type of skill and abilities that you're not going to get from Adams. — And so here's where we have another comment from Yann LeCun, where he basically says that the real world is simply too complex for an LLM to be able to do the tasks that it needs to do. — All those people bleating about, like, you know, AGI in a year or two is just complete delusion. Just complete delusion, because the real world is way more complicated, and you're not going to get it. You're not going to get anywhere by tokenizing the world and using it. It's just not going to happen. — And I kind of agree with him on certain points. On one hand, you do have LLMs that are incredible at certain tasks on a computer, at reasoning, and on these benchmarks. But on the other hand, when you think about what a human is actually able to do in terms of planning, reasoning, and memory, LLMs do have significant flaws that are built into the architecture. And I think there will need to be some core architectural changes for AGI to happen, probably two or three breakthroughs, just like Demis Hassabis said. But of course, everyone is debating the timeline. I do think that when we do get AGI, though, it will probably be a very blurry line. But it's important to hear nuanced views, because I always want to show you guys both sides of the spectrum.