# AI Godfather Stuns AI Community "No Way In HELL AGI In 2 Years"

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=Ny4XilJuuy4
- **Date:** 30.03.2025
- **Duration:** 20:35
- **Views:** 31,671
- **Source:** https://ekstraktznaniy.ru/video/13128

## Description

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Today's Video:
https://www.youtube.com/watch?v=qvNCVYkHKfg&pp=ygUkd2h5IGFpIGNhbnQgbWFrZSBpdHMgb3duIGRpc2NvdmVyaWVz

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Transcript

### Segment 1 (00:00 - 05:00)

No way. Um, and whatever you can hear from some of my, uh, more adventurous colleagues, uh, it's not going to happen within the next two years. There's absolutely no way in hell. So that was Yann LeCun, and he clearly said that there is no way in hell we're going to get to AGI in 2 years. Now, before you guys go "whoa, whoa, whoa, this is complete nonsense," please remember that Yann LeCun is a really accomplished and important scientist who actually paved the way for computers to see pictures and recognize things in them. He invented convolutional neural networks (CNNs), and he did this in the early 1990s, and that invention became the foundation for how computers understand images today. So understand that this isn't some random guy. This is the guy whom many people refer to as one of the godfathers of AI, along with Geoffrey Hinton and Yoshua Bengio, because they all believed in and developed neural networks when most other scientists thought they were just science fiction. So remember that this guy previously predicted something would work when everyone else said it wouldn't. Maybe he's on to something here again. Now, this guy actually works at Meta, and he also won the Turing Award, which is basically the Nobel Prize for computer scientists. So you have to understand that this isn't someone making rash statements. It's someone you should probably at least be paying attention to rather than dismissing, because while, yes, there is AGI hype from many different company founders, this is someone who clearly knows their stuff and has been in this space for longer than some of us have been alive. So, let's take a look at the full clip. I want to dissect a few things from the statement, because as time goes on, I'm actually leaning more toward believing everything Yann LeCun says, simply because certain things in AI just don't make sense. But listen to what he says. Absolutely no way.
Um, and whatever you can hear from some of my, uh, more adventurous colleagues, uh, it's not going to happen within the next two years. There's absolutely no way in hell, you know, pardon my French. Um, the, you know, the idea that we're going to have, you know, a country of geniuses in a data center, that's complete BS, right? There's absolutely no way. What we're going to have maybe is systems that are trained on sufficiently large amounts of data that any question that any reasonable person may ask will find an answer through those systems. And it would feel like you have, you know, a PhD sitting next to you. But it's not a PhD you have next to you. It's, you know, a system with gigantic, uh, memory and retrieval ability, not a system that can invent solutions to new problems, um, which is really what a PhD is. Absolutely no way. So one of the things he says in this interview that most of you probably missed is a particular statement, okay? And I'm going to rewind to where he says it. He actually said, "There's no way in hell, pardon my French, the idea that we're going to have a country of geniuses in a data center." Now, to most of you, that probably sounds like a normal statement, but I remember reading something with that exact line in it. Now, what he's referring to, okay, and I don't want to start a Twitter war or accuse someone of throwing shade here, I'm just doing some investigative journalism, but "a country of geniuses in a data center" is a phrase I've only seen in one essay. And that essay is by Dario Amodei, the CEO of Anthropic, the company that produces the Claude models. Now, Dario Amodei wrote this article/blog/essay, whatever you want to call it, and in it he talks about the future of AI and how AI could transform the world for the better. It's called "Machines of Loving Grace." I've done an extensive video on it. It's super intriguing and really lets you understand where AI is going to go.
Now, crazily, he actually refers to, you know, powerful AI as "a country of geniuses in a data center." Now take a look, okay, because we need to understand what he's talking about with this context, because the whole question is whether we're going to have AGI in a data center. Take a look at what Dario Amodei actually says, and the reason I need you guys to understand the context is so that you can understand the claim that Yann LeCun is refuting. So the claim he's referring to is this: by "powerful AI," Amodei means a model similar to today's LLMs in some form, though it might be based on a different architecture and may involve several interacting models, and it's going to have the following properties. These are the qualities you need to understand when you're looking at an AI in a data center, or a country of geniuses. He talks about, you know, the fact that it's going to be smarter than a Nobel Prize winner across relevant fields: biology, programming, math, engineering, and writing. This means it can prove unsolved mathematical theorems, write extremely good novels, and write difficult code bases from scratch. Also, it's going to have interfaces available to a human working virtually, including

### Segment 2 (05:00 - 10:00)

text, audio, video, mouse and keyboard control, and internet access. It can engage in actions, communications, remote operations, giving directions to humans, experiments, watching videos, making videos, and so on. And again, it does all of those tasks with skill exceeding that of the most capable humans in the world. Now, each of these millions of copies can act independently on unrelated tasks, or, if needed, can all work together in the same way that humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks. And you can see right here that this is where he says we could summarize this as "a country of geniuses in a data center." And of course, the resources used to train the model can be repurposed to run millions of instances of it, and the model can absorb information and generate actions at roughly 10 to 100 times human speed, though it's probably going to be limited by the physical world. That huge claim is what Yann LeCun is saying there is absolutely no way is going to happen. Now, of course, other individuals have discussed this, and it's super interesting to hear their take. So if you imagine a million geniuses in a data center, each one of which is sort of Nobel Prize winning at its general capability, but running at a hundred times human speed, which never gets tired, never sleeps, doesn't have to eat, doesn't have to go on vacation. Um, you can just copy and paste them and have as many of them as you want. They can interact with each other. They can sort of build plans together. You know, there's a million of them. It's more like a society or a nation or a civilization than an AI system. So imagine that that's something we could have in a few years. Now, if you ask yourself, like, how do I control that thing, or how do I, uh, just turn it off?
You start to think, well, that's more like turning off a country than a machine, right? That thing, if it knows that you might want to turn it off, is going to take measures. Um, remember, it's Nobel Prize winning level of intelligence, and there's a million of them operating at 100 times human speed. They're way ahead of us. So Yann LeCun here is clearly saying, "Look, there's absolutely no way that this is going to happen. There's no way that we're going to have this." Now, of course, he's basically referring to the fact that everyone else in the AI industry is predicting AGI is going to happen quite a lot earlier, and LeCun is basically saying that, look, everyone is probably going down the wrong path. And when we look at people's predictions, which is what I want to quickly focus on here, we can see that people like Dario Amodei, the person who wrote "Machines of Loving Grace," do have much shorter timelines than others. Dario Amodei recently said his timeline is potentially 2026 to 2027. So 2026, 2027 is when you effectively get to AGI across the board, and it's the threshold moment. Whoever's ahead then is ahead forever. Is that what you're saying? Potentially. I mean, we don't know these things, but there's a risk of this, and it's happening at a particularly uncertain time geopolitically, but we'll get to that. I've heard those same years, you know, floated as... No, I'm aware of how frightening this is. If we take a look at Demis Hassabis, the boss of Google's AI at DeepMind, we can see that he has a more conservative timeline. We've been working on this for more than 20 years. Um, we've sort of had a consistent view about AGI being a system that's capable of exhibiting all the cognitive capabilities humans can. Um, and I think we're getting, you know, closer and closer, but I think we're still probably a handful of years away.
And when we actually take a look at what's going on at OpenAI behind the curtain, there are apparently PhD-level super agents in the works. Certain articles refer to OpenAI's PhD-level agents. They're, you know, super agents: AI tools designed to tackle messy, multi-layered, real-world problems that the human mind struggles to organize and conquer, not responding to just a single command, but pursuing a goal. Super agents synthesize massive amounts of information, analyze options, and deliver products. And it was crazy, because several OpenAI staff have reportedly been telling friends that they are both jazzed and spooked by recent progress. So on one side, you have individuals at OpenAI saying, "Look, there are these crazy AI agents." You've got Dario Amodei saying, you know, AGI probably in 2026 or 2027. But Yann LeCun is saying there's absolutely no way in hell; it's absolute BS that everyone's talking about. But what is Yann LeCun's actual AI prediction? First of all, so how long is it going to take? So I think to have possibly a system that, at least to most people, feels like it has the same intelligence as humans, if all of the plans, all of the things that we are imagining, will work. Okay, so if those JEPA architectures and, you know, some other ideas that we're playing with succeed, I don't see this happening in less than five or six years. Okay, but now, is it going to happen in five or six years? I think there's a distribution with a tail that's very long, and the history of AI is that people just keep underestimating how hard it is. I'm probably making the same mistake right now. You know, when I say five, six years, this is if

### Segment 3 (10:00 - 15:00)

you know, we don't run into a major obstacle that we didn't foresee, if all the things that we're planning to try out actually work, if things can scale, if, uh, computers, you know, accelerate, and all that stuff. Like, you know, there are a lot of things, a lot of planets that need to line up for this to happen. So that's the best case, right? It's not going to happen next year, like you might have heard from, you know, some other folks. Sam Altman? Yeah, right, yeah. Hopefully Sam Altman, you know, various people, uh, or, you know, Dario Amodei, yeah, it's going to happen within the next two years or something. Uh, no. And so we have to sort of think about the fact that, okay, Yann LeCun clearly says that AGI is not going to happen soon. And I think he's not the only one saying this. There are various AI critics out there, and I do think they have some stable ground to stand on, but like I said, this guy is accomplished in the AI space. He's super important in the scene. And remember, before, he was talking about stuff that everyone had practically written off, so it makes sense to actually pay attention to what he's stating. And I've shown you guys this clip before, but this is another clip from a recent interview where he describes a simple calculation about the fact that we need a lot more than we currently have, and LLMs simply won't lead to AGI. So take a look at this, and then I'm going to show you guys another short clip in which he explains his reasoning, because I didn't explain the reasoning at the beginning, and some people might just think he's criticizing because he's at Meta. But trust me, guys, it's based on real stuff. But then you compare this with the amount of information that gets to our brain through the visual system in the first four years of life, and it's about the same amount. In four years, a young child has been awake a total of about 16,000 hours.
Um, the amount of information getting to the brain through the optic nerve is about 2 megabytes per second. Do the calculation, and that's about 10^14 bytes. It's about the same. In four years, a young child has seen as much information, or data, as the biggest LLMs. And what that tells you is that we're never going to get to human-level AI by just training on text. We're going to have to get systems to understand the real world. Um, and understanding the real world is really hard. And so he said something that, you know, I don't think is that revolutionary: LLMs are not going to achieve AGI because they're just built on text. I mean, you're trying to recreate human-level reasoning using only text. Now, of course, there were various debates and arguments around this, but at face value, what he's saying doesn't seem that crazy when we realize that humans have been engineered by millions of years of evolution and the brain is a really sophisticated system. Now, Meta's CTO has also said not to bet against Yann LeCun, in this recent interview where he talks about robotics and the fact that, you know, the world models they're trying to build just need a bit more data. They just need something else. I love where we are. I'm a huge believer. I also want us to invest in these world models, and I want to free it from the terminal. And are the world models and the freeing from the terminal fundamentally new, different technologies from the kind of scaled transformer? Is it some kind of combination? Like, what's the looking-through-a-glass-darkly thinking about how this would be done? Yeah, I think, um, I really think that the world model is a new, different thing. Uh, and as a consequence, I think it's an unknowable timeline.
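LeCun's back-of-the-envelope comparison can be checked directly. A quick sketch using the figures he states in the clip (16,000 waking hours in the first four years, roughly 2 MB/s through the optic nerve):

```python
# Rough check of LeCun's data-volume comparison: how much visual data
# reaches a child's brain in four years, using his stated figures.

waking_hours = 16_000        # awake time in the first four years of life
bytes_per_second = 2e6       # ~2 megabytes/s through the optic nerve

seconds_awake = waking_hours * 3600
total_bytes = seconds_awake * bytes_per_second

print(f"seconds awake:     {seconds_awake:.2e}")   # ~5.76e7 seconds
print(f"total visual data: {total_bytes:.2e} bytes")
```

The result is about 1.15 × 10^14 bytes, which matches the "about 10^14 bytes" he compares to the training corpora of the largest LLMs.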
You know, I'm reminded a little bit, when I think about the world model, of where we were with machine learning when I was an undergraduate. I graduated Harvard in 2004, and I taught a class, Introduction to Artificial Intelligence. It's actually, you know, where I met Mark; uh, he was a student of mine in that class. We taught, as a matter of fact, that neural networks were a once-promising, now known-to-be-dead technology. Mhm. And everyone had to build a handwriting-recognition neural network, and it worked, and they were like, "Yep, that's all you can use it for. That's it. Congratulations. Here's your degree." Okay. And now neural networks run the world. And Yann LeCun was right, you know, and, uh, Geoffrey Hinton and Bengio, like, they were right. And they all won the Turing Award, God bless them. Um, and it took a series of other unlocks, GPUs in particular, to get to the point where that technology could flourish. So, you know, I wouldn't bet against Yann LeCun over a long enough timeline, but I don't know what the unlock is yet on that piece. I think the embodied part can do both. The embodied part will benefit a lot, and maybe help a lot with world modeling, um, once you have these sensors out there and you have better, richer data. When you have robotics data, which will give you proprioception, uh, friction, like, I think that is going to be a big unlock. So I think the embodied one, actually, I probably sequenced it wrong; the embodied one is probably the in-between. It benefits the current models a huge amount to get that data, uh, and to be in that context, and it's also probably some of the data that you need to start to understand what it takes to build the world model. And so I think what he said there is really important. You definitely need a rich data source, and of course you do need real-world data if you are trying to build systems that can act in the real world and, of course, can achieve goals set by humans, who exist in the real world.
Humans just don't exist in a text-only paradigm. We've got vision, we've got smell, we've got touch, all these kinds of data constantly streaming into us that make up our world model and allow

### Segment 4 (15:00 - 20:00)

us to reason very effectively. So I think that is, of course, a really important point. Um, we're running into the information-theoretic limits of it, um, you know, and we've talked about throwing compute at it and scaling laws. Um, but if you go all the way back to, like, uh, Norbert Wiener and his cybernetics, the first, you know, constructs of information theory, this idea of how many bits can we pull out of something that are generalizable, that are sufficiently generalizable bits, and we're finding out that, for all the corpus of human media ever produced, it's not enough. We're finding that in robotics. Um, you know, uh, robotics is an effort that we've kicked off recently inside of Meta, kind of as an adjunct to our Llama program. Um, and no matter how many videos you have of somebody grabbing a coffee cup, um, you're actually not getting the data you need, because you don't know the proprioception of how much force is applied, and how we detected: okay, this is a plastic cup, it's going to deflect to a certain point. Um, and, uh, there's condensation on it, so I need to apply a little bit more force to counter the loss of friction that I'm experiencing. We do that autonomically. There's not a single conscious thought in our head when we're doing those things. Uh, Aria, when you're taking your phone out of your pocket, you don't know what the angle of your second digit is, or how much force you're applying with your thumb to avoid getting the keys. To some degree, the things that we think of as intelligence, we're talking about, um, you know, the higher-order functions of the human brain. That's arguably the less impressive part of intelligence. The deep brain, the amygdala, that lizard-brain intelligence, that is wildly hard for us to capture in the modern era.
So, as much as I'm excited about the word calculator, I really do believe in Yann LeCun's vision that you have to do this pioneering work to break through to a world model, um, that has common sense, that understands causality in a more substantial way, not in a statistical-soup kind of way, but in a model-based way. You can see that's exactly what I was just saying. You know, the point is that humans have this embedded level of intelligence that we truly take for granted, and training these AI systems just on visual data, like just giving them video so they can understand the world, and just giving them some text, isn't a true embodiment of real-world intelligence, because they don't really exist with a combination of everything else. There's all this, you know, unspoken intelligence, so to speak, that goes on, that we just simply take for granted. So what does Yann LeCun say is the solution? Because he has, of course, made the statement that, look, this is not going to happen at all; you know, Meta's CTO has agreed, and of course they've said, you know, we're throwing all this data at it and nothing is really working. So LeCun gave a presentation in which he spoke about, you know, abandoning generative models in favor of joint embedding architectures. He says, "Abandon probabilistic models in favor of energy-based models. Abandon contrastive methods in favor of regularized methods. And abandon reinforcement learning in favor of model-predictive control." So you can clearly see here he says, if you are interested in human-level AI, do not work on LLMs. So remember, I spoke in another video, in fact it's actually this video, about how OpenAI, Sam Altman, all these companies are talking about AGI and ASI. However, if you're interested in human-level AI, maybe LLMs are not the way forward. So take a look at this clip, because I think it's really important; he actually explains it better than I ever could.
But he basically says that, you know, we've got a long way to go in terms of actually unlocking the real intelligence that AI systems need in order to perform complex tasks. Let me go to the conclusion. So I have a number of, uh, recommendations. Abandon generative models, the most popular method today that everybody is working on. Stop working on this. Work on JEPAs. Those are not generative models; they predict in representation space. Abandon probabilistic models, because it's intractable; use energy-based models. Uh, we've had, like, a 20-year contentious discussion about this. Um, abandon contrastive methods in favor of those regularized methods. Abandon reinforcement learning; that I've been saying for a long time. We know it's inefficient. Um, you have to use reinforcement learning really as a last resort, when your model is inaccurate or your cost function is inaccurate. Um, but if you are interested in human-level AI, just don't work on LLMs. There's no point. I mean, in fact, if you are in academia, don't work on LLMs, because you're in competition with, like, hundreds of people with tens of thousands of GPUs; like, there's nothing you can bring to the table. Do something else. Um, there are a number of problems to solve: uh, training those things with, you know, large-scale data, blah blah; planning algorithms are kind of inefficient, we have to come up with better methods, so if you are, like, into optimization, applied math, it's great; um, JEPA with latent variables. And so this is where, you know, you can clearly see that he says to abandon LLMs and work on different kinds of architectures that
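The distinction LeCun draws between generative models and joint embedding predictive architectures (JEPAs) can be illustrated with a toy sketch. This is not his actual architecture; it only contrasts the two objectives, and the random linear encoders and least-squares predictor here are illustrative assumptions (in a real JEPA the encoders are learned, with regularization to prevent representation collapse):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y is a noisy transformation of x (think: the next video frame).
x = rng.normal(size=(64, 16))
A = rng.normal(size=(16, 16))
y = x @ A + 0.5 * rng.normal(size=(64, 16))   # noise a generative model must model

# Generative objective: predict y itself, pixel by pixel, in input space.
W_gen, *_ = np.linalg.lstsq(x, y, rcond=None)
gen_loss = np.mean((x @ W_gen - y) ** 2)

# JEPA-style objective: encode x and y into a lower-dimensional
# representation space and predict y's representation from x's.
# Fixed random projections stand in for learned encoders here.
Ex = rng.normal(size=(16, 4))
Ey = rng.normal(size=(16, 4))
sx, sy = x @ Ex, y @ Ey
W_jepa, *_ = np.linalg.lstsq(sx, sy, rcond=None)
jepa_loss = np.mean((sx @ W_jepa - sy) ** 2)

print(f"input-space (generative) prediction error:  {gen_loss:.3f}")
print(f"latent-space (JEPA-style) prediction error: {jepa_loss:.3f}")
```

The structural point is the second loss: the prediction target lives in representation space, so unpredictable detail in the input can simply be absent from the representation, rather than being something the model is forced to generate.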

### Segment 5 (20:00 - 20:35)

are focused on real-world intelligence, not just, quote unquote, predicting the next token. So let me know what you guys think about this. Do you think Yann LeCun is right? I mean, of course, I don't think most of us are esteemed, established AI researchers, but I do think there's definitely merit to what he's saying. I think, you know, predicting the next token is of course leading to some forms of intelligence, but replicating the human level of intelligence, with all its complexity, is probably a lot further off than we think, and I think we might see that in the future. I mean, who knows? It's going to be super interesting to see, because there are a lot of things to consider. But I did want to make this video because I thought it was rather interesting.
