# The AGI Debate That's Currently Dividing Google And Meta

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=nflOt5HOE-k
- **Date:** 23.12.2025
- **Duration:** 15:39
- **Views:** 31,930

## Description

Check out my newsletter: https://aigrid.beehiiv.com/subscribe
🐤 Follow Me on Twitter: https://twitter.com/TheAiGrid
🌐 Learn AI With Me: https://www.skool.com/postagiprepardness/about

Links From Today's Video:
https://x.com/demishassabis/status/2003097405026193809

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=nflOt5HOE-k) Segment 1 (00:00 - 05:00)

So, there is a pretty fierce debate going on right now about AGI, and we need to talk about it. It all starts with this interview clip where Yann LeCun is essentially saying that general intelligence doesn't exist. I'm going to play the clip for you and then we're going to get into some explanations of exactly what he's talking about. — There is no such thing as general intelligence. This concept makes absolutely no sense, because it's really designed to designate human-level intelligence, but human intelligence is super specialized. Okay, we can handle the real world really well, like navigating it, and so on. We can handle other humans really well because we evolved to do this, and at chess we suck. Okay, and there's a lot of tasks that we suck at, where a lot of other animals are much better than we are. So what that means is that we are specialized. We think of ourselves as being general, but it's simply an illusion, because all of the problems that we can apprehend are the ones that we can think of, — right? And vice versa. And so we're general in all the problems that we can imagine, okay? But there's a lot of problems that we cannot imagine. — And there are some mathematical arguments for this, which I'm not going to get into unless you ask me. So Yann LeCun here is basically saying that what we call general intelligence is really just human-level intelligence, and human-level intelligence itself is not general. Instead, it's highly specialized for the kinds of environments and problems we've evolved to survive in, like navigating the physical world, reading social cues, and interacting with other humans. Now, I think what he's trying to say here is that we feel general because we are good at the problems that we can actually imagine. If a problem makes sense to us, it's usually because our brains have already evolved and adapted to handle that kind of problem. But that doesn't mean that intelligence itself is universal.
It just means that our brains are tuned to that specific subset of reality, and there are a lot of things that we know we don't know. Now, in this interview he also points out that there are many tasks that humans are objectively bad at, things that other animals outperform us at easily. And if you've ever seen certain animals do certain tasks... I don't have a video clip of this right now because I think it's copyrighted, but there's a clip of a chimpanzee reacting incredibly quickly to numbers on a screen. And trust me, I've tried this myself; humans just can't compete with the way the chimp taps through the numbers on the screen. I'm going to leave a link to the video because it's genuinely unbelievable. But the point Yann LeCun is making here is that if intelligence were truly general, we wouldn't see such extreme trade-offs. What we see everywhere in nature is specialization, and humans are no exception. We're just specialists who mistake that familiarity for universality. Now, of course, this matters for AGI, because when LeCun says there's no such thing as AGI, he's not saying that AI can't get powerful. He's basically saying that a single system that can handle all possible problems equally well doesn't even make sense mathematically, biologically, or practically. From his perspective, intelligence always comes with blind spots. And honestly, I somewhat agree with this, because an intelligent system that could do everything would essentially be some kind of strange superintelligence. Now, this is where things get interesting, because this is where we get to Demis Hassabis, the CEO of Google DeepMind, which is Google's AI division, and he says something super interesting.
Essentially, he says here that LeCun is incorrect. And when Demis says LeCun is just plain incorrect, he's saying that LeCun is confusing general intelligence with universal intelligence. So this is what we need to discuss, because he's made a very long post that we're going to dive into here. And it's really interesting to see how the Twitter discourse is converging around this topic, because I think maybe toward 2027, or even toward the end of next year, we could actually have some kind of proto-AGI. So it's genuinely going to be super interesting to see. He says that brains are the most exquisite and complex phenomena we know of in the universe so far, and that in fact they're extremely general. Essentially, what Demis is saying here is not that intelligence is unlimited or magically good at everything. He's saying that Yann LeCun is mixing up two different ideas: general intelligence versus universal intelligence. Now, universal intelligence would mean solving literally any possible problem optimally, and of course we know that doesn't exist, and nobody seriously claims it does. But general intelligence is something else entirely. What Hassabis is arguing is that the brain is basically general, because humans are in fact broadly general systems within their own constraints. With enough time, memory, and data, the same underlying brain architecture can learn language, mathematics, physics, chess, music, engineering, social reasoning: things that evolution never explicitly designed us for. And that flexibility is what generality means. Now, of course, humans do have limits, but specialization doesn't cancel out generality. We specialize when we are learning specific

### [5:00](https://www.youtube.com/watch?v=nflOt5HOE-k&t=300s) Segment 2 (05:00 - 10:00)

tasks, but that doesn't disprove general intelligence. When you think about it, it just reflects practical constraints like finite memory, time, and compute. Even a general system still has to focus its resources. Specialization is a consequence of learning efficiency, not evidence that the intelligence isn't general. So this is where Demis brings up the no free lunch theorem, and this is important to understand, because I didn't really know what this was before. Basically, what it says is that there is no single learning algorithm that performs best on all possible problems. If you optimize a system to be amazing at one type of task, you're necessarily making trade-offs that leave it worse at other types of tasks. It's not a limitation of current technology; it's essentially a mathematical law. Think about it like this: if you design a vehicle that's perfect for driving on roads, it's going to be terrible at flying, and if you optimize a vehicle for flying, it's going to be terrible at driving. You can't have maximum efficiency at both in the same system. And if you've ever seen one of those cars that can also drive on water, you know it's not really great at either. Now, if you think about what this actually means for intelligence, this is where it gets interesting, because Demis is actually acknowledging LeCun's point. He's saying, "Yes, you're right. In any practical finite system, there has to be some degree of specialization, because you can't escape the no free lunch theorem." But, and this is his key counter, that doesn't mean the system isn't general. It just means that it's not universal. So a human brain is specialized around the types of problems that humans encounter: physical-world navigation, social interaction, pattern recognition. But within that specialization, it's incredibly general-purpose.
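To make the no free lunch idea concrete, here's a tiny Python sketch. This is my own toy construction, not anything from the video or from Hassabis's post: over the space of all possible target functions on a four-point input space, two completely different prediction rules end up with exactly the same average accuracy on the held-out input.

```python
from itertools import product

# No-free-lunch toy demo: averaged over ALL possible target functions on a
# finite input space, every fixed prediction rule has the same expected
# accuracy on unseen inputs.

inputs = list(product([0, 1], repeat=2))   # 4 possible 2-bit inputs
train, test = inputs[:3], inputs[3]        # hold out one input

def average_test_accuracy(predictor):
    """Average accuracy on the held-out input over all 2^4 target functions."""
    hits = total = 0
    for outputs in product([0, 1], repeat=4):   # every possible labeling
        target = dict(zip(inputs, outputs))
        train_data = {x: target[x] for x in train}
        hits += predictor(train_data, test) == target[test]
        total += 1
    return hits / total

# Two very different "learners": one predicts the majority training label,
# one ignores the data entirely and always predicts 0.
majority = lambda data, x: int(sum(data.values()) >= 2)
constant = lambda data, x: 0

print(average_test_accuracy(majority))  # 0.5
print(average_test_accuracy(constant))  # 0.5
```

Both rules score exactly 0.5, because for every possible training set, the held-out label is 0 in half of the target functions and 1 in the other half; no algorithm can beat that average without assumptions about which problems it will face.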
We can learn languages, mathematics, music, sports, programming, an enormous range of tasks. So yes, we're specialized, but we're specialized toward generality, if that makes sense. This is why the terminology matters so much. When we talk about building AGI, I think some people get confused about whether we're trying to build something that's optimal at every possible task in the universe. When you think about it, that's basically impossible according to the no free lunch theorem. What we're aiming for with AGI is a system that, like humans, has broad learning capabilities within a practical architecture. It's not going to be the best at every single task, but it should be able to learn and adapt across many domains. The specialization comes from the architecture and training: what types of data it processes, how it represents information. But the generality comes from its ability to flexibly apply that architecture to diverse problems. Now, Demis continues on the topic of generality, and this is where he writes: "But the point about generality is that, in theory, in the Turing machine sense, the architecture of such a general system is capable of learning anything computable given enough time, memory, and data, and the human brain and AI foundation models are approximate Turing machines." So this is where Demis brings out the heavy artillery, the Turing machine argument, and it's crucial to why he thinks general intelligence is real. Now, a Turing machine, for those of you who may not know, is a theoretical model of computation from the 1930s.
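Since the whole argument leans on Turing machines, here's a minimal toy simulator (an illustrative sketch, not anything from Hassabis's post). The point is that one fixed execution loop can run any machine you hand it as a transition table; the "program" is just data, which is the kind of architectural generality Demis is appealing to.

```python
# Minimal Turing machine simulator: one fixed loop runs ANY machine
# described by a transition table. The example machine below simply flips
# every bit on the tape and halts (a toy program chosen for brevity).

def run(transitions, tape, state="start", head=0, max_steps=1000):
    """transitions: (state, symbol) -> (new_state, symbol_to_write, move)."""
    tape = dict(enumerate(tape))            # sparse tape, "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Bit-flipper: walk right, inverting 0 <-> 1, halt at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run(flip, "10110"))  # 01001
```

Swap in a different transition table and the same `run` loop computes something else entirely; that separation of fixed machinery from learnable/loadable program is the universality claim in miniature.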
Now, what makes it special is that it's universal, meaning that with enough time, memory, and the right program, it can compute anything that's computable. It's the foundation of all modern computing. So Hassabis's point here is that human brains are essentially approximate Turing machines: we can, in theory, learn and solve any problem that can be solved through computation, given enough time and information. We're not perfectly efficient, and we have limitations, but the underlying architecture is general-purpose. That's why he adds that foundation models like GPT, Claude, or Gemini are also approximate Turing machines, and this is massive. He's saying that these systems have roughly the same kind of generality as human brains. Of course they're not optimized for every task, but they have the architectural capability to learn an enormous range of tasks, and we've already seen this a little: the same base model that writes poetry can code, analyze data, play chess, and help with medical diagnosis. That's not really narrow AI anymore; it's demonstrating general capabilities, even if it's not perfect at everything. So this is where Demis is pressing the point that yes, there are practical constraints and specializations, but the fundamental architecture is actually general. The fact that both human brains and modern AI systems approximate Turing machines means they possess genuine generality: the ability to learn and perform across a vast range of tasks. It's not an illusion, and it's not just specialized modules pretending to be general; it's a genuinely flexible, general-purpose learning machine. The limitations we see aren't architectural; they're just practical constraints like time, memory, and training data. And of course, as we scale these systems up, we're probably going to see that generality express itself more, and

### [10:00](https://www.youtube.com/watch?v=nflOt5HOE-k&t=600s) Segment 3 (10:00 - 15:00)

that's what DeepMind is saying here. Then, of course, Demis Hassabis responds to Yann's comments about chess players: it's amazing that humans could have invented chess in the first place, let alone that someone could become as brilliant at it as Magnus Carlsen. Magnus may not be strictly optimal; after all, he has finite memory and limited time to make a decision. But it's incredible what we can do with our brains, given that they evolved for hunting and gathering. This is a really cool point, because he's basically saying, look, humans evolved for hunter-gatherer life, those typical caveman activities. But look at what has happened: we have brains that can play games like chess even though we don't have exhaustive decision trees in our heads, infinite memory, or the ability to run simulations of every single game. It's really incredible, because Yann is saying that humans are terrible at chess compared to computers, and he's using that as evidence that we're specialized, not general. But Demis is saying, wait a minute, think about it: we invented chess from scratch. We invented this complex strategic game with no evolutionary pressure to do so, and yet we created it and some of us are genuinely good at it (well, not me, but some people are), to the point where they're basically superhuman. So he's arguing that generality really does exist in humans, because we can adapt to a broad range of tasks. He's not saying that Magnus is a specialized chess machine, although he might be; he's saying that the fact that a human brain, which evolved for a completely different purpose, can reach that level of chess mastery is pretty incredible. It's like taking a bicycle and somehow getting it to fly. It's not going to be as optimal as an airplane.
But the fact that you can actually make it fly shows extraordinary adaptability. We took brains optimized for recognizing predators and finding food and repurposed them to calculate chess positions 15 moves deep. That's pretty much the definition of general-purpose intelligence. Now, of course, Elon Musk comes in and says, "Yes, Demis Hassabis is right." I think you have to take that one with a grain of salt, because if you've followed the history between Elon Musk and Yann LeCun, they've been arguing back and forth about who's a real AI researcher and who does real research. It's been pretty entertaining to watch. And then, a few hours after this post, Yann LeCun actually responded with a long post of his own, where he says: "I think the disagreement is largely one of vocabulary. I object to the use of general to designate human-level, because humans are extremely specialized. You may disagree that the human mind is specialized, but it really is. It's not just a question of theoretical power, but also a question of practical efficiency." So what LeCun is saying here is that this is mostly an argument about words. When he says that humans aren't general, he means that we are extremely limited compared to what's theoretically possible. Yes, a human brain could technically compute anything given enough time and pen and paper, but that doesn't make us general, in his words; it makes us incredibly specialized at a tiny slice of possible problems. Now, his full post gets a bit longer, so I'm just going to simplify what he's saying. He uses a neural network analogy: think of it like this. Technically, a simple two-layer neural network can learn any pattern, but in practice it would need a ridiculous number of neurons to do anything useful. That's why we use deep learning with many layers instead.
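That two-layer claim can actually be demonstrated. The sketch below is a standard textbook-style lookup-table construction (my own illustration, not something from LeCun's post): a network with one hidden layer of threshold units can represent any Boolean function, but the generic construction needs one hidden unit per input pattern, i.e. 2^n units, which is exactly the "ridiculous number of neurons" he's talking about.

```python
import numpy as np

# One hidden layer of threshold units can represent ANY Boolean function of
# n inputs -- but this generic construction uses 2^n hidden units, one
# pattern-detector per possible input. Possible in principle, wildly
# inefficient in practice.

def build_network(truth_table):
    """truth_table: dict mapping n-bit input tuples to 0/1 outputs."""
    patterns = list(truth_table)
    # Hidden unit i fires only on its own pattern: +1 weight where the
    # pattern has a 1, -1 where it has a 0, threshold = number of 1-bits.
    W = np.array([[1 if bit else -1 for bit in p] for p in patterns])
    b = np.array([-sum(p) for p in patterns])
    v = np.array([truth_table[p] for p in patterns])  # output weights

    def forward(x):
        hidden = (W @ np.asarray(x) + b >= 0).astype(int)  # one unit fires
        return int(v @ hidden > 0)
    return forward

# XOR: famously not linearly separable, trivial with 2^2 = 4 hidden units.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
net = build_network(xor)
print([net(x) for x in xor])  # [0, 1, 1, 0]
```

The deep-learning point is that stacking layers lets you reuse intermediate features instead of memorizing every pattern, which is why depth buys efficiency that width alone can't.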
And humans are the same: technically, we could solve lots of problems, but we're wildly inefficient at most of them. Then he uses a vision example. He says, here's a concrete case: your optic nerve has about 1 million nerve fibers, and each one is either firing or not, so it's basically 1 million on-and-off switches. Now, how many possible ways could your brain process that visual information? The math is insane: there are 2 to the power of (2 to the power of a million) possible functions of that input, a number so large it's basically incomprehensible; even its digit count would take hundreds of thousands of digits to write down. Meanwhile, your entire brain, all 100 billion neurons and 100 trillion or so connections, can represent only on the order of 2 to the power of its number of connections, call it 2 to the 100 trillion, distinct patterns. That sounds enormous, but compared to the total space of possibilities, it's less than a single grain of sand against the observable universe. We can only process an infinitesimally small fraction of possible visual patterns. As Einstein said, the most incomprehensible thing about the world is that the world is comprehensible. So what Yann is essentially saying is that it's actually miraculous that humans can understand anything about the universe, because out of all the ways reality could be organized, most of which would look like complete random noise to us, we happened to evolve brains that can comprehend a tiny sliver of it. And everything we can't understand, what do we call it? We just call it entropy or randomness, and then we ignore it. Most of the universe is incomprehensible to us because our brains are so specialized, but of course we don't notice, because it's our own brain doing the noticing. So Yann is essentially saying that yes, humans are technically Turing machines in theory, but in practice

### [15:00](https://www.youtube.com/watch?v=nflOt5HOE-k&t=900s) Segment 4 (15:00 - 15:39)

we're just specialized for an incredibly narrow range of problems, and we don't realize it because we can't even perceive the problems we're bad at. Calling that general intelligence is misleading. So when you think about the argument between these two, it's super interesting. We've got Demis saying that we're general, just with practical real-world constraints, and Yann saying those constraints are so limiting that calling it general is wrong. It's basically an argument over whether a Swiss Army knife is a general-purpose tool: Demis says yes, because it does many things, and Yann says no, because compared to all possible tools in the universe, it can't really do much. Let me know where you guys stand on this, because I think it's super interesting.
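As a footnote to LeCun's counting argument from earlier (the 1 million optic-nerve fibers): the number of distinct Boolean functions on n binary inputs is 2^(2^n), because there are 2^n possible input patterns and each can independently map to 0 or 1. A few lines of Python show how fast this double exponential blows up:

```python
# Count of distinct Boolean functions on n binary inputs: 2^(2^n).
# This is the double-exponential growth behind LeCun's vision example.

def num_boolean_functions(n):
    return 2 ** (2 ** n)

for n in [1, 2, 3, 4, 5]:
    print(n, num_boolean_functions(n))
# 1 4
# 2 16
# 3 256
# 4 65536
# 5 4294967296

# For n = 1,000,000 inputs the count is 2^(2^1000000): a number whose
# digit count itself runs to roughly 300,000 digits.
```

Already at n = 5 the count exceeds four billion, so at a million inputs no physical system could come close to representing all of them; that asymmetry is the whole of LeCun's efficiency argument.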

---
*Source: https://ekstraktznaniy.ru/video/12482*