# Microsoft CEO's Stunning Statement "AGI Is Nonsense"

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=BuYGFhkiPLI
- **Date:** 04.03.2025
- **Duration:** 18:36
- **Views:** 56,276
- **Source:** https://ekstraktznaniy.ru/video/13250

## Description

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

00:00 Intro
00:24 Satya Nadella on AGI Definition
00:48 Cognitive Labor and AGI
02:09 Dynamic Nature of AGI
02:32 Sam Altman's AGI Perspective
03:11 Nadella on AGI and Economic Growth
04:04 Real-world Value vs Benchmark Hacking
06:01 AI Infrastructure Investment Risks
06:14 Supply and Demand in AI
07:08 OpenAI’s AGI Levels Explained
08:02 CEO Predictions on AGI Timeline
09:17 Andrew Ng's View on AGI Timelines
10:13 AGI Complexity Explained
13:14 Missing Components for AGI
14:24 Comparing Human and AI Intelligence
16:06 Narrow AI vs. AGI
17:04 OpenAI’s Superintelligence Goals
18:20 Conclusion and Final Thoughts

Links From Today's Video:
https://x.com/tsarnick/status/1882525450955886818 (Demis Hassabis)
https://www.reddit.com/r/singularity/comments/1gp2o2m/anthropics_dario_amodei_says_unless_something/ (Dario Amodei)
Andrew Ng

## Transcript

### Intro [0:00]

So Satya Nadella actually made a statement on the Dwarkesh Patel podcast where he said something that ruffled some feathers. He stated that the common definition of AGI is quite useless, and he thinks the real definition is a little bit different. So in this video we'll dive into exactly what he thinks about AGI and what the entire industry has to say in reaction to that. So let's take a look at what he

### Satya Nadella on AGI Definition [0:24]

says in this first clip here, where he basically says that any definition of AGI would be constantly changing in practice. One thing I'm not sure about, hearing your answers on different questions, is whether you think AGI is a thing, in the sense of: will there be a thing which automates, at least starting with, all cognitive labor, like anything anybody can do on a computer? See, this is where my problem with the

### Cognitive Labor and AGI [0:48]

definitions of how people talk about it is: cognitive labor is not a static thing, right? There is cognitive labor today, but if I have an inbox that is managing all my agents, is that new cognitive labor? So today's cognitive labor may be automated, but what about the new cognitive labor that gets created? Both of those things have to be thought of, right? That's the shifting. That's why the distinction I make, at least in my head, is: don't conflate knowledge worker with knowledge work. The knowledge work of today could probably be automated. Who said my life's goal is to triage my email? Let an AI agent triage my email, but after having triaged my email, give me a higher-level cognitive labor task of, hey, these are the three drafts I really want you to review. That's a different abstraction. And so if you didn't understand what he said there, he basically said that, look, this AGI benchmark thing doesn't really make sense. One of the key things he talks about here is that cognitive labor isn't something that is static; cognitive labor is constantly dynamic, and what that means is essentially that the cognitive labor of today could be very different from the cognitive labor of tomorrow, and that

### Dynamic Nature of AGI [2:09]

also means that essentially we need to be analyzing what kind of tasks need to be automated. And of course that basically means that if AGI is something that automates all tasks, that target is going to be constantly changing, so the definition of AGI is also going to be constantly changing. I think it's really important to make that distinction, because AGI is genuinely a vague term, and I would say that across the board we

### Sam Altman's AGI Perspective [2:32]

have different definitions for different people now. And I will say that Sam Altman does have his own definition here, which is pretty similar to Satya Nadella's in the sense that it needs to provide some real-world value. I think at this point we're kind of close enough that the precise definition of AGI matters. You know, by the time you have a world expert in every field working together tirelessly, I think that's beyond what most people would consider AGI. So rather than talk about when we're going to get there or whatever (sorry, that was really inarticulate), I would say I think we'll get to something in the next couple of years that many people will look at and say, I really didn't think a computer was going to do that. Now, Satya Nadella wasn't done;

### Nadella on AGI and Economic Growth [3:11]

we also have another area which Satya Nadella spoke about: the fact that if we want to look at AGI as an actual definition, his definition for AGI is rather different. He basically says that, look, if we want to call ourselves the country or the company that created AGI, we'll know we have AGI when we have 10% economic growth year on year. I think the reason he's made that distinction is because so many times AI companies are focused on benchmark hacking, and of course there are incentives to do that so that people can see how good the model is, but what companies aren't focused on is bringing real-world value to real-world customers and real-world companies. If you can do that, then those metrics are the ones that matter more, and that's why I think this benchmark of 10% economic growth matters more than just a vague definition of something that can automate a lot of tasks. Us self-claiming some AGI

### Real-world Value vs Benchmark Hacking [4:04]

milestone, that's just nonsensical benchmark hacking to me. The real benchmark is the world growing at 10%. The world economy is 100 trillion or something; if the world grew at 10%, that's like an extra 10 trillion in value produced every single year. If that is the case, it seems like 80 billion is a lot of money. Shouldn't you be doing like 800 billion if you really think in a couple of years we could be growing the world economy at this rate, and the key bottleneck would be whether you have the compute necessary to deploy these AIs to do all this work? The classic supply-side view is: let me build it and they'll come, right? I mean, that's an argument, and after all, we've done that, we've taken enough risk to go do it, but at some point the supply and demand have to map. You can go off the rails completely when you're all hyping yourself up on the supply side versus really understanding how to translate that into real value to customers. You're not going to say they have to symmetrically meet at any given point in time, but you need to have existence proof that you are able to parlay yesterday's, let's call it, capital into today's demand, so that then you can again invest, maybe even exponentially, knowing that you're not going to be completely rate-mismatched. And so what he was actually talking about there is the fact that right now these companies are spending billions of dollars on AI infrastructure, and he's arguing that, look, whilst yes, it makes sense to spend money on AI infrastructure, we have to be wary of the fact that maybe, just maybe, we could potentially be overspending, and we need to at least have some proof that the AI we currently have is going to provide value to the customers of tomorrow, so that when we're building out all of this infrastructure we're not going to be in a situation where companies have overspent and we have too little demand for what we have, I think.
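The back-of-the-envelope numbers in that exchange are easy to sanity-check. The figures below are just the round numbers quoted in the clip (a ~$100 trillion world economy, the ~$80 billion versus $800 billion capex retort), not exact data:

```python
# Sanity check of the round numbers quoted in the clip, in billions of USD.
world_gdp = 100_000        # "the world economy is 100 trillion or something"
growth_rate = 0.10         # the 10% year-on-year growth Nadella proposes as the real benchmark
extra_value = world_gdp * growth_rate
print(extra_value)         # prints 10000.0 -> $10 trillion of new value per year

capex_now = 80             # the ~$80 billion figure raised in the interview
capex_bold = 800           # the "shouldn't you be doing 800 billion" counter
print(capex_bold // capex_now)  # prints 10 -> the tenfold jump being suggested
```

So the interviewer's point is simply that the proposed payoff (an extra $10 trillion per year) dwarfs either level of spend by orders of magnitude.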

### AI Infrastructure Investment Risks [6:01]

It does make sense, I think, for companies to have the build-out now, considering the implications of AGI and ASI, but it will be interesting to see exactly how things move forward with regard to investments. I

### Supply and Demand in AI [6:14]

wonder if there's a contradiction in these two different viewpoints, because, look, one of the things you've done wonderfully is make these early bets: you invested in OpenAI in 2019, even before there was Copilot or any applications. If you look at the Industrial Revolution, those six to ten percent build-outs of railways and whatever, many of those were not like, we've got revenue from the tickets; a lot of money was lost. That's true. So if you really think there's some potential here to 10x or 5x the growth rate of the world, shouldn't you just go crazy and do the hundreds of billions of dollars of compute? It's not about building compute; it's about building compute that can actually help me not only train the next big model but also serve it, and until you do those two things, you're not going to be in a position to really take advantage of even your investment. So you have to have a complete thought, not just one thing that

### OpenAI’s AGI Levels Explained [7:08]

you're thinking about. Now, if we go back to the AGI definition, I do think it's always good to refer to OpenAI's internal documentation that was leaked through Bloomberg; essentially this is what they have. I'm pretty sure you're familiar with this, but currently they are at level three, which is of course agents, systems that can take actions, and I think by next year the agents are going to get really good when we have a bunch of different workflows. That's something I'm personally working on in my AI Academy (it's going to be linked in the description); I'm working on tons and tons of AI agent workflows that actually allow you to automate many different tasks. But the point here is that we have five levels, and level four is of course Innovators, AI that can aid invention, and level five is probably going to be many AI systems together doing the work of an organization. And you might be asking, okay, so we have the AGI definition; how do other CEOs think about that, and
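For reference, the five levels from that reported framework can be jotted down as a small lookup table. The level names follow the Bloomberg report mentioned above; the one-line descriptions are paraphrased summaries, not official wording:

```python
# The five capability levels attributed to OpenAI in the Bloomberg report
# discussed above; descriptions are paraphrased, not official wording.
OPENAI_LEVELS = {
    1: ("Chatbots", "AI with conversational language"),
    2: ("Reasoners", "human-level problem solving"),
    3: ("Agents", "systems that can take actions"),
    4: ("Innovators", "AI that can aid invention"),
    5: ("Organizations", "AI that can do the work of an organization"),
}

current = 3  # the level the video places today's systems at
print(OPENAI_LEVELS[current][0])  # prints Agents
```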

### CEO Predictions on AGI Timeline [8:02]

what do they think about AGI? Well, recently on a podcast the Anthropic CEO actually spoke about this, and he said he thinks that AGI will be here by 2027. Extrapolate the curves that we've had so far, right? If you say, well, we're starting to get to like PhD level, and last year we were at undergraduate level, and the year before we were at like the level of a high school student. Again, you can quibble about at what tasks and for what, and we're still missing modalities, but those are being added: computer use was added, image generation has been added. If you just kind of, and this is totally unscientific, but if you just kind of eyeball the rate at which these capabilities are increasing, it does make you think that we'll get there by 2026 or 2027. Again, lots of things could derail it: we could run out of data, we might not be able to scale clusters as much as we want, maybe Taiwan gets blown up or something and then we can't produce as many GPUs as we want. There are all kinds of things that could derail the whole process, so I don't fully believe the straight-line extrapolation, but if you do, we'll get there in 2026 or 2027. Now,

### Andrew Ng's View on AGI Timelines [9:17]

interestingly, while doing research for this video I actually found some people with very different opinions, and I don't say this to stir any kind of divisiveness, but I do find it very interesting that companies that are raising significant amounts of money often have shorter timelines than the ones that are more established. So, for example, companies like OpenAI are saying AGI is nearer, and not that I think AGI is farther away, but I do think their incentive is to say that AGI is close. Whereas people like Andrew Ng, the American computer scientist, actually say that AGI is quite far away. I think the standard definition of AGI is AI that could do any intellectual task a human can. So when we have AGI, AI should learn to drive a car, or learn to fly a plane, or learn to write a PhD thesis in university. For that definition of AGI I think we're many decades away, maybe even longer. I

### AGI Complexity Explained [10:13]

hope we get there in our lifetime, but I'm not sure. One of the reasons there's hype about AGI arriving in just a few years is that there are some companies using very nonstandard definitions of AGI, and if you redefine AGI to be something less powerful, then of course we could get there in one or two years. But using the standard definition of AGI, AI that could do any intellectual task a human can, I think we're still many decades away, though it'd be great if we manage to get there. And remember how I was talking about what other CEOs say about AGI? We have Demis Hassabis saying it's actually 3 to 5 years away. We've been working on this for more than 20-plus years, and we've had a consistent view about AGI being a system that's capable of exhibiting all the cognitive capabilities humans can. I think we're getting closer and closer, but we're still probably a handful of years away. Okay, and so what is it going to take to get there? The models today are pretty capable; of course we've all interacted with the language models, and now they're becoming multimodal, but I think there are still some missing attributes, things like reasoning, hierarchical planning, long-term memory. There are quite a few capabilities that the current systems, I would say, don't have. They're also not consistent across the board: they're very strong in some things, but still surprisingly weak and flawed in other areas. So you'd want an AGI to have pretty consistent, robust behavior across the board on all the cognitive tasks. And one thing that's clearly missing, and that I always had as a benchmark for AGI, is the ability for these systems to invent their own hypotheses or conjectures about science, not just prove existing ones. Of course it's extremely useful already to prove an existing math conjecture or something like that, or play a game of Go at world-champion level, but could a system invent Go? Could it come up with a new Riemann hypothesis, or with relativity, back in the days that Einstein did it, with the information that he had? I think today's systems are still pretty far away from having that kind of creative, inventive capability. Okay, so a couple of years away till we hit AGI? I would say probably like three to five... a key component of an AGI system, but probably not enough on its own. I think there are still two or three big innovations needed before we get to AGI, and that's why I'm more on a 10-year timescale than some of my colleagues and peers; some of our competitors have much shorter timelines than that, but that's my view. So Demis Hassabis actually said his timeline is 10 years, meaning companies like Google have a longer timeframe. But what about those who aren't at Google, and those who think AGI is much farther off? I'm always bringing up this interview clip, because at the AI Action Summit Yann LeCun basically says that we are missing a key component of AGI, and he kind of shares the same opinion as Demis Hassabis: that we need certain breakthroughs in order to actually get there. Now, of

### Missing Components for AGI [13:14]

course this isn't about 10% economic growth or not, but if we're actually going to look at what a system is going to be able to do, I think he provides the most realistic understanding of the difference between AI systems and what humans can do. He talks about how a cat can plan complex actions, a 10-year-old can clear up the dinner table and fill up the dishwasher without training, a 17-year-old can learn to drive a car in 20 hours of practice, and yet we have billions and billions of minutes of data from Teslas for self-driving and it still isn't perfect. When you compare that to what a human can do, a human can learn how to drive a car in a few hours, but an AI system with years' worth of data still isn't perfect, and that should tell us that we're clearly missing a key component. We're missing something really big, because never mind trying to reproduce human intelligence, we can't even reproduce cat intelligence or rat intelligence, let alone dog intelligence. They can do amazing feats; they understand the physical world. Any house cat can plan highly complex actions, and they have causal models of the world; some of them know how to open

### Comparing Human and AI Intelligence [14:24]

doors and taps and things of that type. And in humans, a 10-year-old can clear up the dinner table and fill up the dishwasher without training, zero-shot; the first time you ask a 10-year-old to do it, she will do it. Any 17-year-old can learn to drive a car in 20 hours of practice, but we still don't have robots that can act like a cat, we don't have domestic robots that can clear up the dinner table, and we don't have level-five self-driving cars, despite the fact that we have hundreds of thousands, if not millions, of hours of supervised training data. Okay, so that tells you we're missing something really big. Now, here's something that is probably a controversial opinion, and you guys might not understand what I'm saying, but I do think that we will have several areas of ASI before AGI, because I think AGI is a lot harder than people think. Although OpenAI have said that they know how to build it, I want to quickly talk about why I believe we'll probably have many areas of ASI before we even get to AGI. The thing is, for ASI you only need one thing to work extraordinarily well, but for AGI you need an entire system that works so well across anything that humans can do well, which is, when you think about it, pretty difficult. Now, if we refer back to the paper done by Google DeepMind, we can actually see what I'm talking about: the fact that we're probably going to get ASI before AGI has actually already happened in a variety of areas. You can see here that they talk about performance and generality. If we look at the narrow category, which is where we have a clearly scoped set of tasks, we already have level-five superhuman AI that outperforms 100% of humans. For example, we have AlphaFold, which is of course protein folding and understanding protein structures, and then of course we've got

### Narrow AI vs. AGI [16:06]

AlphaZero, which is the system that is able to play Go basically better than any human has ever played it; it's superhuman. Then of course we have Stockfish, which is the best chess player in the world, and that is essentially ASI for narrow AI. Now, narrow AI is of course something that is much easier, like I said, and I do think that certain companies are probably going to focus on the applications of this first, because it seems like it's much easier. If we go over to general AI, at the time the paper was written, competent AGI, which is measured against at least 50% of skilled adults, still hadn't been achieved. So it seems like this is something that is extremely hard for current AI systems to manage, and I do think the promise of AGI might be a little bit weaker than we think. Now, like I said before, you have to remember that OpenAI actually said, "We are now confident we know how to build AGI as we have traditionally understood it," which is of course a very big

### OpenAI’s Superintelligence Goals [17:04]

statement. If you are saying that you know exactly how to get to the architecture that can basically replicate human intelligence, that is a bold claim that you will have to back up with some kind of product or some kind of proof. And OpenAI have also said, which is why I think companies might start to focus on this, and why I don't think GPT-4.5 was that big of a deal, that they're going to focus on superintelligence instead. You can see right here it says, "We're beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we're here for the glorious future," and of course with superintelligence you can do anything else. So for me, with these companies that are trying to get to AGI, I don't know, maybe I have the wrong take on this, but I do think it wouldn't be surprising if we get some superhuman AI systems that are able to do just medical research, like a tool that's able to do specifically medical research, or certain pieces of scientific research, maybe biology, before we actually get an AI system that can drive your car, be in a humanoid robot, give you the best advice, write the best poems, and control computers. Maybe, who knows, we'll get it all at once, but it's definitely super intriguing, and for those of you who are skeptical of the progress, it's definitely worthwhile to look at how certain benchmarks are being crushed. I do feel like any benchmark we've had

### Conclusion and Final Thoughts [18:20]

that has been a struggle has pretty much been crushed by AI within two years. So let me know what you guys think about the AGI benchmark: do you think the 10% economic growth definition is better, or do you think the other one is better? With that being said, let me know what you guys think.
