This Is How You Know AGI Is Close...

TheAIGRID · 23.01.2026 · 19,947 views · 536 likes

Video description
Check out the Free Community: https://www.skool.com/theaigridcommunity 🐤 Follow Me on Twitter: https://twitter.com/TheAiGrid 🌐 Interested in AI Business: https://www.youtube.com/@TheAIGRIDAcademy Links From Today's Video: https://x.com/ShaneLegg/status/2014345509675155639 Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For Business Enquiries) contact@theaigrid.com Music Used: LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 CC BY-SA 4.0 · LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Contents (2 segments)

Segment 1 (00:00 - 05:00)

So Google said something recently on Twitter that has everyone thinking: have they achieved AGI? Let's talk about it. This is Shane Legg, the co-founder and chief AGI scientist at Google DeepMind, and his recent tweet has people in a frenzy as to whether or not Google has achieved AGI. And this is one of those statements that isn't really hype, and you'll understand exactly why that's the case the more I dive into the video. What I mean by this is the tweet he posted earlier today, about the fact that AGI is now on the horizon and will deeply transform many things, including the economy: "I'm looking to hire a senior economist, reporting directly to me, to lead a small team investigating post-AI economics. Job application and spec here." Now, the reason this is so profound, and most people don't realize it (most would look at this as just another standard tweet), is that you have to understand that Shane Legg is DeepMind's co-founder and chief AGI scientist. This isn't some peripheral figure making a speculative claim. This is someone who has actually been studying this for decades and has true, direct visibility into frontier development. So when he says AGI is now on the horizon, it carries weight, because he's seen the capabilities emerging firsthand. If you aren't sure what I mean by that, think about it like this: if you're someone in the labs at Google DeepMind, you'll know when it's time to hire someone to look into post-AGI economics, and you'll know when AGI is on the horizon, because you're on the front lines of AI research.
Now, considering that they've decided to hire someone for this (maybe it's early, maybe it's preventative), saying that AGI is on the horizon clearly indicates that timelines may be moving a little faster than anticipated. That does consistently feel like the theme if you're paying attention to AI, though it is something you eventually get used to. I think you have to understand that most AGI discourse focuses on technical capabilities or safety. Hiring a senior economist to investigate post-AGI economics means they're modeling scenarios where labor markets fundamentally restructure, traditional economic assumptions change, productivity tied to human labor breaks down, and capital allocation and value creation operate under completely different rules. So we're going to need new frameworks for distribution, taxation, and monetary policy. Remember, guys, this isn't hypothetical anymore. The fact that resources are actively being allocated now to study post-AI economics suggests that in some of their internal models, they're thinking we need to really look at how this affects things going forward. Think about it: you don't hire economists to study theoretical scenarios 50 years out. You do it when you need the frameworks ready within a few years, within the relevant planning horizon. Before everyone dives into "did they achieve AGI internally, is that the crazy statement?", he does clarify here. When someone asks what his best-guess time frame is when he says "on the horizon", he says there's a 50% chance of minimal AGI by 2028, as he's been saying publicly since 2009. And I think what we should understand here is that he says "minimal AGI". That's clearly not full AGI in the sense of an AI that's quite like a human, and Google's definition of AGI is an AI that's basically like a human.
They did actually publish a research paper where they spoke about the different levels of AGI, but for the most part, it comes down to "quite like a human". So the point I'm trying to make here, guys, is that a 50% chance of AGI by 2028 means it's only two years away. And the craziest thing is that this isn't some hype bro. This is a researcher, and he's been saying this since 2009. I remember watching clips in 2023 where they were also saying 2028. So it's quite likely that people start to take this more seriously, because this isn't a company with any incentive to lie about timelines or boast about capabilities to attract more investor funding. Google doesn't need to make extravagant statements. These are the same statements that have been made since 2009, and now it's 2026. You can see other Twitter users saying that Google is preparing for AGI with many other AI roles, and that this is coming quickly. Another user decided to post about how conventional economics, with utility and general equilibrium, simply breaks down when AGI arrives. You can see he posted a book called "The Last Economy", which is basically going to discuss how capitalism prior to AGI is the last economy where you have, I guess you could say, some kind of society that is somewhat fair. The problem is that after AGI, most people aren't going to accurately predict what it is. I know I've tried to make some predictions in the past, but I've realized there are so many unknown unknowns, so many second-order and third-order consequences, so many different things that you really just cannot fathom. So it's definitely very hard to predict, but there are, of course, some things that are very clear. The price of intelligence dropping to zero and AI being here clearly has some key

Segment 2 (05:00 - 09:00)

key implications. Now, I also decided to take a look at his tweet from 2024, and you can see he made a similar tweet about AGI getting closer back then, when AlphaGeometry had been doing some really cool stuff. So the IMO is basically where the best high school math students compete. Geometry problems at that level require insight, not just calculation: you need to construct auxiliary lines, spot non-obvious relationships, and chain together multiple theorems in multiple ways. That is the kind of reasoning that was basically considered human. And AlphaGeometry solved 25 out of 30 problems from past IMO exams, while the average gold medalist solves around 25.9. So it's pretty crazy. And he references struggling with those exact problems as a teenager in 1990. So think about it: he's lived through that entire experience, and he's saying, having known those problems and experienced them when they were first there, "I can now see just how far the technology has come, and I know that AGI is clearly on the horizon." Now, this is why I say this statement is probably a decent one: compare it to what other AI companies are saying. Dario Amodei and Sam Altman are both basically saying AGI is 2026 or 2027, but Demis is basically saying AGI is still 5 to 10 years away, because there are possibly one to two breakthroughs needed. So clearly, when we look at companies' incentives, of course AI companies have the incentive to say "it's around the corner, it's around the corner", pumping up their stock price. That doesn't mean they're completely wrong; I'm just saying they don't have any incentive to say it's further away. They have every incentive to say it's sooner than it is. And Demis, even in this interview (I'm not even sure if I want to play this), does say that AGI is probably 5 years away.
And I think we have to understand that it is contingent on certain breakthroughs. So it'll be interesting to see when we do get those. — We've been working on this for more than 20 years. Um, we've sort of had a consistent view about AGI being a system that's capable of exhibiting all the cognitive capabilities humans can. Um, and I think we're getting, you know, closer and closer, but I think we're still probably a handful of years away. — And the reason I think companies like Google and many AI CEOs don't like giving timelines is because 2 years, 3 years, it's not that far away, and if it does come out that they were wrong, of course they will lose credibility. It doesn't make sense to risk your credibility on a timeline that's very difficult to predict. AI progress is inherently difficult to predict because, like I said, there are unknown unknowns: companies could implode, supply chains could get disrupted, all these crazy different things. So it is very hard to predict the moment that AGI will occur. But if you want to predict that, I think it becomes easier when you actually think about the breakthroughs needed. In the second part, this is what he actually talks about, which makes much more sense, and I do believe we saw the first inklings of some of these breakthroughs earlier this year. — Okay. And so what is it going to take to get there? — So the models today are pretty capable. Of course, we've all interacted with the language models, and now they're becoming multimodal. I think there are still some missing attributes: things like reasoning, um, hierarchical planning, uh, long-term memory. Um, there are quite a few capabilities that, uh, the current systems, uh, I would say don't have. They're also not consistent across the board. You know, they're very strong in some things, but they're still surprisingly weak and flawed in other areas.
So, you'd want an AGI to have pretty consistent, robust behavior across the board, on all the cognitive tasks. And I think one thing that's clearly missing, and that I always had as a benchmark for AGI, is the ability for these systems to invent their own hypotheses or conjectures about science, not just prove existing ones. Of course, it's extremely useful already to prove an existing maths conjecture or something like that, or to play a game of Go at world-champion level. But could a system invent Go? Could it come up with a new Riemann hypothesis, or could it have come up with relativity, um, back in the days that Einstein did, with the information that he had? And I think today's systems are still pretty far away from having that kind of creative, uh, inventive capability. — Okay. So, a couple of years away till we hit AGI? — I think, um, you know, I would say probably like 3 to 5 years away. — But of course, I don't know. Who do you guys think is going to achieve AGI first? I have my bets on probably Google, because they seem the most competent across the board. But let me know which company you think is going to achieve it first.
