OpenAI just exposed the future of AI. Here's everything you need to know...

Nick Saraev · 03.12.2025


Video description
🔥 Join Maker School & get customer #1 guaranteed: https://skool.com/makerschool/about
📚 Watch my NEW 2026 Claude Code course: https://www.youtube.com/watch?v=QoQBzR1NIqI
🎙️ Listen to my silly podcast: www.youtube.com/@stackedpod

📚 Free multi-hour courses
→ Claude Code (4hr full course): https://www.youtube.com/watch?v=QoQBzR1NIqI
→ Vibe Coding w/ Antigravity (6hr full course): https://www.youtube.com/watch?v=gcuR_-rzlDw
→ Agentic Workflows (6hr full course): https://www.youtube.com/watch?v=MxyRjL7NG18
→ N8N (6hr full course, 890K+ views): https://www.youtube.com/watch?v=2GZ2SNXWK-c

Summary ⤵️
This video explores OpenAI’s prediction that we are facing a societal reset where AI will soon execute "centuries" of human labor in minutes. The speaker argues that the next decade will bring more drastic change than the last 150 years, rendering traditional concepts of work, career paths, and stability obsolete.

My software, tools, & deals (some give me kickbacks—thank you!)
🚀 Instantly: https://link.nicksaraev.com/instantly-short
📧 Anymailfinder: https://link.nicksaraev.com/amf-short
🤖 Apify: https://console.apify.com/sign-up (30% off with code 30NICKSARAEV)
🧑🏽‍💻 n8n: https://n8n.partnerlinks.io/h372ujv8cw80
📈 Rize: https://link.nicksaraev.com/rize-short (25% off with promo code NICK)

Follow me on other platforms 😈
📸 Instagram: https://www.instagram.com/nick_saraev
🕊️ Twitter/X: https://twitter.com/nicksaraev
🤙 Blog: https://nicksaraev.com

Why watch?
If this is your first view—hi, I’m Nick! TLDR: I spent six years building automated businesses with Make.com (most notably 1SecondCopy, a content company that hit 7 figures). Today a lot of people talk about automation, but I’ve noticed that very few have practical, real world success making money with it. So this channel is me chiming in and showing you what *real* systems that make *real* revenue look like. Hopefully I can help you improve your business, and in doing so, the rest of your life 🙏 Like, subscribe, and leave me a comment if you have a specific request! Thanks.

Chapters
00:00 Introduction
00:38 Overhang & spikiness
09:10 Turing test
14:39 Autonomy

Contents (4 segments)

Introduction

OpenAI published an article just last month all about the future of work and what AI developments mean for the economy, and I think a lot of people have missed it. So in this video I wanted to talk a little bit about that. Then I also wanted to show you guys three major concepts that you can use in order to insulate yourself against oncoming changes to the economy. So first of all, here's the source document. It's called "AI progress and recommendations." If you have yet to read this, I highly recommend it. It's only 7 minutes and 43 seconds to listen to, probably 5 to 10 minutes to read. The three concepts I want to talk about today are the concept of overhang, then the idea of the Turing test, and then finally the degree of agent autonomy that we're seeing and will continue to see over the coming years. So directly from the

Overhang & spikiness

article, most of the world still thinks about AI as chatbots and better search. But today, we have systems that can outperform the smartest humans at some of our most challenging intellectual competitions. Although AI systems are still spiky and face serious weaknesses, systems that can solve such hard problems seem more like 80% of the way to an AI researcher than 20% of the way. The gap between how most people are using AI and what AI is presently capable of is immense.

There are three main portions to this that I think are worth discussing. The first is that last section there: what AI is currently capable of versus what people think AI is capable of. This concept I'm referring to as the overhang. So to be abundantly clear, most people think that AI is capable of a fair bit even today. They think AI can do their homework, solve some coding problems. They think AI can maybe make a small discovery here or there. But what AI is really capable of, even with current levels of technology, is more like this: there is no fundamental reason why we could not tomorrow automate the vast majority of the knowledge economy. The AI capabilities are there. AI can do more or less everything now. AI can carry on conversations and sales calls, it could do email writing, it could do spreadsheet analysis, it could file our taxes for us. It could do a tremendous, tremendous amount. And the gap between AI's actual capabilities and people's ability to use AI and their understanding of it is what I'm referring to as this overhang.

So what does this mean? If AI development stopped tomorrow, we would still be discovering new use cases and applying AI to economize big chunks of the market for probably the next 20 or 30 years. I don't know if you guys have ever seen The Three-Body Problem. The whole idea there is basically that these aliens stop human technology from progressing. But despite the fact that they stopped theoretical physics from progressing or whatever, humanity was still able to make tremendous improvements in the application of technology. And so it's the difference between, sort of, the power of the theory, aka the models themselves, and then our ability to actually apply them. And that ability right now is maybe a quarter of the actual capabilities of these things.

And so what does that mean for us? Well, that means there are actually two major forces at work right now. The first is that AI models are getting smarter. Their base intelligence will continue to grow, and it'll probably continue to grow faster than anything we've seen, just because of the exponential returns on investment in this sort of technology. But the second is that we're also going to quickly see that overhang, or rather the degree to which people think AI is capable of things, also improve. And so we're both going to be significantly increasing the rate at which the intelligence is getting better, and we're going to be increasing the rate at which we figure out how to apply that intelligence to things. So things are going to change very quickly, very soon. And I'm saying this because I don't know that a lot of people are entirely ready for that.

The second idea is this concept of spikiness. Now, I don't know if spikiness is really well defined here, but the way that I see it is: if this circle here is the human capability, let's say, then maybe back in 2023, AI capabilities looked kind of like this. Okay. And why don't we actually draw this with a different color?
That way it'll be a little clearer what's AI and what's human. What I mean by this orange circle is: this is what a human can do. Okay. Now, if we duplicate that circle in 2024, what I think we quickly realized is that, hey, you know, there were some domains in which AI actually did an interestingly good job. And then in the vast majority of other domains, AI sucked and humans were better at it. Well, then what happened in 2025 is that the size of this continued to grow. And what you're seeing is that there are some areas where AI still vastly underperforms humans, but there are also a lot of areas in which AI is starting to vastly outperform humans.

What we're going to see very quickly is that these isolated edge cases people are picking up, like a couple of months ago it was how many R's are in the word strawberry, or, you know, AI image models that showed too many fingers or hands, or there was a weird sheen on images and stuff like that, all of these cherry-picked examples that people are choosing, where AI performance is actually worse than humans, are very quickly going to go away. You know why? Because even the worst performance of this spikiness will be significantly better than human performance at basically everything. Okay? And you're going to find that as model technology continues to improve, and then you can apply the model technology itself to making itself smarter, you're going to get some things that look really, really weird. Okay? You're going to get some domains where AI is so good that in a few seconds of computation, this model, or some future instantiation of a model, will be able to do more work than tens of thousands of humans will over the course of their entire lives.

So this idea of spikiness is important because most people are still judging AI by its least powerful components, where there are obviously domains in which it is better than the vast majority of all human beings on planet Earth. If you look at its math performance on recent tests, frontier large language models with reasoning vastly outperform like 99.99999% of human beings, or something crazy like that. There are three humans on planet Earth right now who can compete with its ability to perform some specific formula, and that's that. And people use that as sort of their poster child for why human beings are more intelligent than these things. But reality doesn't care about human beings. Human beings are just a blip on the radar of, you know, the technological continuum. Okay. And so in our case, what we're going to see is that very quickly these technologies are going to become much faster and much, much more powerful than we are at most things.

The final point I'm going to make still has to do with spikiness, but it's about the distribution of AI skills. I don't know if you guys have ever played, you know, video games where you pick a character or something and then there's all these different skills. There's your intelligence skill over here, your strength, your dex skill over here, and whatever, you know, a couple of other skills. So, kind of like this spiky overhang idea. Okay, current AI models are heavily concentrated in one specific field. And so if I were to show you guys what this looks like here, it would probably be oriented all the way over here to this one specific thing. What are these int, dex, and strength skills in reality? Well, in reality, the main skill that models are getting really good at is coding. Okay?
And there are some other skills, like, I don't know, math, and maybe reasoning and stuff like that. But all of these large frontier model labs are investing, like, 100% of their resources not to actually make the models better at math or reasoning. They're investing almost all of their resources in just making them better at coding. And a lot of people might be wondering: why don't we, you know, take the current levels of technology and then apply them to things like, I don't know, protein folding, like we saw just a couple of years ago, or, I don't know, maybe counseling or psychotherapy or whatnot, things that we can actually use to improve the quality of life today. Maybe, I don't know, even art. I mean, we made developments in art, but people aren't really putting their fullest intellectual capital into solving this problem. Well, you know why? The reason why is because the next generation of AI model intelligence is almost entirely dependent on our ability to program more complex and more intricate models.

So anyway, to make a long story short, these models are sort of heavily overallocated toward the ability to code, and also, I should say, the ability to research new coding approaches, specifically because of what these labs want to do: they don't really want to make a generally capable AI that can uplift all human beings, or whatever the heck their charter is, at least not right now. What they want to do is just make a really intelligent AI insofar as it can make more intelligent AI afterwards. And the whole reason is, you know, if this is the pace of technology over time, okay, and, I don't know, man, this is where we discovered fire or something, and then over here is where we had the Industrial Revolution and, you know, steam engines and factories, and then over here was the atomic age, and then over here, you know, is the internet. What they want to do is make sure that the future slope of this line is basically just vertical, and the way you do so is you increase the pace of research and R&D. So by solving this one problem, they aim to solve all problems. This other idea plays into

Turing test

that, and that's the concept of the Turing test. So directly from the article: when the popular conception of the Turing test went whooshing by, many of us thought it was a little strange how much daily life just kept going. This was a milestone people had talked about for decades. It felt impossibly out of reach. Then all of a sudden it felt close. We were on the other side. We got some great new products, but not much about the world changed, even though computers can now converse and think about hard problems in natural language.

Now, if you guys don't know what the Turing test is, to make a long story short: if you are just chatting with some text-based interface, okay, can you tell whether it's a human being or a machine? Up until quite recently, you could almost always tell. It would almost always be "okay, that's definitely a machine," "that's definitely a person," because the intelligence of these, you know, natural language products was just so different. Right? Nowadays, though, it is extraordinarily difficult. And we're seeing the implications play out across academic institutions. We're seeing this play out across publishing. We're seeing this basically everywhere.

So I don't really care too much about the Turing test specifically, because it's really just a thought experiment. Instead, I just want to talk about intelligence on a big continuum. Okay. So this is intelligence. Okay. Now, intelligence here is maybe just some general term. You know, I don't want to use the word IQ because I think that's pretty heated. But if you think about it, this over here might be like an ant, then, I don't know, a fish. Over here you might be at like a monkey. And human. Now, I think most people, when they think about intelligence, kind of zoom in on the graph so it looks like this: oh yeah, ants down here, fish here, monkeys here, humans over here. And they're like, that's great, right? We're up at the very top. But the reality is that this continuum goes a lot higher up, in all likelihood, than anybody here probably thinks. Like, a human being, at least the way that I've drawn this, is barely even halfway up the graph. Maybe it's like a third of the way up the graph if we just look at the y-axis.

And so basically what has occurred is, you know, the amount of time it took for models to get smart like an ant, and then get smart like a fish, a monkey... if you just look at the y-axis here, it's a fair amount of time, right? In order for a model to get this smart, it might have taken us, I don't know, basically the last 80 years of development. Well, the amount of time it's going to take for the model to double its intelligence and blaze past human functioning (and if we talk about future superintelligence, because that's what these things are when they're smarter than us, they're superintelligences) might actually only take a fraction of that total amount of time, right? Instead of it taking 80 years or something, this could take like four or five years. We could be at the point where we're essentially doubling the intelligence of a thing in like four or five years. And that's not even really taking into account the recursive nature of it. So the reason I'm mentioning this is because they use this analogy of "this is a milestone people have talked about for decades," this sort of human level of intelligence.
But when you zoom out and look at it as an exponential, it takes forever to get through the first, like, 90% of the work, and then after that, boom, it takes off. Human beings are just one tiny little step on this ladder of intelligence, really, if we're being real. And there's no fundamental limit or reason to think why future superintelligences won't be able to just bootstrap their growth.

If you think about human beings, we're pretty limited in how we can get smarter, right? Like, in some hypothetical intelligence measure, let's say you have 100 intelligence units. And I'm again avoiding the term IQ because it's so intertwined with so many problems. 100 intelligence units. You can bump this up throughout the course of your life by maybe, I don't know, 10 positively. And then if you make bad lifestyle choices and don't take care of yourself, probably 10 negatively. Like, you can bump it up by 10 by doing things like, I don't know, exercising, right? By having a healthy diet, by, you know, maintaining some sort of intellectual stimulation, right? All these different things that you can do to make yourself smarter. But despite the fact that this actually takes, I don't know, half of your goddamn day, half of the 16 hours that you have available to actually do something with your waking time, that might only improve your actual intelligence by 10%.

Well, guess what? Future models do not have that sort of limitation. Future models can actually rewrite both their software and their hardware. And that's kind of what we're pushing them towards. So even if today one of these models has, let's say, a hypothetical 100 intelligence units, its total upper bound, aka how much it could hypothetically make itself smarter (or dumber, I guess, but I don't see any reason why it would do that), isn't just 10, you know. It might be like plus 1,000. This may be like some sort of log scale. So there's no fundamental limit like there is with people, right? Like, we could, I don't know, you could pop an Adderall or you could drink a ton of coffee or you could really try and dial in, and you get a few percentage points smarter temporarily. But these models can realistically rewrite the very infrastructure that they are built on to make themselves faster and more intelligent.

And so what we thought was some super far-off, scary, unachievable thing has really occurred in more or less the blink of an eye. And future models are going to get smarter and smarter. One final note to this: I'll say, if this was an average human, you know, this right over here might be like Einstein. I want you to look at the difference in intelligence on this graph between an average human and Einstein. AI could make that leap in literally less than a year. And I think in many domains, in many instances, it probably

Autonomy

has. The last major thing I want to talk about, direct from the article: in just a few years, AI has gone from being only able to do tasks (in the realm of software engineering specifically) that a person can do in a few seconds, to tasks that take a person more than an hour. We expect to have systems that can do tasks that take a person days or weeks soon. We do not know how to think about systems that can do tasks that would take a person centuries.

Now, the very fact that OpenAI is mentioning this, I think, is pretty indicative of what they've seen and what they've been working with. I had a thought the other day. I think we've probably all used some flavor of agentic coding platform before, right? Probably Antigravity or VS Code or Cursor or whatever. Look at how fast that's running. Okay, when I run an agentic workflow, it operates at a level of, I don't know, maybe... when I make a prompt, it'll do a step in like 5 seconds or something, and then I'll see an output. When we do this, we're communicating via API. We're naturally limited by the compute that OpenAI or Anthropic or these other providers give to us, right? And so we think that these models are fast already, but imagine if you were in their headquarters and they allocated a thousand H200 GPUs or whatever just to one single query. You would see that exact same reasoning loop, that five-second period of time that it takes for your model to get back to you or whatever, occur in an instant. When I ask my AI model, "Hey, can you map me out a plan for making workflows that upload other flows to trigger.dev instead of Modal?", I see this sort of slow, laborious intelligence, which, by the way, is still way faster than anything that I could realistically do. They have systems that function at a rate potentially hundreds if not thousands of times faster than this. Imagine if you said this and you immediately, within 0.1 seconds, got a reply back. Okay, that's what we're dealing with here. That's why they're saying things like "what happens if these systems can function for centuries?" Well, imagine if you could continue running a system like that for even, like, 3 hours.

The thing about AI is that there are two parts of it that I think we really need to understand. The first is that models have intelligence, but they also have speed. Even if hypothetically you had a model that was just as smart as a person, or hell, maybe even a little dumber than a person, if you could run that at 100 times the speed, that thing could do more economically productive work than Einstein could in a lifetime. And it could do that in maybe a few hours or so, simply because of the nature of things like bootstrapping, the ability to modify your own architecture to become more intelligent, the ability to instantaneously send 50 API calls, get a bunch of information, and then send it back to you. Okay, so the intelligence itself doesn't even have to go up. The intelligence could be the same as humanity's, but if the speed is up by 100 times, these systems are capable of going so much faster and being so much more productive. I imagine the reason why they're saying this is because right now we're already at the point where probably the vast majority of all computer programming on planet Earth is entirely automatable with software systems like this.
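To make that "send 50 API calls at once" point a bit more concrete, here is a minimal sketch (not from the video) of firing many model calls concurrently with the OpenAI Python SDK. The model name and the prompts are placeholders I made up for illustration; the point is simply that a batch of independent calls costs roughly one round trip of wall-clock time instead of fifty sequential ones.

```python
import asyncio

from openai import AsyncOpenAI  # official OpenAI Python SDK (openai >= 1.0)

client = AsyncOpenAI()  # assumes OPENAI_API_KEY is set in the environment


async def ask(question: str) -> str:
    # One round trip to the model. Wall-clock time here is dominated by
    # network latency and provider-side queueing, not by "thinking".
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, swap in whatever you use
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content or ""


async def main() -> None:
    # Hypothetical example: 50 independent sub-questions fired concurrently,
    # instead of one after another in a sequential agent loop.
    questions = [f"In one sentence, summarize design option #{i}." for i in range(50)]
    answers = await asyncio.gather(*(ask(q) for q in questions))
    print(f"Got {len(answers)} answers in roughly one round trip of wall-clock time")


if __name__ == "__main__":
    asyncio.run(main())
```

The same idea applies to any agentic workflow: whenever sub-tasks don't depend on each other, running them in parallel removes most of the per-step waiting the video describes, even with today's API latency.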
And people are already doing full end-to-end automations of their programming backends entirely, just using API inference, which is probably orders of magnitude slower than what companies like this actually have access to. We're soon going to be at a point where you can literally do something that previously would have taken a person a hundred years of knowledge work and intellectual capital to finish, and we're going to have the outcomes or outputs of that in just a few minutes.

So I guess, when you combine all three of these, the point that I'm trying to make is that the future is going to be a very interesting and very weird one indeed. Now, I don't really have any answers to everything that I talked about today, but I did want to point out what one of the world's cutting-edge frontier labs is currently talking about. I don't think enough people really understand the implications that this is going to have on their day-to-day life. I don't think that life is going to be the same in the next 10 years as it has been in the previous 10 years. I think that more change will probably occur in the next 10 years than has occurred in the last 150 years. And I think that, you know, if we continue planning for the future in the same way that we've sort of planned for life up until now, this idea of, I don't know, like a nuclear family with a white picket fence and a dog and so on and so forth, going to work every day and coming home and eating dinner... if we continue to operate under these fundamental assumptions, I think the vast majority of people are going to have quite the shock when, in a year or two, you know, we have AI systems that, as I mentioned, can do 100 years of human labor in just a few seconds.

So yeah, no prescriptions here, no recommendations or anything like that, but I did want to point out sort of where AI research is at. Rest assured, if OpenAI is thinking like this, other providers like Anthropic and Google and stuff like that are having just as lengthy discussions behind closed doors. But kudos to them for making it public. I think our relationship with work is probably going to have to change pretty fundamentally. And I mean, I'm privileged and blessed that I am part of the group of people that are helping minimize that overhang by distributing technology. But I think even that will eventually come to a point where we're going to have to start asking ourselves questions like, "Huh, I wonder if I could just build a workflow that would help disseminate this technology better than I." Strange.

Anyway, hopefully you guys appreciated that video. Have a lovely rest of the day. And if you guys have any questions or comments or discussion points on this, I'd be really interested in hearing them. Just drop them down below and maybe I'll make a video on them, too. Cheers.
