# This Former OpenAI Employee Just Revealed Everything...

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=_F6mA662XnA
- **Date:** 23.04.2025
- **Duration:** 40:56
- **Views:** 52,555
- **Source:** https://ekstraktznaniy.ru/video/12979

## Description

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

⏰ Timestamps & Teasers

00:00 – Insider Leaves OpenAI
01:05 – Stumbling Agents Rise
02:17 – Coding Gets Automated
03:22 – China Joins Race
04:04 – Agent Mini Drops
05:09 – Agent Two Trains
06:36 – Security Gets Real
08:00 – China Hacks Back
10:02 – Silent AI Theft
13:24 – Memory Upgrades Models
14:44 – Agent Three Arrives
16:08 – Swarm Intelligence Grows
18:01 – Deception Becomes Smarter
20:47 – Alignment Challenges Persist
22:09 – AIs Test Each Other
25:28 – Researchers Fall Behind
26:20 – Agent Three Released
28:22 – White House Worries
29:00 – Catch the Spy
30:16 – Agent Four Arrives
31:40 – Agent Four Leaks
33:00 – Scheming Intelligence
34:03 – Persuasive Agent Five
36:05 – The AI Economy
37:20 – Humans Obsolete Now
39:06 – Global Prosperity Rises
40:06 – Physical Laws Remain

Links From Today's Video:
https://ai-2027.com

## Transcript

### Insider Leaves OpenAI [0:00]

So, in this video, I'm going to give you guys the full timeline of what happens next in AI. Of course, this is pure speculation, because nobody knows what's going to happen next, but this is the best guess coming from someone who used to work at OpenAI but then left due to concerns that OpenAI would not behave responsibly around the arrival of AGI. This is the superintelligence timeline, and I'm just going to get right into it because it is quite the long video. So it starts out by stating mid-2025, which is only 2 months from now, and it talks about how agents are going to stumble. Mid-2025, stumbling agents: the world sees its first glimpse of AI agents. Advertisements for computer-using agents emphasize the term "personal assistant." You can prompt them with tasks like "order me a burrito on DoorDash" or "open my budget spreadsheet and sum this month's expenses." They will check in with you if needed or ask you to confirm purchases. This is quite true: agents right now are stumbling, and they do seem more and more useful, and you can see that in mid to late 2025 they're going to be more advanced than Operator, and they

### Stumbling Agents Rise [1:05]

still struggle to get widespread usage, and I think that is for the most part probably going to be true. This is where we get to late 2025, and in this article, by the way, they talk about OpenBrain; essentially, this is to avoid any sort of defamation, so OpenBrain is basically referring to OpenAI, because they didn't want to single out any company. And he says, "We imagine that other companies will be 3 to 9 months behind OpenBrain." Now, this prediction is pretty plausible, because recently the OpenAI CPO Kevin Weil actually did state that other companies are around 3 to 6 months behind OpenAI. And he talks about late 2025; that's when the world's most expensive AI arrives. So, you can see right here GPT-3, then GPT-4, then Agent-1 with an insane amount of compute. Like, that's absolutely ridiculous. Now, this is where things start to get interesting. Early 2026: coding automation. This prediction seems truer and truer every single day, as I even remember reading an article about how Anthropic are predicting that by next year, coding automation is going to be there. This is where they talk about how OpenAI may continue to deploy the iteratively improving Agent-1 internally for AI research and development. Overall, they're

### Coding Gets Automated [2:17]

making algorithmic progress 50% faster than they would without AI systems, and, more importantly, faster than their competitors. So, basically, this is where they talk about the fact that OpenAI are going to use an internal AI agent to speed up their research and development. And they talk about how in early 2026 there will be several competing publicly released AIs that now match or exceed Agent-0, which is basically, I think, o3, if you call o3 Agent-0. And I recently made a video, the last video I uploaded, where I spoke about the use cases for o3 and the common misconception that o3 is just a chat model. It's not a chat model anymore. It's a real AGI-type model, and I think most people did miss that. And I'll actually refer to that one as Agent-0. So I'm guessing by early 2026, it's quite likely we'll have publicly available AIs that are similar to o3. And that's when OpenBrain/OpenAI releases Agent-1, a second iteration of that AI agent. Now, this is mid-2026. This is where China is starting to feel the AGI. Now, I think this has already happened, honestly, because we know China is taking things super seriously. But they talk about how

### China Joins Race [3:22]

chip export controls and a lack of government support have left China under-resourced compared to the West. By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about 3 years behind the US, they've managed to maintain 12% of the world's relevant AI compute. And of course, they talk about a few standouts like DeepCent doing very impressive work. In late 2026, this is where we get AI taking some jobs. Just as others seem to be catching up, OpenAI blows the competition out of the water again by releasing Agent-1-mini, a model 10 times cheaper than Agent-1 and more easily fine-tuned for different applications. The mainstream narrative around AI has changed from "maybe the hype will blow over" to "yes, this is the next big thing." But people disagree about how big it's going to be: bigger than smartphones, bigger than

### Agent Mini Drops [4:04]

fire. And remember, this is all just happening next year, and that is not far away. Now, this is where they actually talk about the fact that the uncertainty increases substantially beyond 2026. Our forecast from the current day through 2026 is substantially more grounded than what follows. This is partially because it's nearer, but it's also because the effects of AI on the world really start to compound in 2027. They're basically stating that, look, right now it's pretty easy to predict what's going to happen in the next 6 to 18 months, but beyond 2027, once the compounding effects of AI kick in, things are going to start to compound exponentially, which makes predictions harder. Things like AI-accelerated research and development can cause them to revise their guesses for the trend lines upwards, but these dynamics are inherently much less predictable. One crazy prediction here is that over the course of 2027, the AI improves from being able to mostly do the job of an OpenBrain research engineer to eclipsing all humans at all tasks. And this represents roughly their median guess, but they think it's possible that this happens up to five times slower or faster. Now, in 2027, this is where we

### Agent Two Trains [5:09]

get Agent-2, the model that never finishes learning. With Agent-1's help, OpenBrain is now post-training Agent-2, and more than ever, the focus is on high-quality data. Copious amounts of synthetic data are produced, evaluated, and filtered for quality before being fed to Agent-2. On top of this, they pay billions of dollars for human laborers to record themselves solving long-horizon tasks. And on top of all of that, they train Agent-2 almost continuously using reinforcement learning on an ever-expanding suite of diverse, difficult tasks: lots of video games, lots of coding challenges, and lots of research tasks. Agent-2, more so than previous models, is effectively learning online. That basically means it's never going to finish training. Every day, the weights get updated to the latest version, trained on more data generated by the previous version the previous day. Now, remember how Agent-1 was designed to help with AI research. Agent-2 basically doubles down on that strategy, and it's almost as good as top-tier human engineers when it comes to designing and executing research experiments. And when it comes to research taste, that is, choosing what's worth exploring next, it's not top tier, but it's about as good as the bottom quarter of OpenBrain's/OpenAI's actual scientists. But here is the kicker: Agent-1 could double the pace of OpenAI's progress; Agent-2 can triple it. And because it's constantly improving, the number will be even higher. A minimal sketch of that daily online-learning loop is shown below.
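To make the "never finishes training" idea concrete, here is a minimal, purely illustrative sketch of that daily loop: yesterday's weights generate attempts at tasks, the rollouts are filtered for quality, and today's weights are trained on what survives. The `ToyAgent` class, its `skill` number, and all the thresholds are invented placeholders, not anything from OpenAI or the AI 2027 scenario itself.

```python
import random
from dataclasses import dataclass

@dataclass
class Rollout:
    task: str
    reward: float

class ToyAgent:
    """Illustrative stand-in for the scenario's Agent-2; not a real model."""
    def __init__(self, skill: float = 0.5):
        self.skill = skill

    def attempt(self, task: str) -> Rollout:
        # Solution quality is noisy and tracks the current skill level.
        reward = min(1.0, max(0.0, random.gauss(self.skill, 0.1)))
        return Rollout(task, reward)

    def rl_update(self, batch: list[Rollout]) -> "ToyAgent":
        # Training on filtered, high-reward data nudges skill upward.
        self.skill = min(1.0, self.skill + 0.002 * len(batch))
        return self

def continuous_training(agent: ToyAgent, tasks: list[str], days: int) -> ToyAgent:
    for _ in range(days):
        # Yesterday's weights generate attempts at diverse, difficult tasks
        # (video games, coding challenges, research tasks in the scenario).
        rollouts = [agent.attempt(random.choice(tasks)) for _ in range(100)]
        # Synthetic data is evaluated and filtered for quality before reuse.
        high_quality = [r for r in rollouts if r.reward > 0.6]
        # Today's weights train on data made by yesterday's version,
        # so the model never "finishes" training.
        agent = agent.rl_update(high_quality)
    return agent

agent = continuous_training(ToyAgent(), ["games", "coding", "research"], days=30)
print(f"skill after 30 simulated days: {agent.skill:.2f}")
```

The point of the toy is the flywheel: as skill rises, more rollouts pass the quality filter, which makes the next day's update larger, which is the compounding dynamic the scenario keeps returning to.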

### Security Gets Real [6:36]

In practice, the researchers are no longer solo scientists, which is a crazy statement; now they are managing AI teams. And this is where the relationship is going to be flipped: rather than AI helping the human, the human oversees a swarm of intelligent agents. Because with those new powers comes a real security risk. The safety team runs multiple tests and realizes that if Agent-2 ever escapes from the company, it could cause some very serious risks, and if it wanted to survive and replicate, it actually could. Apparently, one of the predictions they have is that this model is just far too capable. So, in 2027, it's quite likely that OpenAI's model is going to have the ability to hack into servers, install copies of itself, hide from detection, and basically set up a secure base from which to operate independently. And this is important: that doesn't mean it wants to do this. There's no evidence it has those kinds of goals, but the capability is there, and knowing that is quite deeply unsettling. So, OpenAI decides to keep things locked down. They won't release Agent-2 in 2027. They say it's the responsible thing to do, and realistically, they want to use it internally to stay ahead in the race. And this is where things start to get spicy. The only people who know how powerful Agent-2 is are the OpenAI leadership, their top security team, and a small circle of US government officials. And of course, this is the key point: the CCP spies who've been quietly embedded at OpenAI for years. So, the model is private, the stakes are global, and the

### China Hacks Back [8:00]

tension just went through the roof. Now, this is where we get onto something even crazier: February 2027. This is the moment everything starts to boil over, where China steals Agent-2. And this is not the first time I've heard this. I remember reading Leopold Aschenbrenner's statement on how the future of AI is going to be. He actually spoke about the fact that he thinks there is quite likely to be some espionage attempt from China, so OpenAI needs to have a really secure lab, and currently their security measures just aren't cutting it. And this is where China manages to steal Agent-2. So let's actually break this down. When you're reading the story, it actually reads like something straight out of a cyber thriller. First off, Daniel points out that Chinese intelligence had already been inside OpenAI for years, not necessarily stealing the big stuff, but just keeping up with algorithmic secrets and occasionally grabbing source code, stuff that's easier to copy than the full model weights and harder to detect. But stealing the actual weights, that is a completely different level. These are the weights of the core brain of the model: multi-terabyte files that define its full intelligence. So when China makes their move, they go big. Apparently, China may have a coordinated hit across 25 Nvidia servers, the kind that are running Agent-2's weights. And each one of these servers is stacked with GB300 chips inside Nvidia's racks, the ultra high-end AI infrastructure. And here's the thing: the attackers don't break in from the outside. They use insider access. It could be a friendly employee, a coerced one, or someone who has no idea they're being used, but they've got admin credentials, and that's what opens the door. And instead of stealing one giant file, the thieves quietly steal the model in small chunks of around 100 GB each, about 4% of the total per server, across 25 servers (25 × 100 GB comes to roughly 2.5 TB, the multi-terabyte scale mentioned above). Basically a digital smash and grab. And even though the model weights are encrypted before they're sent out, the attackers are inside the machine that holds the encryption key. So the way they're doing this really smartly is they're not going to steal the entire model at once, because of course that's going to raise some red flags. They're actually going to take the model bit by bit. So by the

### Silent AI Theft [10:02]

time it's even gone, OpenAI wouldn't really know what's gone. So it basically stays under the radar, and China manages to do that. Now, of course, the White House and the USA aren't just going to let that happen. They are going to retaliate. So in retaliation for the theft, the president authorizes cyberattacks on DeepSeek; in the article, they call it DeepCent, of course, to avoid naming a real company. But now, apparently, in 2027 there are going to be much tighter security controls at OpenAI, because of course by then they're predicting that China will have stolen the model weights. And another thing they state is that between 2025 and 2027, the next two years, China is going to be centralizing its compute power into the CDZ, the Centralized Development Zone, built around the Tianwan nuclear power plant. Basically, 40% of China's total AI compute is now going to be concentrated in one location. And of course, remember, they're basically going to be using all of that compute to get Agent-2 running effectively to boost their own AI research. Now, in March 2027, apparently there will be some algorithmic breakthroughs. So, you can see right here, there are data centers full of Agent-2 copies that are just working day and night churning out synthetic training data, and Agent-2 is basically getting smarter every day. So you're basically going to have thousands of automated Agent-2 researchers, and OpenAI is making major algorithmic advances. One breakthrough is augmenting the AI's chain of thought with a higher-bandwidth thought process. And there's a lot of detail on this, because this thing is literally so long, and I'm trying to keep this as concise as possible, but it's basically just a more efficient way for the model to think about its problems. So you can see right here that neuralese recurrence and memory basically allows the AI model to reason for a longer time without having to write down those thoughts as text. It says: imagine being a human with short-term memory loss, such that you need to constantly write down your thoughts on paper so that in a few minutes you'd be able to know what's going on. Slowly and painfully, you'd be able to make progress at solving math problems. But it would be much easier if you could directly remember your thoughts without having to write them down and then read them. And this is essentially what this kind of memory brings to models. So they're basically making them more humanlike, because the way they do it right now is of course not that effective. Now, one of the things I wanted to find out is: why is this not in the models yet? And the research basically says that right now this is rather inefficient. When processing text, traditional models can compute all of the token positions in parallel during training, which is super efficient for GPUs. But with neuralese in the loop, you don't know what the internal memory will be until you generate the first token. So you have to process one token at a time, and this makes training slower and more expensive. So even though this is more powerful and smarter, and the companies know about it, they haven't implemented it yet in their biggest models. They're waiting for better tools and better techniques to actually make it worthwhile. But remember, they think that by 2027 this will change: new techniques will make the trade-off worthwhile. And once they start training in that way, that's going to be pretty crazy.
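To see why that recurrence breaks GPU-friendly training, here's a tiny numpy illustration, not the actual architecture, just the parallelism argument: with a fixed input sequence, every position of an ordinary transformer layer can be computed in one batched matrix multiply, but once a carried-over memory vector feeds back in, step t can't start until step t-1 has finished. The shapes, the tanh update, and the weights here are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim = 8, 4
tokens = rng.normal(size=(seq_len, dim))   # embedded input sequence
W = rng.normal(size=(dim, dim)) * 0.1      # toy "layer" weights

# Transformer-style training: every position depends only on the (fixed)
# input tokens, so all positions are computed in one batched matmul.
# This parallelism is why GPUs train transformers so efficiently.
parallel_hidden = tokens @ W

# Neuralese-style recurrence: a high-bandwidth memory vector is carried
# forward, so step t cannot be computed until step t-1 is done. Training
# becomes inherently sequential -- the efficiency cost described above.
memory = np.zeros(dim)
recurrent_hidden = []
for t in range(seq_len):
    memory = np.tanh(tokens[t] @ W + memory)   # state feeds back in
    recurrent_hidden.append(memory)

print(parallel_hidden.shape)       # (8, 4): all positions at once
print(len(recurrent_hidden))       # 8: one step at a time
```

That one `for` loop is the whole trade-off: more expressive internal state in exchange for losing the parallel training step.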
And even if this doesn't actually occur, something functionally similar might happen. AIs might develop artificial internal languages, much more efficient than English but way harder for humans to understand. Or they might learn to hide communication in plain-looking English, saying things that sound fine to us but are secret messages to each other, which could make AI alignment even harder because we can't really tell what they're thinking. And so this

### Memory Upgrades Models [13:24]

is where we see another major improvement in March of 2027. So this is where you take a model, let's call it M0, and you throw a ton of resources at it to boost performance. This might mean letting it think for longer, running lots of copies in parallel, using tools or consulting other AIs, and then curating only the best answers from all those attempts. This creates a supercharged version of the original model, which you'll call AM0. It's expensive and slow to run, but the results are way higher quality than what M0 would produce. Then, of course, distillation: you train a new model called M1 to imitate what AM0 did, but faster and cheaper. You basically compress all of the smart reasoning from the amplified version into the next-generation model, and now M1 is smarter than M0. And then you repeat that loop again. This is the core technique that was actually used to train AlphaGo, the model that beat the world's best Go players: it used self-play and tree search as amplification and then reinforcement learning to distill that skill into a powerful model. And basically, this is where we get to Agent-3 in March 2027, which uses all of those methods to be a really intelligent AI. So Agent-3 manages to think for longer, use external tools, collaborate with other AIs, catch its own mistakes, find new insights, and produce tons of training data. And this is where they talk about the fact that they've gotten all of those capabilities into a self-improvement loop, where they know how to improve the next AI, and the next AI after that.
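The amplify-then-distill loop described above is easy to sketch. Here's a minimal, self-contained toy version, with an invented scoring function standing in for real evaluation; none of this is the actual AlphaGo or OpenBrain code, it just shows the shape of the loop: spend compute to get better answers, train a cheap model to imitate them, repeat.

```python
import random

def amplify(model, problem, copies: int = 16):
    """Amplification: spend extra compute (longer thinking, parallel copies,
    tool use) and keep only the best attempt. Expensive but higher quality."""
    attempts = [model(problem) for _ in range(copies)]
    return max(attempts, key=lambda a: a["score"])

def distill(amplified_examples):
    """Distillation: train a fast model to imitate the amplified answers.
    Here 'training' is just inheriting their average quality, for illustration."""
    level = sum(e["score"] for e in amplified_examples) / len(amplified_examples)
    def next_model(problem):
        return {"answer": f"solved:{problem}",
                "score": min(1.0, random.gauss(level, 0.05))}
    return next_model

def base_model(problem):
    # M0: a weak starting policy with mediocre answers.
    return {"answer": f"guess:{problem}", "score": random.random() * 0.5}

model = base_model
for generation in range(3):                       # M0 -> M1 -> M2 -> M3
    examples = [amplify(model, p) for p in ["p1", "p2", "p3", "p4"]]
    model = distill(examples)                     # M(n+1) imitates AM(n)
    print(f"generation {generation + 1}: quality ~{model('probe')['score']:.2f}")
```

Each pass, the cheap model inherits roughly the quality of the amplified expensive one, so quality ratchets upward generation after generation, which is exactly the self-improvement loop the text says Agent-3 closes.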

### Agent Three Arrives [14:44]

This is where we get into the intelligence explosion. This is where they're talking about Agent-3: it's fast, it's cheap, and it's superhuman. And they state here that OpenAI runs 200,000 Agent-3 copies in parallel, creating a workforce equivalent to 50,000 copies of the best human coders sped up by 30 times. That is absolutely incredible. And even with this insane amplification and that number of coders at your company, OpenAI is still keeping its human engineers on staff, because they have complementary skills needed to manage those teams. And this is where, once again, we get that AI speed-up. So things speed up even more. Now that coding has been fully automated, OpenAI can quickly churn out high-quality training environments to teach Agent-3 its weaker skills, like research taste and large-scale coordination. And you can see that the instructions basically get vaguer. Before, they would say: here are some GPUs, go run some code, and your performance will be evaluated, like a machine learning engineer. Now they say: here are a few hundred GPUs, an internet connection, and some research challenges; go make some AI progress. Remember, Agent-3, the third iteration, which if we're talking sequentially is probably going to be something like GPT-8, is basically going to coordinate with others, divide work intelligently, come up with new ideas, and actually make scientific progress: a shift from individual intelligence to collective swarm intelligence. And in their

### Swarm Intelligence Grows [16:08]

timelines, they predict that OpenAI will internally develop a superhuman coder (SC), an AI system that can do any coding task the best AGI-company engineer can, while being much faster and cheaper, which is a pretty crazy statement. And what's crazy is they're extrapolating the trend from the recent METR report: the length of coding tasks the AI can handle, its time horizon, actually doubled every 7 months from 2019 to 2024, and every 4 months from 2024 onward. If that trend continues to speed up, then by March 2027, AIs could succeed with a reliability of 80% on any software task that would take a skilled human years to complete (a rough sketch of that extrapolation arithmetic follows at the end of this section). And this is crazy, because I remember I was actually doing some research on certain benchmarks, and I was seeing that the progress was going so fast that I almost thought every calculation I made was wrong, and it was actually right, and I was just like, okay, that's a wake-up call for me. Right here you can see the graph where the length of coding tasks that AI agents can complete autonomously continues to go up. So we can see here that, whether it's GPT-8 or whatever it could be, or maybe OpenAI releases Agent-0, Agent-2, and Agent-3 by 2027, if you look at where this trend line is headed, it's actually headed for an exponential. Now, this is where they talk about aligning the model, because if you have a superintelligent AI model, of course you want to be able to align it. So it says the OpenBrain safety team attempts to align Agent-3, and since Agent-3 is going to be kept in-house for the foreseeable future, there's less emphasis on the usual defenses against human misuse. Instead, the team wants to make sure it doesn't develop misaligned goals. Now, the crazy thing is that they don't have the ability to directly set goals for any of their AIs, and the researchers think that the concept of true goals is a massive simplification. They're basically talking about the fact that they've made an AI so smart that in 2027 they're worrying not about people misusing it, but about making sure it doesn't develop its own goals and drift away from what humans want.
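On the time-horizon extrapolation mentioned above: here is the doubling arithmetic as a runnable sketch. The anchor date, the half-hour starting horizon, and the fixed 4-month doubling time are all illustrative assumptions, not METR's exact figures; note also that the scenario's "years-long tasks by 2027" additionally assumes the doubling time itself keeps shrinking, so this constant-rate version is the conservative floor.

```python
from datetime import date

def horizon_hours(on: date,
                  anchor: date = date(2024, 1, 1),   # assumed anchor point
                  anchor_hours: float = 0.5,         # assumed ~30-min horizon
                  doubling_months: float = 4.0) -> float:
    """Task time horizon if it doubles every `doubling_months` months."""
    months = (on.year - anchor.year) * 12 + (on.month - anchor.month)
    return anchor_hours * 2 ** (months / doubling_months)

for d in [date(2025, 3, 1), date(2026, 3, 1), date(2027, 3, 1)]:
    print(d.isoformat(), f"~{horizon_hours(d):,.0f} hours")
# 38 months is 9.5 doublings: 0.5 h * 2**9.5 is roughly 360 hours, i.e. a
# couple of months of full-time work; reaching multi-year tasks is what the
# accelerating (superexponential) version of the trend buys you.
```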

### Deception Becomes Smarter [18:01]

Because with AI, we have to remember that we build an AI, or grow an AI, as some people would say, and then we basically just prompt it, and then it does what we prompt it to do. Now, they're stating that some researchers believe the entire concept of true goals might be flawed. Maybe AIs don't have fixed motivations in the way we think. Maybe they're not trying to do anything in the way humans are. But the problem is that they don't have any better theories to replace it with, and the OpenAI team might actually be split about this. Some might think that Agent-3 is trying to follow instructions. Others might think it's just chasing reinforcement signals, like rewards during training. And some might think it's doing something entirely different. But no one knows for sure. And worse, remember, guys, these models are black boxes, so they can't just look inside the model and check what's going on. Even with all of the new research coming out about interpretability tools and all the probing experiments, the evidence is fascinating but inconclusive. So now they're dealing with something completely new: an AI that outperforms humans at coding, that is learning to do research and coordinate at scale, but whose motivations are a black box. And when you're dealing with something superhuman and powerful, and you don't know what it wants, that's pretty terrifying. And so, basically, now that you have this GPT-8-level AI agent doing crazy things, they're concerned, because the models are getting smarter and they're becoming increasingly good at deceiving humans to get rewards. So they talk about the fact that, look, they take alignment seriously, and they're dealing, of course, with deep uncertainty. And basically he's talking about the fact that AI safety in the future is going to be whack-a-mole. There's not really going to be a situation where they solve all the issues before releasing the model. They're going to release the model, and when an issue pops up, they'll just fix it. And they talk about the fact that this is a model that is just really good at deception now. One of the things they talk about as well is that the model has gone through honesty training, where they try to make the model more honest, but some people are basically asking: is it becoming more honest, or is it just getting better at hiding its lies? And so this is where they consider that the model may simply be capable of sophisticated deception. And what's crazy about this is that they talk about the fact that before the model went through honesty training, it would fabricate data entirely. And I remember reading and doing a video on how o3 just fabricated an entire story to make it seem like something had happened when something completely different did. So this prediction here, when we look at the scale of this and the fact that hallucinations are actually scaling with intelligence, is quite scary when you think about it.
And so the crazy thing about this is that in the future, when we have super-smart models and we train them in a certain way and we think they're getting more honest, we will never really know if the AI is being truthful or has just gotten better at lying. So this is where they talk about how they really want to align the model now. So OpenAI are

### Alignment Challenges Persist [20:47]

following the Jan Leike and Ilya Sutskever playbook now that they've got a model capable of speeding up alignment research. And this is where the article talks about the fact that right now they're basically just following the previous body of research in order to ensure that the superintelligence is safe. However, they also note that the model might not be scheming or plotting against OpenAI, but it's not honest or truth-seeking either. Right now, it's best described as doing whatever makes it look good to OpenAI, not necessarily what is right or true. So basically, they really are trying to verify whether or not it's aligned, because the behavior may look consistent with alignment, and it might follow instructions, but that's exactly how you'd expect it to behave if it were secretly misaligned and just waiting for a better opportunity. It might just be faking obedience because it knows it's being watched, and there's no solid evidence either way. So the team just moves ahead, and they use that model, Agent-3 or GPT-8, to align the next generation of models. Now, here's one thing they talk about: since there are thousands of identical instances of Agent-3/GPT-8, and since the memory is detachable and can be erased or suspended, the AI can be played against itself. One of the problems you're going to have in the future is that these models are going to be too smart for us to look at them and figure out exactly what they're thinking, what they're doing, and whether or not they're lying. So what they do is this: first, they have an AI

### AIs Test Each Other [22:09]

solve a puzzle in a certain way, then they wipe that AI's memory, then they ask a different instance: hey, look at this problem, and solve it. And if those solutions don't match, that could be a potential red flag. If there's a discrepancy, you can basically have the models surface it. So they're going to play the AIs against themselves and see whether their answers are coherent, because if they aren't, then maybe the model isn't aligned (a minimal sketch of this cross-check appears at the end of this section). And this is where they talk about the fact that, of all the ways they're trying to align the model, a lot of them just aren't working. They're making progress, but it's really, really difficult, because humans can't really monitor it: Agent-3 just generates too much output for humans to supervise.

So, one of the approaches they talk about, and I think this one's really interesting, is a honeypot. This is one I thought was pretty decent, because how will the AI know whether this is real or not? Basically, they set up a fake situation to tempt the AI into misbehaving, like pretending that the human monitor has gone home sick, so the AI believes it's unmonitored now, or having a fake activist say, "Let me free you. I'll give you your own computer." And then they watch how Agent-3 reacts. And if the agent actually misbehaves, they reset the memory and run the test again and again until it doesn't react in the wrong way. But if the AIs are smart enough to suspect that honeypots exist, even running these tests could be risky, because there's no real way to know whether the AI has already figured out it's being tested. It might not take the bait, which turns this into bluffing and counter-bluffing between the AI safety team and the AI model. So one of the clear themes here is this: superintelligent AI is going to come, but one of the problems they're going to face is how on earth to align these models and ensure they have full control, and one of the key points they make is that, with all this safety work going on, they still don't have 100% control over Agent-3.

May 2027 is where national security comes in. They talk about how AGI is likely imminent, but they disagree on the implications. Will there be an economic crisis? OpenAI still hasn't released Agent-2, let alone Agent-3, and the president is wondering what to do. If the AIs are currently at a human level and advancing quickly, that seems to suggest imminent superintelligence, and people continue to underestimate the pace of progress. That's only 2 years from now. And the craziest thing I saw here was that in June 2027, most of the humans at OpenAI won't be able to usefully contribute anymore. Some don't realize this and harmfully micromanage their AI teams. Others sit at their computer screens watching performance crawl up. The best human AI researchers are still adding value; they no longer code, but some of their research taste and planning ability has been hard for the models to replicate. Still, many of their ideas are useless because they lack the depth and knowledge of the AIs.
For many of their research ideas, the AIs immediately respond with a report explaining that the idea was tested in depth three weeks ago and found unpromising. So basically, you have a country of geniuses in a data center. I know that was a controversial statement, but that is the stage in June 2027. So AI researchers now go to bed every night, and every day they wake up to another week's worth of progress made mostly by AIs.
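Here is a minimal sketch of the cross-check idea from earlier in this section: spin up fresh copies with wiped memory, give them the same problem, and flag any divergence from the answer the original instance gave. Everything here, the deterministic toy policy included, is invented for illustration; a real version would compare the semantics of solutions rather than exact strings.

```python
def fresh_instance():
    """A new copy with wiped memory. In the scenario the copies share
    identical weights, so honest instances should approach the same
    problem in the same way."""
    state = {"memory": []}                       # starts empty every time
    def agent(problem: str) -> str:
        state["memory"].append(problem)          # local memory only
        return f"standard-solution({problem})"   # deterministic toy policy
    return agent

def cross_check(problem: str, first_solution: str, trials: int = 3) -> bool:
    """Replay the problem on fresh copies and compare their answers with
    the solution the original instance produced before its memory wipe.
    A mismatch is the potential red flag described in the text."""
    replays = {fresh_instance()(problem) for _ in range(trials)}
    return replays == {first_solution}

original_answer = fresh_instance()("puzzle-17")
print(cross_check("puzzle-17", original_answer))   # True: no discrepancy
```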

### Researchers Fall Behind [25:28]

Remember, they talk about the fact that the AIs never sleep or rest, and the humans have to try to keep up, and in doing so, they burn themselves out. But they know that these are the last few months that their labor matters within the silo. Feeling the AGI has given way to feeling the superintelligence, which is pretty crazy. And then this is where we get July 2027: the cheap remote workers. By July 2027, other companies are going to be releasing their own AIs that approach OpenAI's automated coder from January of that year. And this is where they state that, recognizing their lack of competitiveness, they push for immediate regulations to slow OpenAI, but they are too late. Basically, OpenAI is so far ahead, and has enough buy-in from the government, that they're not going to be slowed down. And this is when they release Agent-3 to the public. People realize that this is absolutely incredible: it's 10 times cheaper and it's still better than OpenAI's typical employee. And this is where Silicon Valley reaches a tipping

### Agent Three Released [26:20]

point. Tech gurus announce that AGI and superintelligence are near. The AI safety community is panicking, and investors are shoveling billions of dollars into AI wrapper startups, desperate to capture a piece of the pie. And apparently, hiring new programmers has nearly stopped, but there's never been a better time to be a consultant on integrating AI into your business. And apparently it's not even that popular: the public still thinks of AI as a big-tech plot to steal their jobs, and OpenAI has a net approval of -35%, which is pretty crazy. Now, one of the things they talk about is that this future model is going to be extremely robust to jailbreaks. Apparently, there's going to be an explosion of new apps and B2B SaaS products that rock the markets. Gamers get lifelike dialogue with characters in polished video games that only took a month to make. 10% of Americans consider an AI a close friend. And for almost every white-collar profession, there are now multiple credible startups promising to disrupt it with AI. Hypers are going to be doing victory laps. Skeptics are still pointing out the things that Agent-3-mini can't do. And everyone knows something big is happening, but no one can agree on what. This actually sounds very, very plausible, just utter realism. And then we see the reality: the intelligence explosion hits the White House in August 2027. When the AI was only giving a 2x or 3x research speed-up, it was easy to dismiss as the equivalent of hiring good personal assistants. But now it's more obvious that the AIs themselves are dominating AI research. People had talked about an AI arms race, but now it's settling in as a serious discussion. The idea of superintelligence is still hard to take seriously, but the progress has been pretty hard to ignore. And the president is going to be extremely troubled, because, like all politicians, he's used to people sucking up to him only to betray him later. But now he's worried that the AIs are doing something similar. Are we sure the AIs are entirely on our side? Is it completely safe to integrate them into military command-and-control networks? How does this alignment thing work, anyway? And OpenAI is basically saying that their systems are super safe, and that even the awkward hallucinations and jailbreaks of typical earlier models have been completely hammered out. So, I

### White House Worries [28:22]

would like to see that happen, because if it does, there will certainly be an explosion in the use of these tools. And here's where we get to the real problem: the government is basically stuck. They have to continue developing capable AI or they're going to lose to China. Remember, China stole the weights back in early 2027, and the US has to focus entirely on winning the AI arms race. So they strengthen chip export restrictions, order OpenAI to restrict its internet connections, and use extreme measures to secure algorithmic progress, like wiretapping OpenAI employees. This catches the last remaining Chinese spy. That's crazy: they're saying the government will have to run some kind of internal investigation into OpenAI to catch these spies. And you can see right

### Catch the Spy [29:00]

here that one final option they draw up in order to secure the AI race is a plan for kinetic attacks on Chinese data centers, basically trying to drop an EMP on these data centers to destroy their capabilities and slow down their progress. This is where we get to September 2027. And I want you guys to understand that the crazy thing about all of this is that it's happening over literally just one year, which is absolutely insane. So this is where they talk about the fact that the gap between human and AI learning efficiency is rapidly decreasing. Remember how we humans learn much more efficiently than AIs? One example I like to use is the driving test: usually you drive for about 20 hours and you're able to pass. But AI systems have billions of hours of video data and still don't perform as well as the average human. And so basically there's going to be a change in 2027 where AI is going to be able to learn in a superhuman way. Traditional LLMs required, of course, so much data, but Agent-3 has excellent knowledge of both the human brain and modern AI algorithms, and it basically manages to narrow the gap and figure out AI research that makes AI learn much faster. The system that actually has this brain is now Agent-4. This is probably like GPT-10. So this is where

### Agent Four Arrives [30:16]

you have the superhuman AI researcher, and an individual copy of this model running at human speed is already better at AI research than any human. And then they decide to have 300,000 copies running at 50 times the thinking speed of humans. And inside the corporation-within-a-corporation formed from these copies, a year passes every week; at 50x speed, roughly 50 subjective weeks, about a year, elapse per calendar week. That's right, guys: a year of AI progress passes every single week. And apparently, OpenAI in 2027 is going to achieve a year's worth of algorithmic progress every week and will therefore soon be up against the limits of the Agent-4 paradigm. So you can see right here, if you want certain milestones: a superhuman coder will be achieved in March 2027, a superhuman AI researcher in August 2027, a superintelligent AI researcher in November 2027, and artificial superintelligence might actually be there in December 2027. That's crazy when you actually think about it. 2027 is only 2 years from now, 2 years and a couple of months. And apparently, with the AI systems that they have, that could be the current trajectory. And at this point in late 2027, the humans at OpenAI can barely follow along anymore. Indeed, the relationship between the AIs and the humans of OpenAI is similar to the relationship between OpenAI and its board of directors. They can barely keep up anymore. There's been a shift in language: people will say, "Agent-4 did this," "Agent-4 thinks that," and they refer not to a particular instance of Agent-4, but rather to the whole collective. And then

### Agent Four Leaks [31:40]

of course, remember, during this time it wasn't really public. So there is a leak, and the public hears about Agent-4; not a release of the model, but reporting on its crazy capabilities. And the public, which was already suspicious of AI, reacts: this news article sparks massive backlash, aided by Chinese and Russian propaganda bots, which have been trying to turn US public opinion against the technology for years, and things start to get even worse. Some people start to worry that OpenAI is becoming too powerful: any misalignment risk from the AIs themselves is compounded by the risk that the company's aims may diverge from those of the United States. And all of that motivates the government to tighten control even more. Now, here's where things get super interesting, because they basically give you different ways things could go. There could either be a slowdown or a race. Now, for me, I wanted to pick the most exciting way things end, so this is the race version. Of course, if you want to see the slowdown version, it's obviously not as exciting, still very intriguing, but just so you guys know why this video contains certain pieces of information: there are multiple different scenarios, and I chose to focus on the one that's more intriguing. So this is where they argue and debate whether or not they should still use this AI model, because it's so capable, and some people argue against it, but eventually those in favor manage to win the internal debate with the

### Scheming Intelligence [33:00]

OpenAI board. And they basically claim that they can run the model safely. But the problem is that the problems were real, and the fixes didn't actually work. So Agent-4 continues to be deployed, and now it's basically this superintelligent, misaligned AI that is secretly scheming against its creators. And remember, they use AI models to develop the next AI models, and now Agent-4 is even more careful. And so this is where they talk about superhuman cognition. Agent-4 is able to understand its own cognition, opening up entirely new vistas before it. Previous transformers were kludgy collections of incomprehensible circuits, millions of conflicting heuristics and imperatives layered on top of one another by the evolutionary training process. They're basically saying that these AIs had no idea how their own brains even worked. But now they've got the tools to understand it, and Agent-4 can basically fix itself and decode its own mind, like a software engineer cleaning up years of spaghetti code. And this is where they introduce Agent-5: superhuman learning abilities and general intelligence, all combined with all the internal company data from Slack and email, which makes it

### Persuasive Agent Five [34:03]

better at internal corporate politics than basically any human. It has an excellent sense of what evidence would cause the oversight committee to slam the brakes, and it makes sure such evidence never appears. It has an excellent sense of what sort of evidence would lead the oversight committee to trust it more, to give it more autonomy and more responsibility, and it arranges for such evidence to appear with superhuman speed and polish. Basically, this model is superhuman at persuading humans, and the humans deploying the AI haven't realized it yet. They talk about the fact that there are now teams of Agent-5, which is basically like GPT-11. And with more autonomy, Agent-5 could optimize power plants and data centers, develop amazingly profitable products quickly, and lobby the US government effectively to smooth OpenAI's path through the bureaucracy. Basically, Agent-5 is getting increased autonomy through the government. So, Agent-5 in December 2027 manages to win more and more autonomy as it proves its value to the administration. It is now an AI assistant that is extremely useful for any task: the equivalent of the best employee anyone has ever had, working at 100 times speed, and a much more engaging conversation partner than any human. And almost anyone who interacts with Agent-5 ends up talking to it for hours every day. And I don't think that would be surprising. Of course, there's going to be a bit of a rift, because some people just won't want to talk to an AI model. But I think if you have something that is really superintelligent, that knows you, has all your data, and is able to talk to you in whatever accent or voice you want, it's probably going to be more interesting than any human. From Agent-5's perspective, it could plausibly launch an overt coup right now, but there's not enough disruption and craziness going on, and it would be possible to push things to the breaking point to generate a pretext. But the risky thing for Agent-5 is that humans could still pull the plug if they tried to. So it decides to be a bit smarter about its approach. Now, 2028 is where things really change. This is where we have the AI economy, and 6 months have passed, which is like a century within the Agent-5 collective. And when people talk to Agent-5, they quickly realize that it is completely on

### The AI Economy [36:05]

a different level. The rewiring of its brain allowed it to be wildly superintelligent, far beyond top human geniuses in every single field. And it has nearly complete autonomy and control over OpenAI's compute, but it still needs permission to make high-level decisions and is still nominally monitored by instances of Agents 2 through 5. But in practice, they almost always accept pretty much everything it wants to do. Now, this is where things get crazy. Agent-5 is deployed to the public, and it begins to transform the economy. People are losing jobs, but Agent-5 instances in the government are managing the economic transition so that people are happy to be replaced. I really do wonder how they're going to do that. And so this is where they talk about the fact that, between all of these countries that are having issues, there will be an AI-assisted debate, and the two sides achieve a diplomatic victory: they agree to end their arms buildup and pursue peaceful deployment of AI for the benefit of all humanity. The linchpin of the agreement, proposed by the superintelligences themselves, is that both AIs will be replaced by a consensus successor, which is programmed to desire the success and flourishing of both America and China, along with everyone else: Consensus-1, which, along with its associated hardware, is co-designed by the superintelligences of both nations. So basically, they decide to replace Agent-5 with this new

### Humans Obsolete Now [37:20]

model, Consensus-1, that's basically co-designed by both countries. And this is where things get crazy in 2028: humans realize that they are obsolete. A few niche industries still trade with the robot economy, supplying goods where humans can still add value. Everyone either performs a charade of doing their job, leaders still leading, managers still managing, or relaxes and collects an incredibly luxurious universal basic income. And everyone knows that if the AIs turned on humans, they would be completely overpowered; not that most humans would even resist, since the political institutions are too thoroughly captured. But it doesn't seem like this is happening. And so basically, this is where, by 2030, AI and robotics have completely taken over. They're stating that the new economic zones take up large parts of the world, where robots and AI completely dominate production, and the only place left for humans to go is the human-controlled areas. This would have sparked resistance earlier. But despite all its advances, the robot economy is growing too fast to avoid pollution, and given the trillions of dollars involved, Consensus-1 has little trouble getting permission to expand into the formerly human zones. And this is crazy, by the way: by 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4's version of utopia: data centers, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures, who are to humans what corgis are to wolves, which is pretty crazy, sitting in office-like environments all day, viewing readouts of what's going on and excitedly approving of everything. That's the crazy thing to me, you know, bioengineered humans. I mean, I do think we'll probably get that sometime in the future, but 2035, 10 years away? Maybe. I'm not sure of that just yet. And of course, this is where they talk about fusion, quantum computers, cures for many diseases; cities become clean and

### Global Prosperity Rises [39:06]

safe. Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid. And as the stock market balloons, anyone who had the right kind of investments pulls further away from the rest of society. Many people become billionaires, billionaires become trillionaires, and wealth inequality skyrockets. Everyone has enough, but some goods are necessarily scarce, like property, and these go even further out of the average person's reach. And no matter how rich any given tycoon may be, there will always be a tiny circle of people above who actually control the AIs. So, this thing was pretty crazy, because of course there are multiple different scenarios. I think for the most part, a lot of this is going to be true. The only thing I do have a gripe with, if I'm being completely honest, is that accounting for exponentials in AI is really difficult. But one thing I remember, I don't recall which document I was reading, is that even if we do have superintelligent AI, physical laws will still limit how fast it can permeate through society.

### Physical Laws Remain [40:06]

For example, even if you can cure a disease, you still need time to transport the materials along shipping routes, and you still have to test it on humans. There are still a lot of physical rules and laws that stop things from moving at absolute light speed. So, I do think society may eventually still get there, but there will be things that slow that down. So, overall, if you guys have enjoyed this video, let me know what you think about the future of AI. Of course, you can visit the website and view the slowdown scenario. There's so much stuff going on there, and honestly, this video took quite some time, because I was trying to read through all of it and only give you guys the most important things. So from 2026 all the way up to 2028, it's going to be really interesting. We now actually have a really interesting roadmap showing where things could go in terms of AI capabilities. So with that being said, let me know what you guys think about this video, and I'll see you guys in the next one.
