# OpenAI Employee LEAVES With Stunning Statement! "None Of You Are Ready For AGI!"

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=dnUkkVMngsg
- **Date:** 24.10.2024
- **Duration:** 30:26
- **Views:** 64,552
- **Source:** https://ekstraktznaniy.ru/video/13935

## Description

Prepare for AGI with me - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Today's Video:
https://milesbrundage.substack.com/p/why-im-leaving-openai-and-what-im

0:00 Brundage Departure
1:17 Leaving Reasons
2:33 AGI Unreadiness
3:44 Safety Concerns
5:19 AI Governance
6:11 Readiness Components
7:47 Technical Safety
9:09 Alignment Challenges
10:41 Regulatory Infrastructure
12:21 Societal Resilience
14:00 Risk Analysis
15:37 Post-AGI Economics
17:20 AI Levels
19:07 Policy Response
21:12 Safety Funding
23:14 Risk Timeline
24:54 Work Impact
26:22 Societal Stagnation
28:03 Capability Gaps

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest

## Transcript

### Brundage Departure [0:00]

So I honestly can't believe I'm saying this, but another individual has left OpenAI, and they've released some statements that I think you all should be paying attention to. Miles Brundage used to work at OpenAI as a researcher and manager, where he worked on AGI Readiness. This is of course very important: there are many roles within OpenAI that don't get a lot of publicity but are essential to the future of AI development. Recently he released a blog post that essentially talks about the future of AGI and what he's doing next. It isn't a dramatic kind of post, but there are a few statements that give you insight into where the world is on the AGI time frame, and I think some of the things stated in this blog post are quite concerning, despite OpenAI's efforts to mitigate certain issues. So unlike prior videos, this isn't really an OpenAI drama video; this is a video that delves into what exactly is going on when it comes to AGI Readiness. And when I talk about AGI Readiness, this is just

### Leaving Reasons [1:17]

ensuring that the world is going to be ready for the monumental impacts that AGI will have. Many have speculated that this kind of technology will impact everything from healthcare and medicine to finance and business; pretty much anything you can think of, it's going to impact in some way. So quickly, I do want to show you that the reason he is leaving is for issues outside the industry rather than inside. He said that the opportunity costs have become pretty high: he's not able to work on exactly what he wants to work on, and of course he wants to be less biased in what he's working on. Essentially, when you make a statement while working at OpenAI, the advice or information you put out can be viewed in a certain light, considering you're working at OpenAI, and that company has been under major scrutiny for releasing models early and doing a variety of things that others would consider unsafe practices in AI. So I'm guessing that if you want to make certain statements on AI, you aren't really able to do that effectively if you're currently part of a company that contradicts your own position. So you can see here he says,

### AGI Unreadiness [2:33]

"Certainly working at OpenAI affects how people perceive my statements, as well as those from others in the industry." And he says, "I think it's critical to have more industry-independent voices in the policy conversation than there are today, and I plan to be one of them." Now, one of the main takeaways from this blog post is a statement that, I wouldn't say shook me to my core, but opened my eyes to where we are in the AI race and development cycle. There's this question that says, "So how are OpenAI and the world doing on AGI Readiness?" And he says here that, in short, neither OpenAI nor any frontier lab is ready, which is an absolutely incredible statement. Neither OpenAI nor any frontier lab is ready, and the world is also not ready. This is a rather stunning statement, because many people, and I've said this before, would traditionally think that we're just going to waltz into this super-futuristic world that is pretty much a utopia, as many people would

### Safety Concerns [3:44]

have described the singularity, which they predict is going to happen sometime around 2040 to 2049, where AI just takes off and technology advances at an unprecedented rate. However, I've truly come to understand that AI is in a state where acceleration has continued to increase year on year: more people get into the space, companies are buying up more chips, and competition between leading labs becomes fiercer. It has put us in a position where acceleration seems to be the only thing at the forefront, and things like safety have always taken a back seat. Due to the capitalistic nature of our society, companies are placed in a really rough position where they need to prioritize the most advanced technology over safety, and as a result, neither OpenAI nor any frontier lab is ready for the future of AI, and more importantly, I don't think the world is ready for the ramifications of AGI at all. Now, of course, he does state that this is not a controversial statement among OpenAI's leadership, so him stating this isn't a giant revelation, at least for the people working at OpenAI. That's because they know exactly what kind of technology they have and exactly how the world is going to respond to certain technological advances. And he also says that that's a

### AI Governance [5:19]

different question from whether the company and the world are on track to be ready at the relevant time, though, "I think that the gaps remaining are substantial enough that I'll be working on AI policy for the rest of my career." Now, one of the things I do want to get into, because this is one of the main themes of the video, is what it means to say that the world is ready for AI; it's a multifaceted problem with many upsides but also many downsides. He made a blog post in early 2023 called "Scoring Humanity's Progress on AI Governance," and he says there, and this was last year, that succeeding at AI governance is not merely avoiding catastrophe but building a much better and fairer world, and it's pretty doable, but it just takes a lot of work. And some

### Readiness Components [6:11]

of this is quite scary, because he says, "If we succeed, which is of course not certain, it seems like it will require some or all of the following ingredients, and maybe some others." We need a shared understanding of the challenge, technical tooling, regulatory infrastructure, legitimacy, societal resilience, and differential technological development. And you can see here he says that in considering what scores to assign, he roughly imagines that we achieve AGI/superintelligence sometime this decade, but he's averaged together a range of possibilities within that window and baked in the current trajectory of progress in each area. For example, an A grade doesn't mean that we're ready today, but he expects that if things continue roughly as they are, they should reach the A level. So this is where he actually breaks down each part, so you can understand what being ready for AGI actually means. The first thing is of course this shared understanding, and the goal is to create a shared understanding of the upsides and downsides to align perceived interests and incentives. He says this is necessary because shared beliefs in high collective upsides and high collective downsides from AI make cooperation more likely. This is basically where you need everyone to understand how good AI could actually be, but also, on the flip side, everyone needs to understand just how bad AI could actually be too. I mean, it's one thing to say that AI could cure cancer, could cure all disease, could cure aging, could solve many of the world's political instability problems and many of the wars and feuds that we currently have,

### Technical Safety [7:47]

but on the other side, it could also cause extinction; it could cause the division of people; it could cause economic impacts that displace some people. There are a million different things; we've got scams; we have to truly understand the full capabilities of this entire category that is AI. Now, he gives this a grade B, because, "I think the general public agrees on the risks being significant and supports regulation, and policymakers are now talking about AI a lot, and there are some moves in the direction of taking the race dynamic seriously." Next, of course, is the technical tooling: we need to make rapid technical progress in safety and policy research, including alignment and interpretability, dangerous capability and proliferation evaluations, and proof of learning, so that we have the tools at the right time to ensure that risks are managed appropriately. If you aren't familiar, when I talked earlier about extinction, this isn't some sci-fi-level thing. We have to understand that if there is a model or entity that is 10, 20, or 100 times smarter than we are, the ability for it to do something completely above us is going to be easy. I always like the example of how we build highways and destroy nature in the process. Try

### Alignment Challenges [9:09]

explaining to an ant, try communicating to an ant, why we need to build a highway and why the anthill has to get destroyed in the process. An ant couldn't fathom what a highway is, nor could it fathom what economics is, nor could we even manage to communicate our goals and intentions to an ant or a bee. The whole point here is to imagine us as the ants and a superintelligent AI as the humans in that scenario: how on Earth would it even communicate its goals, and it would do things that are beyond our understanding. One of the things that is really important is of course interpretability research, and of course alignment. Interpretability is essentially where we can actually understand exactly what is going on in the model. It says here that models are interpretable when humans can readily understand the reasoning behind the predictions and decisions made by the model. You can see in this example, it basically shows how one is interpretable and one isn't. Often we develop models that are black boxes, meaning we know the output but we don't know what the model is doing on the inside. That means the model isn't interpretable at all, and we don't know why it's doing what it's doing. That is of course pretty dangerous: if you don't know why a model is outputting a response, even if the response is good, that could lead to a whole host of issues. Imagine it's picking things via a decision process you have no insight into, and then when you deploy it, it starts making random decisions based on different

### Regulatory Infrastructure [10:41]

factors. But if you can look inside the model and see that the reasoning steps behind its decisions are correct, then we can truly understand what's going on inside the model and get the right output without any catastrophic risks. Of course, once again, technical tooling for alignment is a giant problem: having a system that is much smarter than you be aligned to your goals and have your best interests at heart is going to be pretty hard. We are making progress on alignment, but I do remember that there was unfortunately news that the team focused on superalignment, which is aligning superintelligent systems, the team led by Ilya Sutskever and Jan Leike, was disbanded by OpenAI. Ilya Sutskever had issues with OpenAI and has gone off to start his own artificial superintelligence company, and Jan Leike was basically stating, "Look, we're not getting enough computing power to even run the tests and evaluations that we need, and this clearly isn't a priority for OpenAI, so I'm going to leave." Another member of the superalignment team was fired, so it eventually just disbanded. This is of course bad news for people who care about AI safety, because if you're thinking we need to understand these incredibly smart systems we're building, and you start disbanding the teams that are meant to do that, then from an alignment perspective this isn't that great. And I

### Societal Resilience [12:21]

would say, I do also remember that the team that was meant to do safety research on OpenAI's o1 was only given nine days to conduct safety evaluation tests, so I would definitely say this is an area where OpenAI is lacking. He does give this a grade C, and he says a fair amount of people are working on this, but more people should be, and a lot of investment seems inefficient: people starting random AI safety startups with unclear theories of change, with management and fundraising overhead. It's definitely something that right now isn't getting a lot of funding; if you're in the AI startup space, the majority of companies getting funding are ones developing a product that's going to be used by millions of consumers. And you have to remember that these grades were given last year. Of course we've got regulatory infrastructure, which is where you incentivize sufficient safety by regulating high-stakes development and deployment via mechanisms like reporting and licensing requirements and third-party auditing of risk assessments, all of which basically ensures you have a comprehensive system where the entire environment for producing AI systems is one that doesn't result in catastrophic loss of life or any catastrophic events. For example, if we look at the regulatory bodies in the aviation industry, we know how strict those bodies are: for a plane to pass all of those inspections, the failure rates for certain parts must be incredibly small, which is essentially why traveling in an airplane is statistically the safest way to travel. You're going to need to construct a similar environment in artificial intelligence, but the main

### Risk Analysis [14:00]

problem that we've seen these companies face is that doing so could potentially hamper their ability to develop systems at the level where they are quite useful. Often we've seen that when there is tough regulation on these models, the products just become pretty much useless, as they refuse to do absolutely anything, which I guess defeats the entire purpose. But of course he does say that in recent months policymakers have started paying attention to the issue, and it remains to be seen whether the AI Act is going to be something that actually works. Of course we do have the issue of societal resilience, which is where we need to ensure that society is ready and humans have the ability to navigate this turbulent period. You have to remember that multiple categories are going to change all at once, on a dime; we're truly going to be entering a new era. Of course there are things like proof of humanity, and he says why it's necessary: society could be captured by a small group of humans and many copies of AIs by creating the false impression of consensus; reality could splinter into many echo chambers and decision-making in a crisis could become impossible; there could be civil wars and the toppling of legitimate governments due to economic churn and political misuse of generative AI; and AI could enable totalitarianism. This is pretty dangerous, because these technologies enable so much good but also so much bad. Interestingly enough, you can see that he's given society a grade F, and this was last year, and I don't think there's been that much improvement in the space at

### Post-AGI Economics [15:37]

all. You can see it says we already have deepfakes and TTS (text-to-speech) scams, and these things are pretty bad, because when you have a few people who know how to use these AI tools, let's say someone is able to send out a hundred AI agents that are pretty humanlike, out there scamming on your behalf: they can replicate someone's voice, they can replicate their likeness. It's stuff that is truly concerning. And of course you can see here it says that the politics of AI job displacement are about to blow up, and policymakers and even most people in AI don't really appreciate the extent of this. This is something I've talked about on my channel for quite a while: people aren't thinking about post-AGI economics, like how society is going to function once technology reaches a certain level. There are so many issues stemming from this. Of course you have the idea that we could provide some kind of system where everyone is registered with a human ID, which is Sam Altman's World ID, essentially trying to ensure that when you're on the internet, as long as you have your World ID, you're verified as a human. But it also seems pretty dystopian, and even on the video I released about it, I remember most of the comments were saying, "Look, I don't care if Sam Altman doesn't have my data; there's no way I'm scanning my eye into an orb and being registered on some futuristic dystopian system." So there's just a variety of different issues arising all at once. And you can see here he says that, as a side note, "I think that AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense." He's basically saying that AGI isn't something where you wake up one day and boom, we've got superintelligence or AGI; it's going to be a gradual change that allows for increasingly complex systems. That means

### AI Levels [17:20]

we need to ensure that we're keeping up to date with where these current systems are. You can see that he says one of the things his team has been working on lately is fleshing out the levels-of-AI framework referenced here, which is of course OpenAI's levels of AI. I'm going to actually go over it now; this is something that went viral. We can see that we've got level one, chatbots; level two, reasoners; level three, agents, which we're slowly creeping up on; level four, innovators; and level five, organizations. So this is how things are going to progress; it won't happen in one day, although there will be breakthroughs that push the area forward. Now, one of the things he says here is really important: "I think AI and AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision-makers in governments, nonprofits, civil society, and industry, and this needs to be informed by robust public debate." This is something I've actually talked about a lot. Everyone thinks that we're going to get AGI and the world is going to be good. Guys, look at our current society: we have homeless people and we have billionaires living potentially in the same town; you've got multimillionaires, and yet you've got homeless people in those same cities. Just because technology exists doesn't mean it gets distributed equally. While yes, it is fun to think about a utopia in which everyone is in a better position than they are now, unfortunately reality doesn't always work like that. You can see here he says this is not just true for risk mitigation but also for ensuring equitable distribution of the benefits, as was the case with electricity and

### Policy Response [19:07]

modern medicine. And this is true: government policies helped ensure that railroads and electricity got distributed widely, and you have to ensure that this technology gets distributed evenly too, because if it doesn't, then you're just going to be living in a dystopia where one half is a super-advanced society while the other half is living in poverty, which is kind of what we've got now, to an extent. Now, one of the things that's going to make you realize exactly how bad this situation could be is what he says here, and he gives a great example. He says that AI capabilities are improving very quickly, so policymakers need to act more urgently. This is one of the things I agree with: policymakers aren't acting quickly enough in certain areas. He says, "This is one of the areas where I'm most excited to be independent, since claims to this effect are often dismissed as hype when they come from industry." That's true: if you're working at OpenAI and you say AI is going to be here in the next two years and it's going to be able to do X, Y, and Z, people are like, "Ah, you work for OpenAI, that's a marketing gimmick, you just need more billions of dollars from investors," yada yada. But if you're an independent researcher from a normal research organization that doesn't really need to say any of this, it's a lot different coming from someone like that. What he says here is that most policymakers won't act unless they perceive the situation as urgent, and so far that hasn't generally been the case, though it could be in the future. I didn't really get this at first, but look at COVID: there were many warnings about pandemic preparedness, and we were warned multiple times. I think there was even a TED Talk by Bill Gates stating that if the world gets into a pandemic, we do not have the infrastructure we need to deal with it at all: we've got planes going everywhere, we don't have any kind of screening at airports, we are just completely screwed if a pandemic happens. And then five years later we got a pandemic, and it completely shut the world down. With that being said, if we get into a situation where AI is able to do the same kind of thing, and there are a lot of different risks that come from AI, I think we have to understand that the writing has been on the wall for

### Safety Funding [21:12]

quite some time. One of the things he says that's really important is that "quantitative evaluations of AI capabilities, and extrapolations thereof, in combination with analysis of the impacts of certain policies, will be critical in truthfully and persuasively demonstrating that urgency." He's basically saying we need to really flesh out exactly where this is going, understand the impacts of certain policies, and tell these guys, "Look, if we don't do X, Y is most certainly going to happen," so we need certain policies in place to mitigate the consequences as soon as possible. And the problem, he says, is that "I don't think we have all the AI policy ideas we need; many of the ideas floating around are bad or too vague to be confidently judged," for example a "CERN for AI," or racing against a competing country as quickly as possible, which is where people say we need to race ahead of China because they might do something with the technology and we need to make sure we're ahead. All of these ideas are pretty vague, is what he's saying. One thing he does say is that we need to start quickly in some of these areas, and one of them is of course the US AI Safety Institute, which Congress should robustly fund so that the government has more capacity to think clearly about AI policy, along with the Bureau of Industry and Security. Now, one of the most fascinating things he says here is about the assessment and forecasting of AI progress, because he calls it one of the key foundations for thinking clearly about the rest of the topics below. Of course, if you can successfully forecast the abilities of AI, you can then make the other areas a little more successful, because you're going to truly understand what you need to mitigate or prepare for. And he's basically stating, and this is rather fascinating because I wouldn't have thought it, that he has a strong sense that "feeling the AGI" is easier to achieve in industry, but he doesn't know why exactly, given that there isn't a large gap between what capabilities exist in labs and what is publicly available to use. So

### Risk Timeline [23:14]

that is pretty crazy. He's saying that currently labs might not be as far ahead as we think they are, and this is coming from someone who worked at OpenAI, so it would be interesting to get more information about that; I'm guessing it means we are probably only about a year behind what these companies have and are currently training. And of course you can see here he says that improving frontier AI safety systems and policies is something that is quite urgent, given the number of companies, dozens, that will soon, in the next few years at most, have systems capable of posing catastrophic risks. He says, "Given that there is not much time to set up entirely new institutions, I'm interested in the opportunities for action under existing legal authorities, as well as shaping the implementation of already-approved legislation such as the EU AI Act." Basically, there are only a few years before dozens of companies have access to crazy capabilities. Incredible stuff, honestly. Now, one of the things I think most of you should be paying attention to, because I think it directly affects you, is the economic impact of AI, which he thinks is likely in the coming years, not decades. AI could enable sufficient economic growth that an early retirement at a high standard of living is easily achievable, but this assumes appropriate policies to ensure fair distribution of that bounty. And he says that before we get to that stage, there will likely be a period in which it becomes easier and easier to automate tasks, which we literally just saw with Claude, and, "In the near term I worry about AI disrupting opportunities for people who desperately want to work,"

### Work Impact [24:54]

and I think that's one of the things most people don't realize: a lot of the opportunities we have to work are going to be eaten up by AI technology. For example, think about jobs like graphic design and writing, and even side jobs, like people who drive for Uber or do taxi driving for extra cash. In the future, a lot of those small opportunities to make an extra side income are going to be completely gone due to the rapid proliferation of AI. You don't need to hire a writer anymore, you don't need a voice actor, you don't need someone to drive you around, you largely don't need customer service in those areas, and a lot of the lower roles in many different companies are going to be completely gone or outsourced to AI. This is quite bad, because it means that those who desperately need work in those areas aren't going to have much to do, or might need to find work in other industries. Now, what's crazy is that he basically says that, once again, people could retire early, and that removing the obligation to work for a living is one of the strongest arguments for actually building AI and AGI in the first place. He says some people are going to continue to work in the long term, but the incentive might be weaker than before, and whether this is true depends on a variety of cultural and policy factors. This is of course something we need to prepare for. And he says here, and this is actually rather fascinating, because I'm pretty sure most of you know about this movie, that a naive shift towards a post-work world risks civilizational stagnation. See

### Societal Stagnation [26:22]

WALL-E. If you're not familiar with WALL-E, I'm going to show you what that is. Now, I know this might be pretty hilarious to you all, but there is this movie called WALL-E, and in this movie you've basically got this small robot, and you can see this robot right here. This robot is essentially a little helper; it's trying to find the first signs of life back on Earth so that it can send the people back. But that's not the point here. What I'm trying to say is that in this movie, technology is so advanced that people are just literally these lazy slobs who don't do anything, because advanced technology simply does everything for them. And I know it might seem completely hilarious to think about a future where everyone is just overweight, but imagine a hundred years from now: let's say AI and ASI can do absolutely everything, we've got humanoid robots, and most people don't want to work anyway. Are we going to reach a point of civilizational stagnation where people are just enjoying the fruits of their labor? It's an interesting, thought-provoking question that I never really considered until this blog post. And of course, this is where we talk about the gap starting to appear, and this is actually rather concerning. He says that there will likely, by default, be a growing gap between free and paid AI capabilities, and I think this is really true because of the new compute paradigm. He says, "There was a brief period in which these were exactly equivalent, other than rate limits, namely when GPT-4o was the best available paid model and was also available to use for free, but that era has now passed and will soon be a distant memory," as companies try to leverage test-time compute for those willing to pay more. "I'm not saying that

### Capability Gaps [28:03]

there should be no disparities, but that we should be thoughtful about what the right level is, and whether we might be sleepwalking into a world of cognitive haves and have-nots," which is quite concerning. He's basically saying that in the future, if this entire paradigm is based on test-time compute, it's going to come down to who can pay more for the more intelligent model, and if you have more money to access a more intelligent model, you're essentially going to get more done. Because it's not a technology like electricity, where everyone gets the same base level, the more money you put in, the more intelligence you get out of the model. And if that paradigm of scaling holds true, it means billionaires and multimillionaires will have access to cognitive gods that exist in data centers, while those who don't have the money to pay for that compute won't; it's going to be a pretty tough time if you're on the side of the cognitive have-nots. So this is something that we might unfortunately be sleepwalking towards. Overall, he's basically saying that there needs to be a debate about the big picture of how humanity will ensure AI and AGI benefit all of humanity, and the current options on offer aren't very compelling; many are too vague to actually evaluate. So having read this blog post, let me know what you think. Do you think people are ready for the future of AGI? I personally think people aren't ready for the future of AI. I even launched a private community where I talk about the fact that we actually need to get ready for AI, because I don't think anyone is truly prepared for the impacts, and that's something I work on with my group, where we focus on how to benefit the most from the transition. The TL;DR of what that group is trying to do: you need to make sure you have the right investments, so that when AGI is popping off and these companies are barreling forward to insane valuations, you've at least got some of the right assets, and you of course need to make sure you're saving wealth, because in the future I think it's going to be a really strange and scarce time as we move into the post-work world, and it's not exactly clear how people will bring economic value to the world. Long story short: if there's an AI robot in a data center that can do ten times the work you can, 24 hours a day, at ten times the speed and with ten times the intelligence, how on Earth do you compete? That's something I'm thinking about for the future. With that being said, let me know your thoughts and theories down in the comments section below. I'd love to see what you guys think, but as always, have a wonderful day.
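As an addendum to the interpretability discussion in the transcript, here's a toy Python sketch (my own illustration, not from the video or Brundage's post) of the difference between an interpretable model, where every decision comes with a human-readable rule, and a black box, where the only "explanation" is an opaque score. All names, rules, and weights below are invented for the example.

```python
# Toy contrast: an "interpretable" model explains each decision with a rule;
# a "black box" returns only an uninterpreted score. Everything here is
# invented for illustration purposes.

def interpretable_predict(income: float, debt: float) -> tuple[str, str]:
    """Rule-based loan decision: the second element explains the first."""
    if debt > income * 0.5:
        return "deny", "rule: debt exceeds 50% of income"
    if income >= 50_000:
        return "approve", "rule: income >= 50,000"
    return "deny", "rule: income below 50,000"

# The "black box": fixed weights with no human-readable story behind them.
WEIGHTS = (0.00003, -0.00007)

def blackbox_predict(income: float, debt: float) -> tuple[str, str]:
    """Same task, but the only 'explanation' is an opaque score."""
    score = WEIGHTS[0] * income + WEIGHTS[1] * debt
    return ("approve" if score > 1.0 else "deny"), f"score={score:.2f}"

if __name__ == "__main__":
    print(interpretable_predict(60_000, 10_000))  # the rule tells you *why*
    print(blackbox_predict(60_000, 10_000))       # the score tells you nothing
```

Both functions may agree on the outputs, which is exactly the point made in the transcript: agreement on outputs doesn't tell you why the model decided what it did, and only the first kind of model lets you audit the reasoning before deployment.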
