Elon Musk's STUNNING New AGI Prediction
17:44


TheAIGRID · 26.03.2024 · 37,494 views · 992 likes


Video description
How To Prepare For AGI - https://youtu.be/LSXpZmo7_Tg 🐤 Follow Me on Twitter https://twitter.com/TheAiGrid 🌐 Check out my website - https://theaigrid.com/ Links From Today's Video: Elon Musk's STUNNING New AGI Prediction (AGI By 2025). Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For business enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (4 segments)

Segment 1 (00:00 - 05:00)

So there was a recent prediction made by Elon Musk that was actually quite shocking in terms of what he predicted for the future of artificial intelligence and AGI, so we're going to take a look at that prediction, dissect it, and cover absolutely everything going on. The prediction Elon Musk made was that we could have artificial general intelligence sometime in 2025. Now take a look at this clip, because I'm then going to dissect his statement — he not only says AGI by 2025 as a possibility, but he also says some other things as well:

"When things are changing rapidly, the ability to predict the future, I think, becomes a lot harder, because the rate of change is so great. But I think some things are fairly obvious to predict, which is that we'll have AI that's at a level where it can really do almost any cognitive task — I think really not almost, really any cognitive task. It's just a question of when. One could debate: is it smarter than any human at the end of next year, or is it two years, or three years? But it's not more than five years, that's for sure. And I'm sort of giving predictions at the 50th percentile of probability — so not that it will definitely happen, but if you ask me what's the 50th percentile, where your over/under is kind of even — that's why I think it's probably the end of next year before AI can do better than any individual human could do. But it's a much higher bar to say, is it smarter than human intelligence collectively? If the rate of change continues, that's why I think probably 2029 or maybe 2030 is where digital intelligence will probably exceed all human intelligence combined."

Yeah, that clip was rather fascinating — Elon Musk stating that by next year, 2025, and that is not far away in AI terms, we could really get an AI system that is better than every human at every task, at the general level. Some people are taking the statement as "it's not by next year" and focusing on the latter part, where he says it could be within the next three or four years; what he definitely did say is that it's definitely not going to be more than five years. Surprisingly, I did see a lot of different comments to Elon about this on Twitter, which I'll show you in a moment, but once we take a look at some of the other statements by industry leaders, I think you'll find that Elon Musk's statement isn't actually shocking anymore — his is actually pretty conservative. Saying we could get AGI by 2025, 2026, or 2027 — which is only three years away — is a really short amount of time to conceptualize given the amount of change that would occur. I don't think that is a surprising statement; what I rather think is surprising is the way the world will change after this technology is made a reality. He also said something at the end: that 2029 is the stage where digital intelligence would exceed all human intelligence combined, and to be honest with you guys, that lines up with what other people are saying as well. Even recently, Sam Altman said in an interview that he expects AGI to be a system that's capable of delivering new research — that was his definition — and I think that's important to understand, because if we're going to get there by 2029, that means that far before then we're going to be getting AI systems capable of really high-level tasks: high-level abstraction, high-level reasoning, and a whole bunch of different things that are going to really shake up many different industries. That's why this one was getting a lot of attention.

Like I said, this video is going to dive into four of the main statements Elon Musk has made, and this is the first one, but I also wanted to include what some of the industry insiders think as well. Nvidia's CEO said it's five years away, which would mark it at 2029 — that's when he says AGI is going to be there. Shane Legg from Google DeepMind — the co-founder and chief AGI scientist, the person who coined the term AGI — said there's a 50% chance that AGI will be developed by 2028, so that's only four years away, and that's a pretty big chance. In addition, Sam Altman also recently said that AGI could be reached sometime in the next four to five years, which is rather fascinating. And Mustafa Suleyman, now the CEO of Microsoft AI, says it's going to be within the next three years, stating that it's really possible and will become as ubiquitous as the internet. So I think the important thing from Elon Musk's first point is that the next three to four years are going to really

Segment 2 (05:00 - 10:00)

be monumental in terms of the amount of societal change we're going to see, because AI is going to be the backbone of potentially the next economic revolution — in terms of how value is created and how value is distributed. I think this shouldn't be understated at all, because this is something very important for people to pay attention to; the ramifications might be good, they might be bad, but the next four years, with that kind of system, I think are going to be a very exponential age.

Now, point number two was something I found pretty fascinating, because it just goes to show how crazy things are, even exceeding expectations. Point number two was Elon Musk discussing compute — compute being the hardware that powers AI — and he said this is increasing so fast that most people can't really conceptualize how fast this stuff is moving:

"I have to give credit to Ray Kurzweil for being actually remarkably accurate in his predictions — in fact, if anything, I think he was perhaps a bit conservative. If you look at the amount of AI compute, and the human talent that is going into AI, it appears to be increasing by a factor of 10 — the dedicated AI compute appears to be growing by a factor of 10 every six months. So it's basically close to, I'd say, almost a 100x improvement per year, at least for the next few years, in terms of AI compute coming online. And it seems like probably a lot of the data centers, maybe most, that currently do conventional compute will transition to AI compute."

So that's Elon Musk's statement on compute. There are actually two parts to the statement, and I want to show you the second part, because something recently happened, and it was fascinating how his statement was reflected in a recent announcement. Like I said, he's saying that the amount of compute coming online is just absolutely insane — increasing by a factor of 10 every six months, or 100x every year — in terms of the hardware coming online to power these AI systems. I don't think he's exaggerating, because we're now in the discovery phase where the broader population has realized the true potential of AI, and a lot of people are trying to get in on this because they're starting to realize the transformative nature of this technology and how much it will impact everything. Because of that, people are starting to invest more and take it more seriously: companies like Meta are shifting their entire baseline towards AGI, other companies are also focusing on it, and there's so much investment going in that the level of compute is increasing fast. And because the level of compute is increasing fast, the knock-on effects are going to be crazy as well. Now take a look at what Elon Musk says here, because this is a crazy statement:

"When you have that level of compute growth — it's sort of on steroids, next level, in terms of how much compute is coming online — then you're just going to have acceleration that is unprecedented. In fact, I've never seen any technology grow as fast as AI, and I've seen a lot. I've seen things move fast, but I've never seen anything this fast."

Him saying that is pretty crazy, because Elon has been in many different industries and seen a whole lot more than I have. And here's the point: this talk was actually recorded before Nvidia's GTC announcement, where they announced Blackwell, which shows a 1,000x increase in AI compute in just eight years — pretty incredible. Considering we've done 1,000x compute in just eight years, how much compute do you think we're going to be able to extract when AI starts improving itself — building chips that are more energy-efficient, smaller, denser, more compact, more efficient in every single realm? What will the graph look like — is it just going to go straight up, is it going to continue? So Nvidia showcasing this, and then Elon Musk, just a month before (the talk was recorded a month earlier and has only just been released), saying compute is increasing so much, and then Nvidia just goes ahead and releases this — this is pretty crazy, I've got to be honest with you guys. From what I can see here, this level of compute does look to be exponential — you can see 130 flops, 620 flops, 4,000 flops, 20,000 flops. It's pretty incredible what we've done with GPT-4 levels of compute, and that's why, if we extrapolate that out into the future, how crazy are these next-level AI systems going to be? I think it's very hard to conceptualize, and for someone like Elon Musk to say "I've never seen anything like this" is just a bit of an eye-opener, because it leads us to
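The two growth figures in this segment — Musk's "10x every six months" and Nvidia's "1,000x in eight years" — are easy to sanity-check with a little compounding arithmetic. A minimal sketch (the `compound` helper is mine, not from any of the sources, and it assumes clean exponential growth, which real hardware rollouts never quite follow):

```python
def compound(factor_per_period: float, periods: float) -> float:
    """Total growth after `periods` periods at `factor_per_period` per period."""
    return factor_per_period ** periods

# Musk's claim: dedicated AI compute grows ~10x every 6 months.
# Two 6-month periods per year -> 10 * 10 = 100.
per_year = compound(10, 2)
print(per_year)  # 100.0 — the "almost 100x per year" figure

# Nvidia's GTC framing: ~1,000x AI compute over 8 years.
# The implied *average* yearly multiplier is much smaller:
annual_factor = compound(1000, 1 / 8)
print(f"{annual_factor:.2f}x per year")  # ~2.37x per year
```

The gap between the two numbers is the point: 100x per year describes the recent fleet-wide buildout Musk is talking about, while ~2.37x per year is the average chip-level improvement behind Nvidia's eight-year chart.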

Segment 3 (10:00 - 15:00)

understand that all of these people have been in the game for much longer than any of us have been paying attention to it, and if they're saying we've never seen anything like this — that this level of compute is unprecedented — I think that's truly fascinating in terms of what the future is going to look like.

Now, statement number four was that Elon Musk also talked about his percentage of doom. If you don't know what that P(doom) phrase means, it stands for the probability of doom, and, as the name suggests, it refers to the odds that AI will cause a doomsday scenario. Essentially it's a scale that runs from 0 to 100: the higher your score, the more you're convinced that AI is not only willing to eliminate humankind but is actually going to succeed at carrying out the task. Basically, your percentage of doom is the probability you assign to artificial superintelligence ending humanity due to our inability to control it — like when we ask an AI to do something, it misunderstands the task, and it does something with adverse consequences we didn't want. It shouldn't be understated that Elon Musk has been talking about this for quite some time — he's been screaming from the rooftops about it since as early as 2015, as early as anyone can remember — so it's important to remember he's still basically saying this is an issue that needs to be solved:

"With the advent of superintelligence, it is actually very difficult to predict what will happen next. So I think there's some chance that it'll end humanity — I would probably agree with Geoff Hinton that it's about, I don't know, 10% or 20%, or something like that. But I think the probable positive scenario outweighs the negative scenario; it's just difficult to predict exactly. I think we are headed for — as I think is the title of your book — abundance as the most likely outcome."

So that's where Elon Musk finishes off the statement, saying we could be entering an age of abundance, which was something hard for some people to conceptualize. Obviously there are many different futures with AI. I've been discussing and planning a lot for the future of AI, because I feel like most people aren't really prepared for what's to come, whether this scenario turns out good or bad — we know there's a variety of problems that could arise from AGI, from companies becoming behemoths to the loss-of-meaning crisis, a whole range of things I generally want to be prepared for. But what Elon Musk is discussing here is that the probability of an AI going rogue and killing humanity is not 0%.

Now, there were also some other important P(doom) statements I wanted to show you. Number one: this is the CEO of Anthropic talking about his percentage of doom, basically saying the percentage is not zero, and that as these models become increasingly capable, they become harder to predict and harder to control. (Later on, I'm going to show you a statement from someone on the OpenAI team talking about future AI systems and what they think they're capable of.)

"When you think about percentage chance of doom — I've often said that my chance that something goes really quite catastrophically wrong, on the scale of human civilization, might be somewhere between 10 and 25%, when you put together the risk of something going wrong with the model itself, with people or organizations or nation states misusing the model, or it kind of inducing conflict among them, or just some way in which society can't handle it. That said, what that means is there's a 75 to 90% chance that this technology is developed and everything goes fine. In fact, if everything goes fine, it'll go not just fine — it'll go really great. If we can avoid the downsides, then this stuff about curing cancer, extending the human lifespan, solving problems like mental illness — this all sounds utopian, but I don't think it's outside the scope of what the technology can do. So I often try to focus on the 75 to 90% chance where things will go right, and one of the big motivators for reducing that 10 to 25% chance is trying to increase the good part of the pie. The only reason I spend so much time thinking about that 10 to 25% chance is, hey, it's not going to solve itself."

This is important because, if you don't know, Dario Amodei is the CEO of Anthropic, and he left OpenAI to focus on safety. These are some really smart people — they've managed to do something great with Claude 3 — and I think their company's ethos, their values, their constitutional AI and the way it works are really important in terms of developing things in a different way, because, like he said, someone has to be focused on safety —

Segment 4 (15:00 - 17:00)

that percentage isn't going to get solved by itself, and it's something they do need to work on. Of course, OpenAI are also focusing on superalignment — they have a team working on this, led by Jan Leike. There was a podcast in which Jan Leike talks about his percentage of doom, about GPT-5 and GPT-6, what they're looking for from those systems, and how they plan to solve the alignment problems in them:

"Don't obsess about whether you can align GPT-2 — let's work on aligning GPT-5, and then in collaboration with GPT-5 we'll figure out how to align GPT-6, and then in collaboration with all of them we'll work together to align GPT-7. That's kind of the basic idea. And you want to do this empirically — maybe you look at GPT-5 and you're like, well, the system still isn't smart enough. We tried this a whole bunch with GPT-4 — trying to fine-tune it on alignment data, trying to get it to help in our research — and it just wasn't that useful. That could happen with GPT-5 too, but then we'll be like, okay, let's focus on GPT-6. But we want to be on the ball when this is happening — we want to be there when this becomes possible and then really go for it. I think that's the much more important question to focus on. If you actually wanted to give a probability of doom, the reason it's so hard is that there are so many different scenarios for how the future could go, and if you want an accurate probability you need to integrate over this large space — and I don't think that's fundamentally helpful. What's important is: how much can we make things better, and what are the best paths to do that? I didn't spend a lot of time trying to precisely pin down my personal P(doom). My guess is that it's more than 10% and less than 90% — so it's incredibly important that we work to lower that number, but it's not so high that we're completely screwed and there's no hope. And within that range, it doesn't seem like it's going to affect my decisions on a day-to-day basis all that much, so I'm just kind of happy to leave it there. That's probably the range I would give you."

So that's where they talk about P(doom) on a podcast with Jan Leike — it's actually a really insightful podcast, and I'll leave a link to it in the description. It's around two hours, but it's honestly a fascinating insight into what's going on at OpenAI in terms of aligning superhuman systems. So what are your opinions on Elon Musk's four statements? Do you think we're going to be getting AGI by 2025, or is this just a bold claim? Do you agree with the industry leaders that we're basically going to be getting AGI by 2028 or 2029? Do you agree that compute is increasing at an unprecedented speed? And what is your percentage of doom — how large do you think the chance is that a superintelligent AI gets out of control and has some catastrophe-level effect on humanity? Let me know what you think, and if you enjoyed the video, leave a comment down below, and I'll see you guys in the next one.
