Sam Altman Just Revealed MORE DETAILS About GPT-5 and AGI
21:38


TheAIGRID · 22.01.2024 · 86,215 views · 1,509 likes


Video description
💬 Access GPT-4 ,Claude-2 and more - chat.forefront.ai/?ref=theaigrid 🎤 Use the best AI Voice Creator - elevenlabs.io/?from=partnerscott3908 ✉️ Join Our Weekly Newsletter - https://mailchi.mp/6cff54ad7e2e/theaigrid 🐤 Follow us on Twitter https://twitter.com/TheAiGrid 🌐 Checkout Our website - https://theaigrid.com/ https://www.youtube.com/watch?v=QFXp_TU-bO8&pp=ygUVU0FNIEFMVE1BTiBEQVZPUyBUQUxL Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos. Was there anything we missed? (For Business Enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (5 segments)

Segment 1 (00:00 - 05:00)

So in a recent interview, Sam Altman let some new details about GPT-5 emerge, and it leads us to believe that GPT-5 is going to contain more capabilities than we originally thought. In this video we'll dissect the Sam Altman interview, because he touches on some rather fascinating points that many people hadn't discussed before. Without further ado, let's dive into the first clip, because it is really interesting:

"Look, when we decided to launch ChatGPT, we thought it was going to do well, but we had no idea how well it was going to do. We knew they were going to be great eventually, but we didn't think they were good enough to resonate like ChatGPT and GPT-4 have. This technology, even with all of its current limitations, is far more useful than we thought, and can integrate into our lives in a much more valuable way than we thought. Now that we know that, as we think about launching the next, much better models, we come with a different perspective. I learned something about important and urgent problems, and not letting the important-but-not-urgent ones fester."

What I found fascinating is that this clip was circulating on Twitter, and I think people missed the mark on what Sam Altman meant, because there have been some key changes in how Sam talks about the AI — and when I say "the AI," I mean GPT-4 and GPT-5. I think what Sam is trying to do is downplay GPT-4's capabilities. The reason I think he's doing this — and this is just a hunch, but hear me out — is that Sam Altman knows these open-source companies are catching up to GPT-4's level: Meta has publicly said they're going after the GPT-4 benchmark, and Google has previously gone after that benchmark with Gemini. So when he frames it as, say, GPT-2 was good and GPT-4 was okay, he's basically saying GPT-5 is going to be pretty incredible.

Of course, something to remember is that it might be a GPT-4.5 release, because I believe that if these systems are as good as some of the internal people have claimed, the shock would be a bit too much for the industry — and essentially that would mean they might face scrutiny from regulators and AI safety teams, because with every iteration of these models there's a significant jump in ability, and also emergent capabilities. I'm going to play this bit again, because he says "even with all of its current limitations." Think about it like this: if you're the CEO of an AI company, you're not going to be out there publicly dwelling on your model's limitations — but GPT-4 does have limitations, and remember, this is a model that has been out for quite some time. So the fact that he says it's far more useful than they thought, and that "now that we know that, as we think about launching the next much better models, we come with a different perspective," is why I say this is pretty different and going to be more fascinating.

In a recent video we also talked about the fact that Sam Altman, when he spoke with Bill Gates, essentially said GPT-5 might be a more personal AI system. If you don't believe that's the route they're taking, you may have missed the quiet memory update. If you don't know what that is, I'm going to show you a quick screenshot. This screenshot is not a leak, it's not false, it's not fabricated — it's 100% real, because even Greg Brockman has tweeted about it, and OpenAI employees have tweeted about it numerous times. Essentially what we have here is the memory update: your GPT can now learn from your chats to keep the conversation going. It carries what it learns between chats, so even if you start a new chat, it knows what happened before. It also improves over time, which means it remembers details and preferences — how you like to be spoken to about certain things — and of course you can reset your GPT's memory and custom instructions.

It just goes to show that what Sam Altman and OpenAI are planning for the future is more personal AI systems, because I do think a personal AI system is going to be more useful and get more engagement from the user. When people saw how effective the Rabbit demo was — an AI agent personally customized to you — and how fast Character.AI is taking off (it's slowly catching up to ChatGPT's user numbers, and all it is is a website where people can talk to AIs), I think people realized the appeal of a personal AI. I also think Siri, Alexa, and Google's home assistant have missed the mark by failing to catch up to the state of the art. So with this memory update for ChatGPT, and with Sam Altman saying that in future models they're going to focus on personalization and on an AI system that helps you out, I think that is going to be really

Segment 2 (05:00 - 10:00)

where the key focus is now. There was another clip in the interview that was rather interesting:

"GPT-2 couldn't do very much. GPT-3 could do more. GPT-4 could do a lot. GPT-5 — or whatever we call it — will be able to do a lot more. And the thing that matters most is not that it can have this new modality or solve this new problem; it's that the generalized intelligence keeps increasing, and we find new ways to put that into a product and use it. That is the high-order bit. What dominates everything else in importance is that the overall capability of the model — its overall intelligence, its ability to do longer, more complex problems, more accurately, more of them — is increasing across the board. And that, to me, is one of the few things that makes this totally different from any previous kind of technology."

So you can see right there he spoke about GPT-2 doing very little, GPT-3 a decent amount, GPT-4 a lot, but GPT-5 being able to do a lot more. Publicly saying GPT-5 will be able to do a lot more is notable, especially if you've looked at the recent benchmarks. I wouldn't say these models are capping out at their capabilities — capping out would be 100%, and the last time we checked, I think the best MMLU benchmark score was 87%, which is very good, and of course Google reported 90% on Gemini Ultra. The point Sam Altman is trying to make is that these models are going to become more accurate — fewer hallucinations, more effective at retrieving information from documents and long pieces of text — and all of this is going to keep improving as they move forward. With that generalized intelligence, and models being that smart, I think it's really going to usher in a next level of AI. Like Sam Altman said, that's what they're focused on — not the new modality, which could be photo or video, but having a model that is really smart, with that general intelligence. Remember, what Sam Altman has said is that they are trying to create artificial general intelligence, and if you want to get to AGI you will of course need a system with different modalities — but you will also need a really sophisticated system that doesn't fail as much as these systems do. They don't fail that often, but the instances where they do mean they can't be used in certain industries, and that is a hurdle they need to get over. So, really fascinating stuff from Sam Altman: discussing the capabilities of future models like GPT-5 — or, as he said, whatever they might call it — goes to show that we may be in for a very good shock in terms of the model's capabilities. I would presume they'll release GPT-4.5 just after the Gemini release, because once Google gets the spotlight, OpenAI will quickly take it back, and then Google will have another mountain to climb.

Now, Sam Altman also talked about something I thought was rather fascinating, because I think people were going a bit too crazy about it. The interviewer asked him what's going on with military use, and in this clip Sam basically says: look, I don't think it's wrong for the military to be able to reorganize some files using our LLMs if they want to. I don't know why people were going crazy over that — as always, sometimes people run away with a narrative, and I think this clip clears it up. But like I said, it's interesting to see how he navigates the question:

Interviewer: "You made a change that allows OpenAI models to be used by militaries. Why did you make that change?"

Altman: "What we tried to do is say here's what you can't use the models for, rather than here's this group of people that can't use them. There were parts of the Department of Defense that had, I think, super legitimate, super good use cases for our models, and a blanket rule saying anyone whose address ends in .mil can't use this — I think that's bad. I'm a very proud citizen of the US; I'm a huge supporter of liberal democracy continuing to do well. We don't want our models being used to make kill decisions, of course, but there's a lot of other stuff the military does that's quite important."

Interviewer: "You've got some great use cases that you talked about — helping prevent suicide by veterans; obviously, if someone giving a speech at West Point wants to translate it into Swedish, that's probably not a high-risk use. Then you have stuff at the other end — developing a nuclear weapon is clearly against your policies. But there's so much in the middle that is not building a bomb and not destroying property — another use that's not allowed — but that could still be really harmful. These AI engines are incredibly capable persuasion engines. How do you draw the line of what a military can and can't do, and do you think that's the right place?"

Altman: "One of the things we believe very deeply is that society and this technology have to co-evolve. We believe in iterative deployment a lot, for the obvious reason, which is that people need time to gradually update and think and figure out what the rules should be. But there's another part—"

Segment 3 (10:00 - 15:00)

So, that part right there — the reason I stopped the clip — is because, like I said, they don't want to shock the entire industry, or the entire population, with a crazy new model that can do a hundred million different things. I know that's a vast exaggeration, but the point I'm trying to get you to understand is this: if OpenAI came out with GPT-5 tomorrow, and it was an AI agent that scored 90% on all benchmarks, I think people would be freaking out rather than being astounded by how good the model is, because people would realize just how quickly we're accelerating toward a point where humans are no longer needed. That wouldn't spark inspiration; it would spark fear, because the timeline would be too quick for humans to wrap their heads around the fact that in just a couple of years they might not be needed for any real tasks. So what Sam Altman is saying here is that humans do need time to adapt — you need to be able to process the emotion and feeling of societal change; it can't just happen overnight, because that isn't good for anyone. And I'm pretty sure they know and understand this. Even last time, when GPT-4 was released, there was a huge open letter that many industry figures signed — Elon Musk, Steve Wozniak, and others — because they were like, whoa, we went from GPT-3.5 to GPT-4, and that was a huge increase in such a short amount of time; we need to slow down and pause releases of anything beyond GPT-4. So Sam Altman is basically saying: look, we need to co-evolve, and of course iterative deployment — which essentially means the next model is likely to be GPT-4.5, and after that it could be GPT-5.

And — don't quote me on this — I do think they might limit some of the capabilities, and maybe that's why they nerfed GPT-4's capabilities: the adjustment period is something that would be better for humanity, in the sense that people do need time to adjust. If we go from GPT-3.5 to GPT-4 and people are freaking out, and then we go to GPT-5 and GPT-5 is just crazy, people are going to be like, oh my God — do you know what I mean? So iterative deployment is definitely something they're leaning toward, and giving society the ability to co-evolve with the software and make that transition is a very good point. Here's the rest of the clip, because it's rather fascinating:

"Society and this technology have to co-evolve. We believe in iterative deployment a lot, for the obvious reason, which is that people need time to gradually update and think and figure out what the rules should be. But there's another part, which is that you can't separate the technology from the world. Even if you get everything magically right, you can't build it in secret and then put it in the world all at once, because the world is going to keep changing with each iteration. Which means, on those middle cases, I don't know what the right answer is yet — nor does anyone — because no one has really thought through and seen how the institutions and the world and society shift and reshape in response to this. So there will be a lot of things we'll have to start slowly on and iterate as we go, and there will be a lot of middle cases. But we do want to support the US government — and other governments too. And I find it odd that everyone thinks this is a big gotcha question, where they're going to say, 'Wait, are you saying you support the US government?' and my answer—"

The clip did get cut off there, but essentially Sam Altman's answer is yes, of course he supports the US government — he's a US citizen, so I think that's a given. Does he support everything the US government does, and every political move? I don't think anyone does, so let's not get into politics; we're staying on the topic of AI technology. But like Sam Altman said, society shifts, and they're going to start slow. Like I said, I think they might even drip-feed us some of the capabilities, because in some cases that's the best approach — as Sam Altman said, they want the ability to get everyone up to speed as things change. It's not like one day you have GPT-4 and the next day everything is automated; I don't think that's how quickly things will go. But the way society reacted to GPT-4 does tell us something, and the fact that he's now emphasizing the need for iterative deployment suggests they did make a giant leap — because with Q* and all the leaks, and the fact that Sam Altman called it an unfortunate leak, it does look like they're moving faster than they're letting on. Think about it like this: if you're moving really fast and you don't want your company to be regulated into the ground, you're definitely going to go in small steps, because you want a gradual transition — that's good for everyone. But if you're moving slowly, you could just release the next model and you'd be moving gradually anyway. So this does show us that AI is moving rapidly, and

Segment 4 (15:00 - 20:00)

that's, of course, just the inherent nature of the field — and, of course, OpenAI now allows its models to be used by the military. Then there was something else that goes to show how big OpenAI could become on the back end: in four to five years, if this thing is built, I think the company is going to be a monolith, because if they're able to get this chip effort off the ground, and it's as effective as hoped, it would really change everything. Take a look at this, because this is where they talk about raising money for other projects. Sam Altman basically said:

"What I was doing was an OpenAI thing. I think there was this misrepresentation — not hard to guess by whom — that I was off in the Middle East raising money for this chip effort and that it was somehow a Sam project. It was an OpenAI project that the board had decided was a clear strategic priority, and it's not a separate thing. I'm not going to go try to fight everything in the press, because you all will do your thing and I respect that, but that was wildly misreported. These are critical to OpenAI's efforts. I don't invest much in startups personally anymore. I do support the ones I invested in previous to this — in general, if there's a startup I supported and they're raising more money, I try to continue to support them — but I don't think of myself as an investor anymore. ... That is the whole reason we have the strategy of iterative deployment, and I think we've been right so far to do this."

I think we all need to pay attention to this, because one thing most people don't realize is that the current chips we have — the graphics cards and processors we use to run and train these models — weren't designed for AI. There are companies out there specifically designing hardware for AI. I honestly, for the life of me, cannot remember which company it was — there's so much AI news I look through — but one claimed it could get things going a hundred times faster, based on small, preliminary results. Long story short: if they're able to get training runs down from a couple of months to a matter of hours, that would be an unprecedented pace, with serious ramifications. I know that sounds like a dream and might not be possible anytime in the near future, but if we do manage to increase processing speed with AI-specific hardware, that would be really, really incredible — the breakthroughs are already happening in the research, but training runs are definitely something that take a long time, so this would be a major breakthrough. And like he said, this is an OpenAI priority, so it wouldn't be surprising if in the next couple of years we get a breakthrough in chip design that lets us do these training runs much faster and iterate even quicker.

And then there's this statement. Around the internet and the AI communities there has been a consensus that Sam Altman has recently been downplaying AGI. If you know what AGI is, you'll know it's set to be the most transformative technology of the decade. Like the tweet says: Sam signed a letter saying AI existential risk should be a global priority, and he also said "the bad case is lights out for all of us" — yet now he hints at AGI like it will be a nothing burger, and it's curious how closely his words match whatever is most convenient at the time. I think what Sam Altman is doing here — and this is pure speculation; we have no idea why he says what he says, and there could be a variety of reasons — is that when you're closest to the model, you don't want to keep stating that the bad case is lights out for all of us, that this is an existential risk. In the clip — I'm not sure if I'll show it here — he said that when we have AGI, the world will have a two-week freakout and then people will go back to their lives. But before, he was basically saying this is going to be a really transformative technology. Either way, I think AGI is going to impact everyone: if the company actually succeeds at building an actual AGI — one that works, that can do research and everything DeepMind's classification says a "competent AGI" could do, better than 99% of humans at tasks — that will change the world fundamentally, and I don't think we go back to a world without it. So I'm not sure how people are just going to go back to their lives, because their individual lives and sense of purpose will be changed — I'm not sure how that's going to work. But I do find it fascinating and captivating that he actually said that; maybe he's just downplaying it, I have no idea. There was also a comment from another YouTuber I watch who makes really riveting stuff, and it's worth a look because it's rather important: David Shapiro, someone who creates some of the best AI content on the web,

Segment 5 (20:00 - 21:00)

essentially talked about how Sam Altman said AGI is coming soon but it won't change the world much — and that really just doesn't make sense. He says: "I don't know — to me that's like saying that the invention of electricity or steam power or the printing press didn't change the world much." His personal read is that either (1) he's toning down the rhetoric since the OpenAI drama — a few months ago he was, of course, fired and then rehired, and maybe the PR team is telling him not to spout off about AI doing crazy stuff (and to be fair, many news outlets will run with any narrative that AI is going to destroy us, so it is wise not to go down that route at all); or (2) he's finally realizing the massive, rapid implications of what they're doing, they have no idea how much it's going to change the world, and they are legitimately worried — hence the firing and the former board's attempt to sink OpenAI; or it's some combination of these and other factors. He then asked, "Do you think AGI will change the world much?" — and in the poll, most people do think AGI will change the world. I mean, by definition it will: if you've seen DeepMind's classification of what they consider AGI, then AGI — if it's real, if it's effective, if it can actually be done — will most certainly change the world. But I think Sam Altman is right to say what he says, because companies and media will honestly run with it, and we don't need another PR disaster or OpenAI board drama. It will be interesting to see whether he changes his tone in the future and how certain things get interpreted. I'm really excited for GPT-4.5 or GPT-5 — whichever one is released — and let me know your thoughts in the comments, because we are definitely moving very fast into an uncertain future, and it's important to pay attention.
