OpenAI Reveals Their Plans For 2025 In An Exclusive Interview...
Duration: 16:13


TheAIGRID · 03.11.2024 · 29,196 views · 619 likes


Video description
Prepare for AGI with me - https://www.skool.com/postagiprepardness 🐤 Follow Me on Twitter https://twitter.com/TheAiGrid 🌐 Checkout My website - https://theaigrid.com/ Links From Today's Video: Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For Business Enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (12 segments)

Intro

So Sam Altman recently had an AMA on Reddit, and there were ten significant things that I think you should all know, so let's not waste any time.

AI achievable with current hardware

The first thing is titled "wait, what?", because this is a wait-what moment. You can see here that someone asks: is artificial general intelligence achievable with known hardware, or will it take something entirely different? I think this is arguably one of the most important questions you could ask, because when we look at AGI, people say a lot of things about what we're going to need in terms of hardware. Some have said we're going to need biological hardware or quantum computing to achieve it, but Sam Altman clearly states: "we believe it is achievable with current hardware." Now, it's important to note that he says they *believe* it is achievable; that doesn't mean it is 100% achievable. But the implications of this are stark, because what it does mean is that they have a roadmap to AGI with their current hardware; it's not like they're thinking, okay, we're going to need new hardware to make that a possibility.

What I think this also means is not that we won't benefit from new hardware, because I was thinking: okay, so AGI can be achieved with current hardware, but that doesn't mean we can't improve it to get even faster. Think about what we've done with Transformers and how quick they are: we've improved the hardware to get more specialized AI chips, and now inference speeds are completely insane. So, AGI being achievable with current hardware: I want to say I'm a little bit surprised, but somehow I'm not that surprised. I guess this assumes the right algorithms and models are in place, and what it implies is that the bottleneck for AGI just isn't hardware, but rather advances in AI research, algorithm optimization, and perhaps data quality. Which means, of course, that OpenAI have probably seen some things and are thinking: you know what, hardware isn't the issue; maybe it's just our algorithms, and maybe we just need to scale this up, fix a few issues here, and things are going to get really crazy. But of course, once AGI is achieved with current hardware, AI-specialized hardware is probably going to speed things up 10 to 20 times, so the future is probably going to get pretty crazy a lot faster than we initially thought.

o1 vs o1-preview

Number two was, of course, o1 versus o1-preview. I'm pretty sure all of you guys know by now what the o1 model is; this is the new paradigm of models that think before they talk. Someone asked: is the full o1 really a noticeable improvement over o1-preview? And OpenAI's VP of Engineering said yes. And crazily enough, guys, a little side story for you all: o1 was released accidentally today, and I actually got to use it. It's pretty cool; someone tweeted a link and you were actually able to use o1 with images, which was pretty crazy. But apparently the full o1 is an insanely noticeable improvement over o1-preview, and if you have looked at the benchmarks, that is a true statement, because o1 is, I think, about 10 to 15% better across a range of different benchmarks. So in those areas where you think, ah, this model isn't that good: just trust me, it is good. And for those of you who are struggling with o1-preview or o1-mini, I would say: try your hardest to give the model as much relevant context as possible. Not too much context, but if you have a problem you're reasoning through, give it the relevant context that's going to help it solve the problem. For example, if you've got a health issue, it's probably good to include your age, your lifestyle habits, your ethnicity, anything that may contribute to or influence it, and you'll see really nice reasoning jumps.

Now, coming in at number three, they also talk about the future of scaling. Someone asks: how will o1 influence the scaling of LLMs? Will you continue scaling LLMs as per scaling laws, or will inference-time compute scaling, meaning smaller models with faster and longer inference, be the main focus? And this is where they say we're not just moving to that paradigm; it's both paradigms. Kevin Weil, the OpenAI CPO, says: "it's not either, it's both: better base models plus more strawberry scaling / inference-time compute." And when you think about it, guys: if we say that humans are AGI in a sense, because human-level reasoning is the base standard, we need to think about that for a second. One thing I keep telling you guys is that humans don't just have one chain of reasoning. We have system 1 thinking, the quick, immediate kind: I say, what's your favorite food, and you say pizza. And then there's system 2 thinking, the slow, deliberate kind: I say, okay, what's the fastest way to get to the healthiest food spot in town, and you have to plan a journey and figure it out step by step. Humans inherently have both, and we use different ones for different scenarios. So I think that if we ever do get to AGI, there's going to be a miniature system that just decides: does this question need quick recall or long deliberation? Just like humans: we wouldn't route a hard planning question to our quick intuition, we would route it to our slow, deliberate thinking, and that's exactly what these AI systems are going to do in the future. That actually makes sense, because we can't expect everything to be inference-time reasoning, and we can't expect everything to be instant zero-shot responses.
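The "miniature system that decides" idea above can be sketched as a toy dispatcher. To be clear, everything below is my own illustration: the heuristics, the word-count threshold, and the keyword list are assumptions for demonstration, not anything OpenAI has actually described.

```python
# Toy sketch of a system-1 / system-2 query router (illustrative only).
# A cheap check decides whether a query gets a fast zero-shot answer
# or is sent to a slower, deliberate reasoning path.

# Hypothetical keywords that suggest multi-step reasoning is needed.
REASONING_HINTS = ("plan", "prove", "step by step", "optimize", "debug")

def route(query: str) -> str:
    """Return 'fast' for simple lookups, 'slow' for reasoning-heavy queries."""
    q = query.lower()
    # Long queries, or queries containing a reasoning keyword, take the slow path.
    if len(q.split()) > 20 or any(hint in q for hint in REASONING_HINTS):
        return "slow"
    return "fast"

print(route("What's your favorite food?"))                                    # fast
print(route("Plan the fastest route to the healthiest food spot in town."))  # slow
```

In a real system the router would itself be a small learned classifier rather than keyword matching, but the shape of the decision is the same: spend compute only where the question warrants it.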

Hallucinations

Coming in at number four, we have the fact that hallucinations do persist. Someone said: thanks for the great work, love you, and so on; are hallucinations going to be a permanent feature? Why is it that even o1-preview, when approaching the end of a thought, hallucinates more? And how will you handle old data, even two years old, that is now no longer true: continuously train models, or some sort of garbage collection? It's a big issue for truthfulness. And I'm not going to lie, this was one thing I genuinely didn't think about. One person said, and this was a really interesting observation, that these models are essentially time capsules: they're trained on data from this date to this date, with a knowledge cutoff at, say, April, May, or June of a given year, and after that they just don't have any more information. Of course you've got search and things like that, but it is a real issue to have old information that is no longer true, where you're trying to do reasoning based on a paradigm that no longer exists. And remember, we're in the AI era now, which means a lot of things are changing quickly.

OpenAI's SVP of Research responds: "we're putting a lot of focus on decreasing hallucinations, but it's fundamentally a hard problem. Our models learn from human-written text, and humans sometimes confidently declare things they aren't sure about. Our models are improving at citing, which grounds their answers in trusted sources, and we believe that reinforcement learning will help with hallucinations as well: when we can programmatically check whether models hallucinate, we can reward them for not doing so." Now, I think this is good news for those of you who don't want AI to take your jobs, but bad news for those of you who want ChatGPT for certain applications. One of the reasons that ChatGPT, GPT-4, generative AI, whatever you want to call it, isn't suitable in some applications is that if you make a mistake in certain fields, the fallout is ridiculous; what you are placing on the line is too much. If a generative AI model makes a mistake in healthcare, someone could die, and we're not going to put lives at stake. And I know, of course, you can look at the research and say these models are better than certain doctors at summarizing notes and diagnosing this and that, because they can look at all the data. But because of how certain industries work, and because of the rules and regulations we have, it's quite unlikely that these models will see large-scale commercial use until they reach a ridiculous level of reliability. Now, it could just be that someone develops a piece of software and then, boom, we're off to the races; that's going to be a huge moment, because then there's going to be wide-scale adoption. But with a hallucination rate of 3 to 5%? Imagine if one in every twenty or thirty planes crashed; that just wouldn't be something we'd travel in. Or imagine one in every twenty engines exploded; that's just not something we're going to put in our cars. So I think this is probably one of the biggest problems, and if OpenAI are saying that this is fundamentally a hard problem, it's probably a hard problem.
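To make the reliability point concrete, here is a quick back-of-the-envelope calculation. The numbers are illustrative, not measured hallucination rates, and it assumes errors are independent across calls, which real workloads may not satisfy:

```python
# How a "small" per-answer error rate compounds across many calls.
# Assumes each call fails independently with the same probability.

def p_at_least_one_failure(error_rate: float, n_calls: int) -> float:
    """Probability that at least one of n independent calls fails."""
    return 1 - (1 - error_rate) ** n_calls

# A 4% per-answer error rate, queried 50 times:
print(round(p_at_least_one_failure(0.04, 50), 2))  # ≈ 0.87
```

So even a 4% hallucination rate means a workflow that asks fifty questions almost certainly contains at least one wrong answer, which is why safety-critical industries demand far lower failure rates before adoption.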

Agents

Of course, we also have the next breakthrough. You can see right here that someone asks: what's the next breakthrough in the GPT line of products, and what's the expected timeline? And Sam Altman says: "we will have better and better models, but I think the thing that will feel like the next giant breakthrough will be agents." So if you want to figure out where to go next and where to focus your attention, it seems like OpenAI are working on some incredible agents. We've already seen a bunch of different agents from Microsoft, and Google's Project Astra, but I think the next breakthrough being agents is going to be really interesting, because that is something that requires reliability over a longer period of time. It's going to be really interesting to see what OpenAI manages to do; usually they are at the frontier of this, and when they are at the frontier, they usually come out with something pretty crazy, with other companies spending two years or so trying to catch up. So that is going to be really interesting, because they said agents are the next giant breakthrough.

Surviving

Coming in at number six, we have surviving in the age of AI; this is good for those of you who are worried about post-AGI economics. Someone asked: regarding the future, if you were 15 today, what skills or paths would you focus on to succeed? And the co-host says: being adaptable and learning to learn is probably the most important thing. I would agree, because in an ever-changing world you have to be adaptable; you can't be rooted in one belief or another, you have to adapt to the environment. As they say, adapt or die, and that is something I truly believe. Yes, you may have existing skills, but learning how to learn quickly and efficiently, without wasting time, and translating knowledge into a skill, is going to be really important as many new industries spring up and many older ones disappear.

What Did Ilya See

Now, one of the biggest things most people were wondering about was, of course, what Ilya saw, because this was a question trending on Twitter for quite some time. "What did Ilya see?" was literally all I saw in my timeline when I was scrolling on Twitter for, I think, the first three weeks after Reuters broke the news that there was some advanced AI that could end the world. Sam Altman responded, saying: "he saw the transcendent future. Ilya is an incredible visionary and sees the future more clearly than almost anyone else. His early ideas, excitement, and vision were critical to so much of what we have done; for example, he was one of the key initial explorers and champions for some of the early ideas that eventually became o1." So it's clear that Ilya Sutskever saw something that was far ahead, and of course he has now started his own company, and I do wonder if he can get to superintelligence before OpenAI. It will be interesting to see if they can do that. I think running your own company is very hard, but considering that they now have complete focus on just that, I think they definitely have a shot: they don't have to wait on any products or abide by certain deadlines, and all of their compute is going toward superintelligence, which, surprisingly, apparently skips past AGI. But yeah, it's going to be a really interesting time, because if they come out and say, hey, we did the superintelligence thing, I think the world is going to change on that day. So that's what everyone's racing towards, which is pretty crazy.

Advanced Voice Mode Vision

Now, coming in at number eight, one of the things most people forgot about, and I still haven't forgotten about because I think it's really cool, is advanced voice mode with vision. Someone asked: any timeline on when we'll get advanced voice mode with vision? Why is GPT-5 taking so long? What about the full o1? And Sam Altman says: "we're prioritizing shipping o1 and its successors. All of these models have gotten quite complex, and we can't ship as many things in parallel as we'd like to. We face a lot of limitations and hard decisions about how we allocate compute towards many great ideas, and we don't have a date for advanced voice mode with vision yet." Basically, what they're saying is: look, advanced voice mode is a cool, really amazing feature, but there isn't really a return on investment, considering that most people might not use it as much as initially thought, so they're going to focus on shipping o1, because o1 is their frontier model; it's much smarter and can do a lot more. I'm guessing some of their big enterprise clients are going to be using those a lot more, considering that o2, o3, o4, o5 are probably going to be bordering on AGI, maybe edging on ASI. And I know that sounds crazy, but if we look at how far things have come from GPT-1 to where GPT is now, it's definitely pretty crazy, and it sounds like inference scaling has a lot of room left, so I will be intrigued to see what happens there with advanced voice mode with vision.

Be My Eyes

If you don't remember what this is: there was a short demo with Be My Eyes, an application where, if you are visually impaired, you can take a picture of something and other people are basically your eyes, telling you exactly what you're looking at. Now, with advanced voice mode, instead of relying on random sighted people around the internet to see for you, you get ChatGPT's advanced voice mode combined with vision: it's like a live stream, you're basically on FaceTime with an AI, and the AI is watching what you're watching. It's pretty cool and actually really innovative, and I'm glad these are the kinds of applications a lot of people are going to get out of AI. A lot of people say AI is boring, it's this, it's that, but in the demo this person was able to book a taxi: they just held the phone up, and advanced voice mode said, hey, hold your hand out, your taxi is coming now, and the person got the taxi. So a lot of people with disabilities are going to have a much easier life, all things considered, once this AI technology gets completely rolled out.

Next Update

Now, for the next update, someone asked: when will you give us an update on a new text-to-image model? DALL-E 3 is kind of outdated. Sam Altman said that the next update is going to be worth the wait, but they don't have a release plan yet, which means it's not at the top of the agenda; as I said before, that's o1. So I'm guessing one of the things we can expect is, of course, o1, and remember, they are focusing on agents right now, which is probably why we haven't seen much there yet. They also stated that they're going to be working on a longer context window.

Longer Context Window

Someone said: hello, I would like to ask when the token context window for GPT-4 gets increased; in my opinion, 32k, especially for longer coding or writing tasks, is way too small compared to other AI models out there. The reply: "agree, we're working on it." And I agree: a 32,000-token context window just isn't enough. A lot of times I'm trying to write something long and it really doesn't help, so a longer window is going to be something really cool. Now, this last one, coming in at number 11, is insane, because I forget about this from time to time and then I get reminded and think, ah, I can't believe it's still not out yet. Someone asked: where will we get information about GPT-4o image and 3D model generation? They said "soon," and he actually showed a screenshot, so we basically get a look at this HTML real-time editor.

HTML RealTime Editor

So it seems like this is going to be one of the features shipping from GPT-4o first. If you're not familiar with what I'm talking about: GPT-4o has a bunch of advanced features that just weren't included with the release. As you may or may not know, GPT-4o is actually an omni model, basically meaning anything in, anything out: audio, image, video, 3D models, absolutely anything in and anything out. And with that, of course, some people are wondering: when are the 3D models coming, when are the image manipulations coming? But they actually showed us this, so it seems we're going to get a real-time HTML renderer where you can simply enter anything, see it rendered in real time, and manipulate it. That would be cool, but I'm not sure when it's going to be released. Now, if there's anything you guys want to discuss, do leave a comment down below, and I'll see you guys.
