OpenAI's Finally Give GPT-5 A Body (Figure 02 Breakthrough)
12:52

TheAIGRID · 07.08.2024 · 50,000 views · 965 likes


Video description
Prepare for AGI with me - https://www.skool.com/postagiprepardness 🐤 Follow me on Twitter https://twitter.com/TheAiGrid 🌐 Check out my website - https://theaigrid.com/ Links from today's video: Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For business enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (3 segments)

Segment 1 (00:00 - 05:00)

So OpenAI's AGI-embodied humanoid robot is finally here: the platform that will enable GPT-5 to speak, walk, and feel like a human. This incredible humanoid robot demonstration has finally been revealed, the second iteration of Figure's humanoid robot series, and after you watch the short demo I'll walk through the key details you need to know, because the next era of robotics and AI is going to be absolutely stunning.

That was the demo, and first of all, hats off to the Figure team for such a remarkable feat of engineering. This is currently the world's most advanced humanoid robot; Figure has partnered with a variety of providers and uses cutting-edge technology to keep it state of the art.

One of the things they highlighted is speech-to-speech reasoning. Other robots use different approaches, but Figure 02 uses speech-to-text to reason, and what you see here is the onboard pipeline that lets the model make decisions. When someone says "Can I have something to eat?", the request goes to OpenAI's model. We don't actually know which model Figure 02 uses (the CEO hasn't said), but I believe it's either a small language model specifically tuned for robotics or simply GPT-4o, since that would give very low latency. Whichever it is, I'd guess it's one they can swap out.

From that speech-to-text step, the model selects a behavior from the available neural-network policies. The chosen policy produces actions at 200 Hz, which feed the whole-body controller and the 1 kHz joint torques that, for example, pick up the apple. The loop then comes all the way back and the robot says, "Sure thing, here's an apple," which is something we also saw in the previous demo. All in all, this is a rapid and rather effective system that lets the Figure robot embody AI, and when AGI does arrive, I'm sure it will embody AGI.

Now, one reason I believe this announcement is a lot bigger than people realize: this robot was designed in less than 18 months, which is a remarkable
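The layered stack described in this segment (speech in, a language model selecting a behavior, a neural-network policy emitting actions at 200 Hz, a whole-body controller closing the loop with 1 kHz joint torques) can be sketched as a toy loop. Figure has not published code, so every name and number below is an invented illustration of the architecture, not their implementation:

```python
def llm_select_behavior(utterance: str) -> str:
    """Stand-in for the (unknown) language model choosing a behavior."""
    if "eat" in utterance.lower():
        return "hand_over_apple"
    return "idle"

def policy_action(behavior: str, step: int) -> dict:
    """Stand-in for a learned policy emitting one 200 Hz action (5 ms)."""
    return {"behavior": behavior, "t": step / 200.0}

def whole_body_torques(action: dict, n_joints: int = 16) -> list:
    """Stand-in for the 1 kHz whole-body controller's joint torques."""
    return [0.0] * n_joints

behavior = llm_select_behavior("Can I have something to eat?")
torque_updates = 0
for step in range(200):          # one second of 200 Hz policy actions
    action = policy_action(behavior, step)
    for _ in range(5):           # 1000 Hz / 200 Hz = 5 torque ticks per action
        whole_body_torques(action)
        torque_updates += 1

print(behavior, torque_updates)  # → hand_over_apple 1000
```

The point of the sketch is the rate hierarchy: the slow language model picks a behavior once, the mid-rate policy runs at 200 Hz, and the torque loop runs five times per policy step to reach 1 kHz.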

Segment 2 (05:00 - 10:00)

feat of engineering, considering how much time, research, and development go into products like these. Robotics is so hard that OpenAI initially shut down its own robotics division, not because OpenAI isn't smart enough, but because it takes a huge amount of time to turn a robot that looks this good and performs this well into a working product. So it's important to understand that this wasn't the result of ten years of research and design: the company was founded in late 2022, and it's already on its second iteration, which is already the world's most advanced humanoid robot. Looking at that trajectory and the amount of funding going into the product, you can see how much it's going to evolve, with breakthrough after breakthrough feeding the space like a flywheel.

That brings me to something I think is really important here, and it's why I believe things are about to get crazy: the data flywheel. The robot fleet, meaning all the Figure robots you saw at the end of the video walking around the factory, doing repeatable autonomous tasks for hours on end and self-correcting as they go, is going to generate terabytes of data per day, and they highlight exactly that figure.

Why does that matter? Robots like Figure 02 need a lot of data to be effective, and the two biggest problems in humanoid robotics are, first, the cost of the robots (they're not cheap; they cost about the price of a supercar) and, second, that there simply isn't enough data to train them effectively. If we can get terabytes of data per day, collected on board via the vision cameras (recording actions, motion, successful attempts; there are a million ways to collect it, and I'm not sure exactly how Figure does it), that data goes into the training set, the neural network is updated, and the update is pushed back out to the robot fleet. Every iteration gets better, with a progressively increasing level of sophistication. That's why I say things are about to get a little intense: these robots are already effective at what they do, and once they're scaled up, think about how much data they'll collect and how capable they'll become over the long term.

Some key details were left out of the video, but the founder did talk about them on Twitter. One is that Figure 02 has an onboard vision-language model, which enables semantic grounding and fast common-sense visual reasoning from the robot's cameras. To be fair, even the current vision-language models that enable speech-to-speech reasoning aren't that good; they still mess up quite a bit. I believe it was Andrej Karpathy who made a point on Twitter (I couldn't find the tweet again) that these robots excel in many cases but fail in certain niche ones. Vision is an area where robots are only okay: they still need to get much better at working out what's in an image and grounding it, because while humans might think their vision isn't special, it's remarkably more effective than a robot's. That's going to need continuous development.

What's also interesting is the battery: a 2.2 kWh battery pack in the robot's torso that delivers 50% more energy than Figure 01 to maximize runtime, and they're hoping it will enable 20 hours of useful work per day. That's incredible when you think about it; humans can't work that long, and I'm not sure how long it's going to
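The data flywheel from this segment (fleet collects data, the data retrains the network, the updated network is deployed back to the fleet) can be sketched as a toy loop. The fleet size, per-robot data rate, and "skill" formula below are invented numbers purely for illustration; the video only claims terabytes per day overall:

```python
def fleet_data_tb(fleet_size: int, tb_per_robot: float) -> float:
    """Terabytes collected by the whole fleet in one day (hypothetical)."""
    return fleet_size * tb_per_robot

def retrain(skill: float, dataset_tb: float) -> float:
    """Stand-in for training: more data gives diminishing but real gains."""
    return skill + 0.1 * dataset_tb ** 0.5

skill, dataset_tb = 1.0, 0.0
for day in range(3):                          # three turns of the flywheel
    dataset_tb += fleet_data_tb(fleet_size=100, tb_per_robot=0.05)
    skill = retrain(skill, dataset_tb)        # update the policy on all data
    # ...the updated network is then "deployed" back to the fleet

print(round(dataset_tb, 2))  # → 15.0
```

Each turn of the loop grows the dataset, which improves the policy, which makes the fleet more useful, which justifies deploying more robots: that compounding is the whole argument for the flywheel.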

Segment 3 (10:00 - 12:00)

take to charge, but considering that on some of these platforms you can simply swap out the batteries, it's going to be remarkable to see these robots working 20 hours per day.

One of the coolest things they mentioned was perception: the robot understands the world through an AI-driven vision system with six onboard RGB cameras located in the head, torso, and back. Looking ahead, these robots are going to have incredible vision, not just eyes where humans have them but cameras in the torso and back as well, giving them a kind of all-around spatial awareness humans don't have.

A surprising achievement is the hands, which compare favorably with the Tesla bot's. These are fourth-generation hands with 16 degrees of freedom, which is absolutely incredible. Someone on Twitter compared them to the Tesla bot, and I have to admit the post wasn't made in good taste: it was a dig at Figure 02, lining up the dates to say that Tesla managed the same feat six to seven months earlier. Both sets of hands look very effective, and the real point is that Tesla finally has competition. We as consumers are going to benefit, because both companies are now incentivized to make their robots better and cheaper. Figure 02 is clearly putting pressure on Tesla and Elon Musk to release something genuinely competitive; Musk even tweeted "bring it on" at Figure's CEO and founder, so there's a real rivalry there, and Musk is not someone to underestimate.

Overall, I think this robot is going to be remarkable. Frontier models with advanced reasoning are coming, breakthroughs in vision are coming, and hopefully the data flywheel gets going. There are still issues with simulation (not that NVIDIA's tools are bad), but if the data problem gets solved, these robots will become extremely effective very quickly. This is an early look at the futuristic world we're heading toward: robots walking around doing tasks in factories and genuinely expanding the economy. Let me know your thoughts about this robot, and I'll see you in the next video.
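A quick back-of-the-envelope check on the battery figures quoted earlier (a 2.2 kWh pack and a target of 20 hours of useful work per day) shows why battery swapping matters; this is just arithmetic on the video's numbers, not a published Figure spec:

```python
# Quoted figures: 2.2 kWh torso pack, target of 20 h of useful work per day.
pack_wh = 2.2 * 1000        # pack capacity in watt-hours
shift_hours = 20

# If the robot had to run a full 20 h shift on a single charge, its
# average power draw could be at most:
max_avg_draw_w = pack_wh / shift_hours
print(max_avg_draw_w)  # → 110.0
```

An average of 110 W is very low for a walking, manipulating humanoid, which suggests the 20 hours per day target is about fleet uptime across charges or swaps rather than single-charge endurance, exactly the swap-the-batteries point made above.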

