# Nvidia Just Changed Robotics Forever.... ( Nvidia GR00T N1,Nvidia Newton)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=t5pwwCOyim8
- **Date:** 21.03.2025
- **Duration:** 30:32
- **Views:** 14,258

## Description

Nvidia Just Changed Robotics Forever.... ( Nvidia GR00T N1,Nvidia Newton)

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

00:00 Robotics Future Explained
01:06 Massive Robot Growth
01:56 Technology Driving Robotics
02:52 Robots Replace Labor
04:09 Robots Continuous Learning
05:28 Omniverse Virtual Training
08:50 Cosmos Infinite Environments
11:15 Real-world Robot Testing
12:35 Disney Expressive Robot
13:51 Newton Physics Engine
14:46 Newton Speeds Learning
17:20 Newton Demo Shown
18:21 Nvidia Groot N1
24:05 Neo Realistic Robot
28:40 Boston Dynamics Advances

Links From Today's Video:


Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=t5pwwCOyim8) Robotics Future Explained

So, Jensen Huang just revealed why the future of robotics is going to be absolutely incredible. At the recent keynote at Nvidia's GTC event, he discussed how Nvidia are changing the future of robotics and why it's going to be more impactful than you think. In this video, I'm going to do a deep dive on why robotics is going to absolutely change everything. And with that being said, make sure you guys stick around so you understand every future development that you might otherwise miss in terms of robotics. So with that being said, let's do a deep dive on exactly what Nvidia have said. The time has come for robots. Robots have the benefit of being able to interact with the physical world and do things that otherwise digital information cannot. We know very clearly that the world has a severe shortage of human labor, human workers. By the end of this decade, the world is going to be at least 50 million workers short. We'd be more than delighted to pay them each $50,000 to come to work. We're probably going to have to pay robots $50,000 a year to come to work. And so this is

### [1:06](https://www.youtube.com/watch?v=t5pwwCOyim8&t=66s) Massive Robot Growth

going to be a very, very large industry. There are all kinds of robotic systems. Your infrastructure will be robotic: billions of cameras in warehouses and factories, 10 to 20 million factories around the world. Every car is already a robot, as I mentioned earlier. And now we're building general robots. And so right there you can see that Jensen Huang said that physical AI and robotics is poised to become the largest sector for several compelling reasons. If you haven't been paying attention to this data, then you're probably going to be quite surprised. When we actually look at the market projections, the numbers are staggering. The global robotics market is expected to grow from about $25 billion today to between $160 billion and $260 billion by 2030. Citigroup analysts suggest that there will likely be 1.3 billion AI robots by 2035 and a massive 4 billion by 2050. And even Elon

### [1:56](https://www.youtube.com/watch?v=t5pwwCOyim8&t=116s) Technology Driving Robotics

Musk has predicted that the market for humanoid robots alone could exceed 1 billion units per year, eventually leading to more robots than humans. Now, the growth is being driven by three primary factors. First, technological advancements, particularly in AI, have dramatically changed what robots can do. We're seeing breakthroughs in machine intelligence, connectivity, and control systems that enable robots to handle unpredictable situations autonomously. Now, these robots aren't just digital entities. They're physical systems that can truly interact with and manipulate the real world. Secondly, there is the compelling economic case. When we look at Western countries with aging demographics and more restrictive immigration policies, robots can provide solutions to global labor shortages. And as wages rise even in traditionally low-wage countries (factory workers' wages in China, for example, have doubled since 2007), the

### [2:52](https://www.youtube.com/watch?v=t5pwwCOyim8&t=172s) Robots Replace Labor

financial case for replacing humans with robots becomes stronger. Labor actually accounts for over 50% of global GDP, so the market there is bigger than you probably thought. Third, when we look at the applications, they're actually quite diverse. In manufacturing, intelligent robots with advanced vision systems can streamline production and improve quality control. In healthcare, robotic systems can perform complex surgeries with unprecedented precision. In transportation, autonomous vehicles are transforming how people move goods. Even in exploration and search and rescue, robots can operate in hazardous environments where humans cannot. And what's particularly interesting, as I've done some research, is how professional service robots are expected to dominate the sector. Their sales may more than double those of conventional and logistical robots, potentially hitting up to $170 billion by 2030. And this shift reflects how robots are moving beyond traditional applications into far more diverse roles. And of course, the integration of AI into physical systems, which is what Nvidia have been calling physical AI, is pretty much the next frontier: the leap from the digital realm into the physical world. It's not really about robots performing the repetitive tasks like they used to. It's

### [4:09](https://www.youtube.com/watch?v=t5pwwCOyim8&t=249s) Robots Continuous Learning

actually about them becoming something else entirely. Some might call it them becoming human, but it's about robots that can truly perceive, learn, and react to their environments. And as these technologies mature, they promise to reshape industries, create new markets, and drive significant economic growth. So when we think about all of those things, when Jensen Huang says that this may be the largest sector, it isn't just hyperbole. So let's actually take a look at how they're advancing these robots with Cosmos. Physical AI and robotics are moving so fast. Everybody pay attention to this space. This could very well likely be the largest industry of all. At its core, we have the same challenges. As I mentioned before, there are three that we focus on. They are rather systematic. One, how do you solve the data problem? Where do you create the data necessary to train the AI? Two, what's the model architecture? And three, what are the scaling laws? How can we scale either the data, the compute, or both so that we can make AI smarter and smarter? How do we scale? Those fundamental problems exist in robotics as well. In

### [5:28](https://www.youtube.com/watch?v=t5pwwCOyim8&t=328s) Omniverse Virtual Training

robotics, we created a system called Omniverse. It's our operating system for physical AIs. You've heard me talk about Omniverse for a long time. We added two technologies to it. Today, I'm going to show you two things. One of them is so that we could scale AI with generative capabilities: a generative model that understands the physical world. We call it Cosmos. Using Omniverse to condition Cosmos, and using Cosmos to generate an infinite number of environments, allows us to create data that is grounded, controlled by us, and yet systematically infinite at the same time. Okay, so you see in Omniverse we use candy colors to give you an example of us controlling the robot in the scenario perfectly, and yet Cosmos can create all these virtual environments. So this is where Jensen Huang was talking about how robotics and physical AI are advancing quickly. Now, I've already spoken about the labor market and how that is completely going to change the world. But we actually need to understand how these robots work behind the scenes. So one of the biggest problems that you have with robotics is of course the data problem. The problem is that robots need to see many examples to learn how to do something like cleaning a room or driving a car. This is essentially what you'd call data. But gathering this data in the real world is quite slow, hard, and expensive. Imagine trying to teach a robot to recognize every object on Earth by showing it every object one at a time. That would just take forever. And so the question that scientists and engineers are facing now is: how do we quickly create all of the data that the robots need to learn? Then there's another problem as well, and this is the model architecture. So once, let's say, you've got a decent amount of data, you need a good brain for the robot, which is what you'd call the model.
So the model is basically the robot's brain structure, and it decides how well the robot is going to learn. And, you know, Jensen Huang pointed out that scientists have to find the best way to build these robot brains, and this involves ensuring that you build the right software, the right computer program, that can learn from all the data that we gather. Then of course he talks about these scaling laws, and this essentially means making the robot's brain smarter by using either more data, more powerful computers, or a more efficient way to process that data. So scientists want to know exactly how much data or computing power is needed to make a robot smarter. You're basically asking this kind of thing: do I need to read 10 books or 100 books to become an expert on dinosaurs? And Jensen Huang says we must understand how robots can become smarter by scaling up their learning. So we have all of these challenges in robotics. And to solve these challenges, Nvidia built something called the Omniverse. Now, the Omniverse is basically like a giant video game, or you guys could call it a digital playground, which is where robots are going to be learning new things. Now, inside Omniverse, robots practice skills inside a virtual environment instead of the real world. Of course, it's going to be faster, cheaper, and safer for robots to learn here. Now, they've added a new tool called Cosmos. And Cosmos is

### [8:50](https://www.youtube.com/watch?v=t5pwwCOyim8&t=530s) Cosmos Infinite Environments

basically like a powerful imagination engine inside of Omniverse. This is the thing that is going to really help solve a lot of the issues that we're facing. It creates infinite numbers of virtual worlds automatically. And these worlds are realistic enough that when robots learn inside of them, they can easily apply what they learn to the real world. And because Cosmos can generate endless practice situations, the robots can get much smarter, much faster, because they basically have limitless examples to learn from. So this is of course why I said that the real world is going to change very rapidly, and physical AI/robotics is at the center of that, and I think Nvidia are going to be playing a key role. So with that being said, let's actually take a look at how Cosmos works and the key details of what Nvidia are doing behind the scenes. Everything that moves will be autonomous. Physical AI will embody robots of every kind in every industry. Three computers built by NVIDIA enable a continuous loop of robot AI simulation, training, testing, and real-world experience. Training robots requires huge volumes of data. Internet-scale data provides common sense and reasoning, but robots need action and control data, which is expensive to capture. With blueprints built on NVIDIA Omniverse and Cosmos, developers can generate massive amounts of diverse synthetic data for training robot policies. First, in Omniverse, developers aggregate real-world sensor or demonstration data according to their different domains, robots, and tasks, then use Omniverse to condition Cosmos, multiplying the original captures into large volumes of photoreal, diverse data. Developers use Isaac Lab to post-train the robot policies with the augmented data set and let the robots learn new skills by cloning behaviors through imitation learning or through trial and error with reinforcement learning from AI feedback. Practicing in a lab is different from the real world. New policies need to be field tested.
Developers use Omniverse for software- and hardware-in-the-loop

### [11:15](https://www.youtube.com/watch?v=t5pwwCOyim8&t=675s) Real-world Robot Testing

testing, simulating the policies in a digital twin with real-world environmental dynamics, with domain randomization, physics feedback, and high-fidelity sensor simulation. Real-world operations require multiple robots to work together. Mega, an Omniverse blueprint, lets developers test fleets of post-trained policies at scale. Here, Foxconn tests heterogeneous robots in a virtual NVIDIA Blackwell production facility. As the robot brains execute their missions, they perceive the results of their actions through sensor simulation, then plan their next action. Mega lets developers test many robot policies, enabling the robots to work as a system, whether for spatial reasoning, navigation, mobility, or dexterity. Now, I want to actually talk to you guys about something that stole the show, and this was, of course, the robot that you've probably seen that everyone has been posting on social media. So, this is a robot from Disney. Disney essentially developed a small bipedal robot that resembles characters like WALL-E and BD-1 from Star Wars, and it has been praised for its lifelike and expressive movements. Now, this adorable

### [12:35](https://www.youtube.com/watch?v=t5pwwCOyim8&t=755s) Disney Expressive Robot

robot, which was actually unveiled in 2023 at the International Conference on Intelligent Robots and Systems, features a toddler-sized body with expressive motion control and can perform a variety of different movements with personality, including strutting, prancing, sneaking, trotting, and meandering. And, interestingly enough, this robot was actually constructed using 3D-printed parts and off-the-shelf actuators, making it modular and easy to modify. Now, what makes this robot particularly impressive is its ability to maintain its balance on uneven terrain while preserving its character and personality in movement. Now, Disney's robotics team, led by Moritz Bächer from Disney Research in Zurich, used a bunch of innovative methods to actually get this done. They used procedural animation, modular hardware, and reinforcement learning. And they did this in a way that allowed them to go from years to just months. Now, what's crazy about all of this is that they decided to take this robot up a notch and collaborate with Nvidia. And what they've decided to do, so it's Nvidia, it's Google DeepMind, and it's Disney Research, is that these guys are all joining forces on something called Newton. And this is going to be a groundbreaking open-source physics engine that is set to revolutionize robotics. Now, it's not

### [13:51](https://www.youtube.com/watch?v=t5pwwCOyim8&t=831s) Newton Physics Engine

just any standard simulation tool. This one is actually built on Nvidia's Warp acceleration framework, designed to bridge that frustrating simulation-to-real gap that's been holding robotics back for years. And at its core, Newton leverages the fundamental laws of physics (conservation of mass and momentum, rigid- and soft-body dynamics, contact and friction models, and realistic actuator simulation) to create virtual environments where robots can learn and develop without the cost and risk of physical testing. Now, what makes Newton special is that it combines accurate physics modeling with GPU acceleration, allowing robots to learn complex tasks much faster than before. It's actually compatible with existing frameworks like MuJoCo, which just got a massive speed boost through MuJoCo Warp, achieving 70 to 100 times acceleration for certain tasks. And Nvidia's Isaac Lab actually allows developers flexibility in how they implement it.
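To make the idea of a physics engine a bit more concrete, here is a deliberately tiny sketch of the kind of stepping loop such engines run: a single point mass under gravity with a crude ground-contact model, in plain Python. This is purely illustrative and is not Newton's API; a real engine resolves contact, friction, and rigid/soft-body dynamics for thousands of bodies in parallel on the GPU.

```python
# Illustrative only: a point mass dropped onto a ground plane.
# Real engines like Newton solve far richer dynamics, in parallel, on GPU.

def step(pos, vel, dt=0.01, g=-9.81, restitution=0.5):
    """One semi-implicit Euler step with a ground plane at y = 0."""
    vel = vel + g * dt            # gravity changes momentum
    pos = pos + vel * dt          # integrate position with the new velocity
    if pos < 0.0:                 # contact: clamp to the ground...
        pos = 0.0
        vel = -vel * restitution  # ...and bounce, losing energy
    return pos, vel

def simulate(height, steps=1000):
    """Drop the mass from `height` and step the world forward."""
    pos, vel = height, 0.0
    peak = pos
    for _ in range(steps):
        pos, vel = step(pos, vel)
        peak = max(peak, pos)
    return pos, peak

final_pos, peak = simulate(1.0)   # drop from 1 m, simulate 10 s
```

The point of GPU acceleration is that this loop, trivially cheap for one body, has to run for whole scenes at many times real time so that policies can collect years of "experience" in hours.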

### [14:46](https://www.youtube.com/watch?v=t5pwwCOyim8&t=886s) Newton Speeds Learning

Now, the really exciting part here is that Newton actually supports differentiable physics, meaning it can propagate gradients through simulations, which allows for optimization of system parameters and opens up entirely new methods for robot learning. Plus, it's built on OpenUSD, a common language for robotics and data that helps unify workflows across different platforms and tools. Now, Newton is also incredibly extensible, capable of handling rich multiphysics simulations where robots interact with everything from food items to cloth to sand, thanks to custom solvers that can be coupled together. Disney is already putting this technology to practical use for their expressive robotic characters. Now, what I love about this most is that the technology is open source, which means they're inviting the entire robotics community to build upon it, which could potentially accelerate innovation across the field and democratize access to advanced physics simulations that were previously only available to well-resourced labs and companies. The second thing, just as we were talking about earlier: one of the incredible scaling capabilities of language models today is reinforcement learning with verifiable rewards. The question is, what's the verifiable reward in robotics? And as we know very well, it's the laws of physics. Verifiable physics rewards. And so we need an incredible physics engine. Well, most physics engines have been designed for a variety of reasons. It could be designed because we want to use it for large machinery, or maybe we design it for virtual worlds, video games and such. But we need a physics engine that is designed for very fine-grained rigid and soft bodies, designed for being able to train tactile feedback, fine motor skills, and actuator controls. We needed it to be GPU accelerated so that these virtual worlds could live in super-linear time, super real time, and train these AI models incredibly fast.
And we needed it to be integrated harmoniously into a framework that is used by roboticists all over the world: MuJoCo. And so today we're announcing something really special. It is a partnership of three companies, DeepMind, Disney Research, and Nvidia, and we call it Newton.
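The "differentiable physics" idea mentioned earlier, propagating gradients through a simulation to optimize parameters, can be illustrated with a toy example. Here the "simulation" is just the closed-form range of a projectile, and the gradient is approximated by finite differences; a differentiable engine like Newton computes exact gradients through full simulations, so every name and number below is illustrative only.

```python
import math

def landing_distance(v0, angle=math.pi / 4, g=9.81):
    """Toy 'simulation': closed-form range of a projectile launched at 45 degrees."""
    return (v0 ** 2) * math.sin(2 * angle) / g

def optimize_v0(target, v0=1.0, lr=0.05, eps=1e-4, iters=300):
    """Gradient descent on squared landing error; gradient via finite differences."""
    for _ in range(iters):
        loss = (landing_distance(v0) - target) ** 2
        loss_shifted = (landing_distance(v0 + eps) - target) ** 2
        grad = (loss_shifted - loss) / eps   # stand-in for an exact gradient
        v0 -= lr * grad                      # descend toward the target range
    return v0

v0_star = optimize_v0(target=10.0)           # find launch speed for a 10 m throw
```

With exact gradients flowing through a full contact-rich simulation instead of a one-line formula, the same descent loop can tune controller gains, actuator parameters, or even policies directly.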

### [17:20](https://www.youtube.com/watch?v=t5pwwCOyim8&t=1040s) Newton Demo Shown

Let's take a look at Newton. How do you like your new physics engine? You like it, huh? Yeah, I bet. I know. Tactile feedback, rigid-body, soft-body simulation, super real time. Can you imagine, what you were just looking at is complete real-time simulation? This is how we're going to train robots in the future. Just so you know, Blue has two Nvidia computers inside. Look how smart you are. Yes, you're smart. Okay. All right. Hey, Blue, listen. How about we take them home. Let's finish this keynote. It's lunchtime. Are you ready? Let's finish it up. We have another announcement. You're good. Just stand right here. Stand

### [18:21](https://www.youtube.com/watch?v=t5pwwCOyim8&t=1101s) Nvidia Groot N1

right here. All right. Good. Right there. That's good. All right. Stand. Okay, we have more amazing news. Now, if we're actually talking about open source, it might be worthwhile to mention Nvidia's recent announcement, which is of course GR00T N1. Now, GR00T N1 is an open-source foundation model for humanoid robotics that was announced at NVIDIA GTC in 2025. What's crazy about this is that it represents the world's first open foundation model specifically designed for generalized humanoid robot reasoning and skills. Now, GR00T N1 evolved from Nvidia's earlier Project GR00T, expanding beyond industrial use cases to support humanoid robots in various form factors. And this model features a dual-system architecture inspired by human cognitive processes, consisting of two complementary components: a System 2 slow-thinking vision-language model (based on the NVIDIA Eagle VLM) that perceives and reasons about the environment and instructions and plans appropriate actions, and a System 1 fast-thinking diffusion transformer that translates these plans into precise robotic movements and actions. Now, this architecture enables humanoid robots to perform complex tasks autonomously, including grasping objects, moving them with one or both arms, transferring objects between arms, and executing multi-step operations that require sustained contextual understanding. And GR00T N1 is a single AI model with one set of weights, meaning it doesn't rely on multiple models for different tasks, allowing for unified control across various manipulation behaviors. Now, the model was trained on an expansive data set combining real-world humanoid movement data, synthetic data generated using Nvidia's Isaac GR00T blueprint, and large-scale video data from the internet. In fact, using components from the blueprint, Nvidia generated 780,000 synthetic data trajectories, equivalent to 6,500 hours, or 9 continuous months, of human demonstration data, in just 11 hours.
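The dual-system split described above can be sketched as a control loop: a slow planner that is consulted rarely, and a fast actor that runs on every tick. The functions below are trivial stand-ins I've invented for illustration (the real System 2 is a vision-language model and the real System 1 is a diffusion transformer); only the loop structure is the point.

```python
def system2_plan(observation, instruction):
    """Slow 'System 2' stand-in: reason about the scene, emit a high-level plan."""
    return ["reach", "grasp", "lift"] if "pick" in instruction else ["idle"]

def system1_act(plan_step, observation):
    """Fast 'System 1' stand-in: map one plan step to a low-level motor command."""
    commands = {"reach": (0.1, 0.0), "grasp": (0.0, 1.0),
                "lift": (-0.1, 1.0), "idle": (0.0, 0.0)}
    return commands[plan_step]

def control_loop(instruction, ticks=9, replan_every=3):
    """Act on every tick; consult the slow planner only every few ticks."""
    observation = {"object_visible": True}   # stand-in for camera input
    plan = system2_plan(observation, instruction)
    actions = []
    for tick in range(ticks):
        if tick and tick % replan_every == 0:           # slow cadence
            plan = system2_plan(observation, instruction)
        step = plan[min(tick // replan_every, len(plan) - 1)]
        actions.append(system1_act(step, observation))  # fast cadence
    return actions

actions = control_loop("pick up the cup")
```

The design point is latency: the planner can afford to be big and slow because the actor keeps emitting smooth, continuous commands between plans.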
Think about that. And combining this synthetic data with real data improved GR00T N1's performance by 40% compared to using only real data. Now, the model is fully customizable and adaptable through post-training for specific embodiments, tasks, and environments, with Nvidia making the training data and evaluation scenarios available on platforms like Hugging Face and GitHub to enable further development. Now, GR00T N1 has demonstrated robust performance across various benchmarks, achieving an average success rate of 45% across simulation benchmarks, compared to 33.4% for Diffusion Policy and 26.4% for BC Transformer. It's been designed to work with advanced humanoid robots like the Fourier GR-1 and the 1X NEO, with early access granted to companies including Boston Dynamics. I think it's worthwhile you guys actually take a look at this video, because whilst, yes, I can explain all the technical details, this video does a greater job of explaining exactly how this is going to be implemented. NVIDIA Isaac GR00T is a platform of SimReady data, simulation frameworks, synthetic data generation blueprints, and pre-trained models. NVIDIA Isaac GR00T N1 is an open, generalist foundation model for humanoid robots. GR00T N1 features a dual-system architecture for thinking fast and slow, inspired by principles of human cognitive processing. The slow-thinking system lets the robot perceive and reason about its environment and instructions and plan the right actions to take. The fast-thinking system translates the plan into precise and continuous robot actions. While internet-scale training data provides common sense and reasoning, it doesn't teach robots specific actions or control. So we need better data, and more of it. Human demonstration data is limited by the number of hours in a day.
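Since human demonstration data is scarce, the pipeline multiplies a small number of real captures into a much larger synthetic set. A toy version of that idea: the real pipeline conditions Cosmos on Omniverse scenes to render photoreal variations, whereas this sketch just jitters recorded trajectories with a seeded random generator, so all the names and numbers here are illustrative.

```python
import random

def augment(trajectory, copies, noise=0.02, seed=0):
    """Make `copies` jittered variants of one recorded trajectory."""
    rng = random.Random(seed)                # seeded so the output is repeatable
    return [[x + rng.uniform(-noise, noise) for x in trajectory]
            for _ in range(copies)]

# Two "real" demonstrations (just joint positions over four timesteps here).
real_demos = [[0.0, 0.2, 0.5, 0.9], [0.0, 0.3, 0.6, 1.0]]

# 2 real captures -> 200 synthetic ones.
synthetic = [variant for demo in real_demos
             for variant in augment(demo, copies=100)]
```

Scale the same multiplication up and a handful of teleoperation sessions becomes the "9 months of demonstrations in 11 hours" figure quoted above.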
With GR00T blueprints for synthetic data generation built on NVIDIA Omniverse and Cosmos, we can exponentially multiply a small number of real-world data captures into a massive, diverse training data set. GR00T N1's generalization lets robots manipulate common objects with ease and execute multi-step sequences collaboratively, across many environments and even multiple embodiments. And with synthetic data generation and reinforcement learning, humanoid robot developers can post-train GR00T N1 for their specific robot and task. The age of generalist robots is here, driven by developers building on NVIDIA Isaac GR00T. So one of the things I do want to talk about is, of course, the other robots that are in the space. Now, one of the robots that is essentially my favorite is the NEO robot. The NEO robot is designed by 1X Technologies, a company that is backed by OpenAI and many other serious investors. So

### [24:05](https://www.youtube.com/watch?v=t5pwwCOyim8&t=1445s) Neo Realistic Robot

this is a company that has managed to achieve quite a lot in the short space of time that they have been operating, and their recent robot looks super realistic. Now, what's amazing about this robot as well is that the recent demos at Nvidia GTC show us that the CGI isn't far off from what the robot is able to do in reality. So, a lot of people say that, oh, it's just a cool CGI demo. But you can actually see here the robot doing a variety of different things autonomously, and doing them in a way that looks realistic and humanlike. And I think that is really important so you guys can understand exactly how far the space has come. So, for those of you who are wondering, you know, is this all just demos and is it all just fancy nonsense when you're at these events: the video wasn't taken by me, but you can truly see how it's going to be in the real world when these robots are operating at scale. Now, I do want this section of the video to show you guys exactly what other people who are working in the robotics field are saying, because, like I said before, you really do want to see what the CEOs of these companies are saying. So, take a look at what the CEO and co-founder of 1X says about this robot. We do three things: we're harvesting energy, then we're turning that into products and services, and then we consume them. And we're so good at harvesting energy. If you asked someone from, like, 1890, and you said, "Oh, in 2024, you can buy a kilowatt of power for 10 cents," they would never believe you. It's just unfathomable the amount of power that is. Now, I think if you look back at this in a few years, people will say the same about physical labor.
It's like, yeah, why can't you have anything you want? Because physical labor, that's like zero overhead. There's no human effort involved. You take energy, you apply autonomy, and you get products and services. That's the way the world works. But of course, to us now, that's not the way the world works at all. And neither was it for energy before we really mastered how to harvest energy. And I'm not saying we're done yet on the energy front, but we're pretty good at it. And that means it's hard to define, right, what the impact is. I think we're going to go to a post-scarcity type economy. We're not going to be constrained by labor any longer. And you can think about it a bit like being able to prompt-engineer physical reality to do anything. Now, of course, some of you may have heard of the Figure robot. They have been releasing tons and tons of different updates into the robotics ecosystem, and it's been super intriguing to see where they're going with their robot development. Recently, they actually decided to end their partnership with OpenAI because they believe that they are better on their own. Now, this is super interesting because it means that whatever innovations they have discovered, it's pretty clear that they believe it's going to give them a decisive advantage over the competition. So what I'm wondering is how quickly Figure will manage to surpass the competition, as recently it seems like their robots have achieved milestone after milestone with no signs of slowing down. Take a look at what the CEO of Figure, Brett Adcock, says about where robotics is headed and why we definitely need a body for AGI. Yeah, I think we really need to figure out a way to give AGI a body here.
I think it's a really negative, almost dystopian future if we figure out how to solve AGI and it lives in a server somewhere and it's, you know, more intelligent than all of humankind. The humanoid robot is the ultimate deployment vector for AGI. You can't solve this with anything besides a human, like a mechanical human. You need something that is a single platform that, with no hardware changes, can do everything a human can, and you need something that can also be good for the neural nets. The neural net here in a humanoid can basically learn from transfer learning. It can multitask across a variety of different applications. So we can basically build one single neural net, a foundation model, that can power the whole robot to do everything. And now, another company that is never out of the spotlight is of course Boston Dynamics, and I think this recent robot demo from them is one that is truly impressive. We've always

### [28:40](https://www.youtube.com/watch?v=t5pwwCOyim8&t=1720s) Boston Dynamics Advances

known that Boston Dynamics has had probably the state-of-the-art robotics platform, and also the updates that allow their robot to do a variety of different actions. But seeing this recent update in practice was something that was absolutely incredible. I mean, I knew that the robot was advanced, but it's almost indistinguishable from a human in a kind of suit. And this is something that should really show you where we are in terms of robotics, because I don't think people understand that right now robotics is still in its very early days in terms of where things could head. Of course, there is still a lot to get to, but I think we have to understand that compared to AI, robotics is still in the early days. Once robots are completely autonomous, doing a variety of things, reacting to the environment autonomously, it's going to be completely game-changing. And the crazy thing about all of this is that Boston Dynamics aren't the only company doing this. There's also another company, coming out of China, that is taking part in the robotics revolution. So, you may have heard of another company that is truly changing robotics. And I'm almost speechless, because every time I see an update from them, it just pushes the boundaries further and further in terms of what's possible. Now, the company I'm referring to here is one called EngineAI. The company released robots that were so realistic that the first time they released a video, many people initially thought it was CGI. Now, of course, we know that it is not CGI because of the countless physical demos and all of the videos that have been plastered on social media, and all of the media outlets that have taken to Twitter to show you in real life how cool it is. But I think it's really important to know that innovation is happening across the board at a variety of different companies.
And it's quite likely year on year we're going to get consistent updates that show us just how incredible the robotics revolution is going to be. So with that being said, are you excited for this revolution or do you think this

---
*Source: https://ekstraktznaniy.ru/video/13175*