# Nvidia CEO Just Revealed The NEXT STEP In AI....

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=sgbxkxDFK9Q
- **Date:** 07.01.2025
- **Duration:** 21:40
- **Views:** 92,734

## Description

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

00:00 - Intro: Nvidia’s AI Evolution
00:21 - Physical AI Explained
01:10 - Nvidia Cosmos Platform
02:46 - World Models for Robotics
06:00 - Why Physical AI Needs More Data
07:02 - Isaac Groot for Humanoid Robots
09:10 - AI in Factories
12:02 - Autonomous Vehicles Revolution
13:30 - Nvidia Thor Processor
15:50 - Digital Twins for Safer Driving
20:40 - Scaling Training Data


Links From Today's Video:


Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=sgbxkxDFK9Q) Intro: Nvidia’s AI Evolution

so Nvidia CEO Jensen Huang at CES actually gave us real insight into what the next step for AI is. It's not actually agentic AI, but rather physical AI. You can see on this chart right here, we have generative AI, which is the first step; then, of course, we have agentic AI, which is going to be happening largely this year; and then, of

### [0:21](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=21s) Physical AI Explained

course, right at the top, we have physical AI, and this is essentially embodied AI, which is self-driving cars and general robotics. So take a quick look, because this video is going to break down why this is absolutely incredible and why this is the next evolution in AI.

The next frontier of AI is physical AI. Model performance is directly related to data availability, but physical-world data is costly to capture, curate, and label. Nvidia Cosmos is a world foundation model development platform to advance physical AI. It includes autoregressive world foundation models, diffusion-based world foundation models, advanced tokenizers, and an Nvidia CUDA- and AI-accelerated data

### [1:10](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=70s) Nvidia Cosmos Platform

pipeline. Cosmos models ingest text, image, or video prompts and generate virtual world states as videos. Cosmos generations prioritize the unique requirements of AV and robotics use cases, like real-world environments, lighting, and object permanence. Developers use Nvidia Omniverse to build physics-based, geospatially accurate scenarios, then output Omniverse renders into Cosmos, which generates photoreal, physically based synthetic data, whether diverse objects or environments, conditions like weather or time of day, or edge-case scenarios. Developers use Cosmos to generate worlds for reinforcement learning AI feedback to improve policy models, or to test and validate model performance, even across multi-sensor views. Cosmos can generate tokens in real time, bringing the power of foresight and multiverse simulation to AI models, generating every possible future to help the model select the right path. Working with the world's developer ecosystem, Nvidia is helping advance the next wave of physical AI. Now, next, this

### [2:46](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=166s) World Models for Robotics

is where Jensen Huang actually talks about how we need a sort of world foundation model, quite like the foundational models we have for large language models, which allow us to talk to those models in natural language. We're going to need something very similar for robotics, and this is essentially what they're trying to build with their Cosmos model.

Speaking of Linux, let's talk about physical AI. So, physical AI: imagine, whereas with your large language model you give it your context, your prompt, on the left, and it generates tokens one at a time to produce the output, that's basically how it works. The amazing thing is this model in the middle is quite large, has billions of parameters, and the context length is incredibly large, because you might decide to load in a PDF. In my case I might load in several PDFs before I ask it a question. Those PDFs are turned into tokens. The basic attention characteristic of a transformer has every single token find its relationship and relevance against every other token, so you could have hundreds of thousands of tokens, and the computational load increases quadratically. It takes all of the parameters, all of the input sequence, processes it through every single layer of the transformer, and produces one token. That's the reason why we need a Blackwell. And then the next token is produced: when the current token is done, it puts the current token into the input sequence and takes that whole thing to generate the next token. It does it one at a time. This is the transformer model; it's the reason it is so incredibly effective, and computationally demanding.

What if, instead of PDFs, it's your surroundings? And the prompt, the question, is a request: go over there, pick up that box, and bring it back. And instead of producing tokens that are text, it produces action tokens. Well, what I just described is a very sensible thing for the future of robotics, and the technology is right around the corner. But what we need to do is create, effectively, the world model, as opposed to GPT, which is a language model. And this world model has to understand the language of the world. It has to understand physical dynamics, things like gravity and friction and inertia. It has to understand geometric and spatial relationships, cause and effect: if you drop something, it falls to the ground; if you poke at it, it tips over. It has to understand object permanence: if you roll a ball over the kitchen counter, when it goes off the other side, the ball didn't leave into another quantum universe; it's still there. And so all of these types of understanding, this intuitive understanding, are things most models today have a very hard time with. So one of the
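The token-at-a-time loop described above can be sketched in plain Python. This is a toy illustration of autoregressive decoding and of why full attention grows quadratically with context length; the tiny embedding table and the "LM head" stand-in are invented for the example, not anything Nvidia or an actual LLM ships.

```python
import math

def naive_attention(xs):
    """Every token scores against every other token: an n-by-n score
    matrix, so compute grows quadratically with sequence length n."""
    n, d = len(xs), len(xs[0])
    out = []
    for q in xs:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in xs]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [wi / z for wi in w]
        out.append([sum(wi * k[j] for wi, k in zip(w, xs)) for j in range(d)])
    return out

def generate(tokens, n_steps, embed):
    """Autoregressive loop: emit one token, append it to the input
    sequence, and reprocess the whole sequence to emit the next."""
    for _ in range(n_steps):
        xs = [embed[t] for t in tokens]          # embed the full context
        h = naive_attention(xs)                  # full-sequence attention
        nxt = int(abs(sum(h[-1])) * 1000) % len(embed)  # stand-in for the LM head
        tokens = tokens + [nxt]
    return tokens

embed = [[math.sin(i * j + 1) for j in range(4)] for i in range(8)]  # toy vocab of 8
out = generate([1, 2, 3], n_steps=5, embed=embed)
```

Note that each step reprocesses the entire sequence, which is exactly the cost Huang is pointing at: n tokens of context mean an n-by-n attention computation per generated token.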

### [6:00](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=360s) Why Physical AI Needs More Data

things that most people might not know is that the reason physical AI hasn't caught up to the level of these large language models is, of course, that we don't have enough data. ChatGPT and these large language models are trained on trillions and trillions of different pieces of text from around the world, and so if we want humanoid robots that actually work, we're going to need that data from somewhere. Nvidia's project Isaac Groot is essentially the framework they're using to scale up their data collection efforts, and once this becomes even more streamlined with future iterations, it's going to become easier to collect more data, and thus humanoid robots are going to rapidly increase in capability. And we all know how fast Nvidia moves, which is why I'm so bullish on robotics for the coming years; it's quite likely to move a lot faster than you'd predict. This is something you should pay attention to, as, quite like AI, it's going to affect us, maybe not as soon as AI agents this year and next year, but definitely in the very near future, as robots start to take roles in certain factories, automating a variety of processes they couldn't before. Developers

### [7:02](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=422s) Isaac Groot for Humanoid Robots

around the world are building the next wave of physical AI: embodied robots, humanoids. Developing general-purpose robot models requires massive amounts of real-world data, which is costly to capture and curate. Nvidia Isaac Groot helps tackle these challenges, providing humanoid robot developers with four things: robot foundation models, data pipelines, simulation frameworks, and a Thor robotics computer. The Nvidia Isaac Groot blueprint for synthetic motion generation is a simulation workflow for imitation learning, enabling developers to generate exponentially large data sets from a small number of human demonstrations. First, Groot Teleop enables skilled human workers to portal into a digital twin of their robot using the Apple Vision Pro. This means operators can capture data even without a physical robot, and they can operate the robot in a risk-free environment, eliminating the chance of physical damage or wear and tear. To teach a robot a single task, operators capture motion trajectories through a handful of teleoperated demonstrations, then use Groot Mimic to multiply these trajectories into a much larger data set. Next, they use Groot Gen, built on Omniverse and Cosmos, for domain randomization and 3D-to-real upscaling, generating an exponentially larger data set. The Omniverse and Cosmos multiverse simulation engine provides a massively scaled data set to train the robot policy. Once the policy is trained, developers can perform software-in-the-loop testing and validation in Isaac Sim before deploying to the real robot. The age of general robotics is arriving, powered by Nvidia Isaac Groot.
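The demonstration-multiplication idea above (a handful of teleoperated trajectories expanded into a large training set) can be caricatured in a few lines. The function and jitter scheme below are hypothetical stand-ins for what Groot Mimic does; the real pipeline is physics-aware and validates every generated trajectory in simulation.

```python
import random

def amplify(demo, n_variants, jitter=0.02, seed=0):
    """Turn one teleoperated trajectory (a list of waypoints) into many
    slightly perturbed variants -- a toy stand-in for Groot Mimic-style
    data multiplication."""
    rng = random.Random(seed)
    return [[wp + rng.uniform(-jitter, jitter) for wp in demo]
            for _ in range(n_variants)]

demo = [0.0, 0.1, 0.3, 0.6, 1.0]          # five gripper waypoints from one demo
dataset = amplify(demo, n_variants=200)   # 1 demonstration -> 200 training samples
```

The point is the leverage: one human demonstration seeds hundreds of usable samples, which is why the keynote keeps calling the data sets "exponentially larger."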

### [9:10](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=550s) AI in Factories

we're going to have mountains of data to train robots. With Nvidia Isaac Groot, this is our platform to provide platform technology, technology elements, to the robotics industry to accelerate the development of general robotics. Now, if you're wondering just exactly how this kind of AI is going to revolutionize factories, look no further than this example they provide, where they're already maximizing efficiency in a factory by running simulations of various processes in the background. It really shows how transformative AI technology is going to be once it's integrated into several parts of the economy.

Kion, the supply chain solution company, Accenture, a global leader in professional services, and Nvidia are bringing physical AI to the $1 trillion warehouse and distribution center market. Managing high-performance warehouse logistics involves navigating a complex web of decisions influenced by constantly shifting variables. These include daily and seasonal demand changes, space constraints, workforce availability, and the integration of diverse robotic and automated systems. Predicting the operational KPIs of a physical warehouse is nearly impossible today. To tackle these challenges, Kion is adopting Mega, an Nvidia Omniverse blueprint for building industrial digital twins to test and optimize robotic fleets. First, Kion's warehouse management solution assigns tasks to the industrial AI brains in the digital twin, such as moving a load from a buffer location to a shuttle storage solution. The robots' brains are in a simulation of a physical warehouse, digitalized into Omniverse using OpenUSD connectors to aggregate CAD, video and image-to-3D, lidar, point cloud, and AI-generated data. The fleet of robots executes tasks by perceiving and reasoning about their Omniverse digital twin environment, planning their next motion, and acting. The robot brains can see the resulting state through sensor simulations and decide their next action. The loop continues while Mega precisely tracks the state of everything in the digital twin. Now Kion can simulate infinite scenarios at scale while measuring operational KPIs such as throughput, efficiency, and utilization, all before deploying changes to the physical warehouse. Together with Nvidia, Kion and Accenture are reinventing industrial autonomy.
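The perceive-plan-act loop Mega runs over the digital twin, with KPIs measured as it goes, can be caricatured as a discrete-tick simulation. The robot count, task duration, and throughput KPI below are invented for illustration, not anything from the Kion deployment.

```python
def simulate_fleet(num_robots, tasks, ticks, task_duration=3):
    """One digital-twin tick loop: idle robots perceive the task queue,
    plan (claim the next load), and act (work it for task_duration ticks),
    while we track a throughput KPI (completed loads per tick)."""
    busy = [0] * num_robots          # remaining work ticks per robot
    queue = list(tasks)
    done = 0
    for _ in range(ticks):
        for i in range(num_robots):
            if busy[i] == 0 and queue:
                queue.pop()              # plan: claim the next load
                busy[i] = task_duration  # act: start moving it
            elif busy[i] > 0:
                busy[i] -= 1
                if busy[i] == 0:
                    done += 1            # load delivered to storage
    return done, done / ticks

done, throughput = simulate_fleet(num_robots=2, tasks=list(range(10)), ticks=30)
```

Because a run like this is cheap, you can sweep fleet sizes, layouts, or demand profiles and compare the KPI before touching the physical warehouse, which is the whole pitch of the digital twin.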

### [12:02](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=722s) Autonomous Vehicles Revolution

Is that incredible? Everything is in simulation. In the future, every factory will have a digital twin, and that digital twin operates exactly like the real factory. In fact, you could use Omniverse with Cosmos to generate a whole bunch of future scenarios, and then an AI decides which of the scenarios is the most optimal for whatever KPIs, and that becomes the programming, the constraints, the program if you will, for the AIs that will be deployed into the real factory.

The next thing I think most people are discounting in AI is actually the future of autonomous driving. Now, most people haven't experienced Waymo, but trust me when I say this is a company that is rapidly scaling its offering of driverless rides. They're going to be expanding to more and more cities throughout the coming years, and so far, from what I've heard when I speak to people who have actually experienced Waymo, they say it's the best driving experience they've ever had. And this is something Nvidia seeks to be a part of, because of course AI is going to impact a variety of different categories, and I wouldn't be surprised if, in the future, autonomous driving becomes more and more common practice. I mean, when I've spoken to people, they're like: you know what, I don't have to interact with a human, they're not playing any music, it's quiet, it's fine, it's easy, there's no harsh driving. So this is going to be something that will change the future of our economies and our roads, and we can expect these cars to start scaling through our cities sometime in the near future. The next example: autonomous

### [13:30](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=810s) Nvidia Thor Processor

vehicles. The AV revolution has arrived. After so many years, with Waymo's success and Tesla's success, it is very clear autonomous vehicles have finally arrived. Well, our offering to this industry is the three computers: the training systems to train the AIs; the simulation systems and the synthetic data generation systems, Omniverse and now Cosmos; and also the computer that's inside the car. Each car company might work with us in a different way, use one or two or three of the computers. We're working with just about every major car company around the world: Waymo and Zoox, and Tesla of course in their data center; BYD, the largest EV company in the world; JLR's got a really cool car coming; Mercedes has a fleet of cars coming with Nvidia, starting this year, going to production. And I'm super pleased to announce that today, Toyota and Nvidia are going to partner together to create their next-generation AVs. Just so many cool companies: Lucid and Rivian and Xiaomi, and of course Volvo; so many different companies. Waabi is building self-driving trucks, and we announced this week that Aurora is also going to use Nvidia to build self-driving trucks. 100 million cars are built each year, a billion vehicles are on the road all over the world, a trillion miles are driven around the world each year, and that's all going to be either highly autonomous or fully autonomous. So this is going to be a very large industry; I predict that this will likely be the first multi-trillion-dollar robotics industry. This business for us, notice, with just a few of these cars starting to ramp into the world, is already $4 billion, and this year probably on a run rate of about $5 billion. So a really significant business already, and this is going to be very large. Well, today we're announcing that our next-generation processor for the car, our next-generation computer for the car, is called Thor. I have one right here, hang on a

### [15:50](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=950s) Digital Twins for Safer Driving

second. Okay, this is Thor. This is a robotics computer. It takes sensors, just a massive amount of sensor information, and processes it: cameras, high-resolution radars, lidars, they're all coming into this chip, and this chip has to process all that sensor data, turn it into tokens, put them into a transformer, and predict the next path. And this AV computer is now in full production. Thor is 20 times the processing capability of our last generation, Orin, which is really the standard of autonomous vehicles today. So this is just really quite incredible: Thor is in full production. This robotics processor, by the way, also goes into a full robot, so it could be an AMR, it could be a humanoid robot, it could be the brain of a manipulator. This processor basically is a universal robotics computer. The second part of our drive system that I'm incredibly proud of is the dedication to safety. DriveOS, I'm pleased to announce, is now the first software-defined, programmable AI computer that has been certified up to ASIL-D, which is the highest standard of functional safety for automobiles, the only and the highest. And so I'm really proud of this: ASIL-D, ISO 26262. It is the work of some 15,000 engineering years; this is just extraordinary work. And as a result, CUDA is now a functionally safe computer, so if you're building a robot: Nvidia CUDA.

And now we actually get to take a look at the Nvidia digital twins, which are, once again, how they're scaling up their data collection efforts in order to make these vehicles a lot more accurate and a lot safer. It's truly incredible, considering that in the future we're likely going to have millions and even billions of scenarios and situations that we will probably never encounter in the real world but that the AI will have been trained on 10,000 examples of, so it's probably going to be better, lots and lots better, than the average driver, which is remarkable when you think about it. The autonomous vehicle revolution is here.

Building autonomous vehicles, like all robots, requires three computers: Nvidia DGX to train AI models, Omniverse to test driving and generate synthetic data, and Drive AGX, a supercomputer in the car. Building safe autonomous vehicles means addressing edge scenarios, but real-world data is limited, so synthetic data is essential for training. The autonomous vehicle data factory, powered by Nvidia Omniverse AI models and Cosmos, generates synthetic driving scenarios that enhance training data by orders of magnitude. First, Omnimap fuses map and geospatial data to construct drivable 3D environments. Driving scenario variations can be generated from replayed drive logs or AI traffic generators. Next, a neural reconstruction engine uses autonomous vehicle sensor logs to create high-fidelity 4D simulation environments. It replays previous drives in 3D and generates scenario variations to amplify training data. Finally, Edify 3DS automatically searches through existing asset libraries or generates new assets to create sim-ready scenes. The Omniverse scenarios are used to condition Cosmos to generate massive amounts of photorealistic data, reducing the sim-to-real gap, and, with text prompts, to generate near-infinite variations of the driving scenario. With Cosmos Nemotron video search, the massively scaled synthetic data set, combined with recorded drives, can be curated to train models. Nvidia's AI data factory scales hundreds of drives into billions of effective miles, setting the standard for safe and advanced autonomous driving. Is that incredible?
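The "one recorded drive seeds many variations" step can be pictured as a simple combinatorial expansion over conditions. The condition labels below are invented for illustration; the real pipeline conditions Cosmos on Omniverse scenarios and text prompts rather than tags like these.

```python
import itertools

def scenario_variants(base, weathers, times, traffic_levels):
    """Cross one recorded drive with weather, time-of-day, and traffic
    conditions -- a toy picture of how a single real scenario seeds
    many synthetic training variations."""
    return [{"base": base, "weather": w, "time": t, "traffic": k}
            for w, t, k in itertools.product(weathers, times, traffic_levels)]

variants = scenario_variants(
    "highway merge, drive_0042",        # hypothetical drive-log label
    weathers=["clear", "rain", "fog", "snow"],
    times=["day", "dusk", "night"],
    traffic_levels=["light", "heavy"],
)   # 4 * 3 * 2 = 24 variants from a single log
```

Stack a few more condition axes and the multiplier per drive quickly reaches the thousands, which is where "orders of magnitude" more training data comes from.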

### [20:40](https://www.youtube.com/watch?v=sgbxkxDFK9Q&t=1240s) Scaling Training Data

we take thousands of drives and turn them into billions of miles. We are going to have mountains of training data for autonomous vehicles. Of course, we still need actual cars on the road, and of course we will continuously collect data for as long as we shall live. However, synthetic data generation, using this multiverse, physically based, physically grounded capability, means we can generate data for training AIs that is physically grounded, accurate, and plausible, so that we have an enormous amount of data to train with. The AV industry is here. This is an incredibly exciting time; I'm super excited about the next several years. I think, just as computer graphics was revolutionized at such an incredible pace, you're going to see the pace of AV development increasing tremendously over the next several years.
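The "thousands of drives into billions of miles" claim is easy to make concrete as arithmetic. The numbers below are illustrative assumptions, not Nvidia's figures.

```python
def effective_miles(num_drives, miles_per_drive, variants_per_drive):
    """Each recorded drive seeds many synthetic variations, so
    effective training miles = real miles x amplification factor."""
    real = num_drives * miles_per_drive
    return real, real * variants_per_drive

real, effective = effective_miles(
    num_drives=10_000,          # assumed number of recorded drives
    miles_per_drive=25,         # assumed average drive length
    variants_per_drive=10_000,  # assumed synthetic variations per drive
)
# 250,000 real miles amplified into 2.5 billion effective miles
```

With a per-drive amplification factor in the thousands, the bottleneck shifts from collecting miles to curating which synthetic variations actually improve the model.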

---
*Source: https://ekstraktznaniy.ru/video/13425*