# Big AI News: Claude 4 Details, GPT-5 Details, Google's New Video And Image Models, Robots and more...

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=jJFza9o_BUI
- **Date:** 15.05.2025
- **Duration:** 31:09
- **Views:** 40,187
- **Source:** https://ekstraktznaniy.ru/video/12784

## Description

Discover the AI-powered smart cleaning with the Roborock Saros Z70 from @Roborockglobal – Get $600 off between May 12 -18 via Amazon: https://bit.ly/4lVWXH9 and Roborock website: https://bit.ly/42CqM8i
#roborock #robotvacuum #sarosz70 #vacuum #artificialintelligence

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

00:00 – Claude Returns Soon
00:46 – Thinking Model Breakthrough
02:04 – Claude Writes Code
03:29 – Neptune in Testing
04:21 – Robotics Revolution Begins
06:06 – AI-Powered Vacuums
08:00 – Industrial Humanoids Rise
09:14 – Voice Scam Threats
11:02 – Meta’s Four Innovations
13:15 – Google’s 3D Shopping
14:48 – Gemini Outperforms Claude
16:07 – Self-Improving AI
18:03 – AI Coding Agents
19:59 – Laptop Medical Model
22:09 – GPT-5 Balancing Act
23:14 – AI Junior Engineers
24:25 – AI Multiplayer Games
26:09 – AI Skill Profiles
27:02 – Musk’s Future Vision
29:09 – Industry Reinvention Begins

## Transcript

### Claude Returns Soon [0:00]

So, let's not waste any time. Let's get into one of the first news stories. Anthropic actually has two new versions of its models coming. And I think a lot of you are going to be happy, because Claude Opus was the previous model line that was kind of discontinued. Apparently, these are going to be released in the coming weeks, according to two people who have used them. And apparently these are going to be a different kind of model, because what makes them different from existing reasoning models is their ability to go back and forth between thinking and responding. Right now we are in the paradigm where the standard models, the GPT-series models, simply output the first thing that comes to mind. Then we have the models that think over time and of course then

### Thinking Model Breakthrough [0:46]

respond. But it looks like we're now going to enter a third paradigm, which is a hybrid of the two, and this is what Anthropic is reportedly building. We can see here that they are exploring a different way to solve a problem: the ability to use external tools, applications, and databases. And apparently now, if the model gets stuck, it can go back to the reasoning mode to think about what's gone wrong and then self-correct, one of the people said. So far I have never seen this in a model, and I wouldn't be surprised if this is really effective, because over time we've seen that the things people don't try are often the most effective. When you think about how we reason through problems with models, usually we talk to the model, the model reasons, thinks to itself, gives an output, then we go back to giving it pretty basic input, and the model goes off and starts thinking again. Overall, I think this is rather interesting because it could potentially unlock new capabilities, and it might actually let us see what models can do over a much longer horizon. When it comes to long-horizon tasks, this could be one of the ways they potentially allow AI systems to reason for longer. So, it's going to be really interesting to see what comes out. And I'm really excited that they are bringing back the Claude Opus series. A lot of people did love that series, so maybe they've finished training this model. I mean super

### Claude Writes Code [2:04]

interesting from Anthropic. Of course, as well, one of the things I saw this week: Anthropic's lead engineer, Boris Cherny, spoke about how 80 to 90% of the code they use is written by Claude. Some tasks still require handwritten code, but 80 to 90% is pretty high. So, this is a short clip. I mean, honestly, it was just quite surprising, because I remember in the last two to three days seeing clips from Google and Microsoft where they explicitly stated that it was around 20 to 30%, and I think even Meta made some similar claims. So Anthropic either have an incredible coding model, which they obviously do, or I'm guessing they just use it more efficiently. Pretty high, probably near 80. Very high. Yeah. A lot of human code review though. Yeah, a lot of human code review. I think some of the stuff has to be handwritten and some of the code can be written by Claude, and there's sort of a wisdom in knowing which one to pick and what percent for each kind of task. So usually where we start is Claude writes the code, and then if it's not good, maybe a human will dive in. There's also some stuff where I actually prefer to do it by hand. So, like intricate data model refactoring or something, I won't leave it to Claude, because I have really strong opinions and it's easier to just do it and experiment than it is to explain it to Claude. So yeah, I think that nets out to maybe 80 to 90% Claude-written code overall. Now that wasn't

### Neptune in Testing [3:29]

the only thing coming out of Anthropic. They do have a model in testing called Claude Neptune. This was found by TestingCatalog. I personally do think this is probably a code name for the new model; it's quite unlikely they would just ship it under that name. We've seen this in the past. There have been code names such as Dragonfly or Nebula, you know, these really cool names. And one running joke is that a lot of the time these companies manage to come up with cooler code names than actual model names. I mean, if we look at the recent model series, the GPT series, we know the naming has gotten particularly complex. So, yeah, this is something that is quite likely to release within the next 2 to 3 weeks. Now, if we're talking about things releasing, there was actually a rather interesting breakthrough in robotics. Right now the screen is probably black, but a video is about to play, and it's just absolutely incredible. AI has already changed the

### Robotics Revolution Begins [4:21]

digital world, but to live among us, it needs to move beyond internet data into the chaos of reality. So, what's missing? The ability to reason through our world, just like we do, with an intuitive understanding of physics. We learn that from birth by interacting with the world, but AI doesn't. Methods like reinforcement learning and behavior cloning can learn specific tasks, but they don't handle new situations well. They don't understand how the world works; they just copy behavior or learn by trial and error. Enter latent space models. They simplify this messy real-world data into abstract maps. Think of it a bit as AI building its own understanding of reality, just like our brain does. Deep variational Bayes filters, or DVBFs, take this one step further, learning the laws of motion without being spoonfed each and every example. DVBFs encode sensory inputs, like a robot's camera or touch sensors, into a latent space, and using Bayesian inference they update their beliefs about the world as new data comes in, then decode predictions to act. It's a bit like giving AI a sense of imagination. Unlike behavior cloning or traditional neural networks, DVBFs don't just generalize from data; they understand the why behind actions. They need much less data, adapt on the fly, and can predict what's next. For me, that's like AI finally grasping the rules of the game, not just following the playbook. And so, yeah, what you guys saw there was the company Foundation Robotics talking about basically the ChatGPT moment for robotics. And honestly, I've spoken about robotics so much. I think robotics is one of the most underrated things going on right now in AI. I think it's because right now we don't have any fancy demos that people can show off. But the main

### AI-Powered Vacuums [6:06]

things that have been holding robotics back, such as not being able to understand the environment, or to look at new environments and do new things that essentially aren't in the training data, are progressively getting solved. And I really don't think it's going to be that far away when we have general humanoid robots that can do a vast number of different jobs and tasks in the economy. It's going to be super interesting to see how our economy evolves, because this is going to be an entire new labor force that is working 24/7. So maybe society is about to start evolving in a really rapid way that we've never seen before. Speaking of robotics, if you've ever wondered how far smart home technology has come, let me introduce you to today's sponsor, the Roborock Saros Z70. This innovative robotic vacuum is equipped with the industry's first AI-powered mechanical arm, genuinely transforming what home cleaning technology can do. The Saros Z70 isn't just about vacuuming your floors. It intelligently picks up and organizes household items, significantly reducing clutter and improving overall efficiency. Its unique OmniGrip technology allows it to gently handle objects, meaning fewer interruptions and more thorough cleaning. Powered by advanced AI, the Saros Z70 rapidly learns your home's layout, expertly navigates around furniture, and adapts in real time to changes in your living environment. This isn't just tech for the sake of tech. It's practical innovation designed to simplify daily routines. Beyond the basic functionalities, the Saros Z70 integrates seamlessly into your existing smart home systems, enhancing convenience without unnecessary complexity. And if you're skeptical about whether robotic vacuums have genuinely evolved, the Saros Z70 might just surprise you.
Check it out yourself via the link in the description. Thanks

### Industrial Humanoids Rise [8:00]

to Roborock for sponsoring this video. Now, continuing to talk about robotics, we have to talk about this company. Persona AI is the labor platform for tough, skilled industrial work. Whether the work site is a shipyard, energy infrastructure, a construction site, or another dynamic environment, humanoids are going to be doing the majority of the work. Now, what I like here is that they have all of these humanoids that are essentially somewhat modular, in the ability to adapt them for specific tasks. You can see the welder; the fabricator, which cuts, shapes, and finishes metal work under tough conditions; and the assembler, which performs assembly in tough conditions. I mean, I've seen so many movies that depict a future with these humanoid robots in factories just doing stuff, but it seems like every single day we get closer and closer to actually realizing this in our world. And I know it seems super weird, but the robotics breakthroughs are continuing to happen whether we like them or not. And it seems like this is going to be a part of our future, especially if you're young right now. I mean, I can't imagine what the world looks like in 50 years when these things are arguably 10 times better than us, faster than us,

### Voice Scam Threats [9:14]

more efficient than us. I mean, what does the world truly look like? Now, for the bad side of AI, there's this one right here. Unfortunately, the FBI has been warning of AI voice messages impersonating top US officials. And this one is kind of crazy, because we all know that AI is being used to phish people, in different campaigns to convince people to do this and that. But what happens if AI has gotten so good that it's able to impersonate top United States officials or government officials, and things get leaked? Before, of course, there was security you could have against different hacks and things like that. But if someone calls you and has your code word (say someone is a spy or something; I honestly don't know how all of that works, though I could sit here and tell you that I do), the point I'm trying to make is that this is now a really hard problem to face. Because not only do you have AI voicemails and AI voices, you have AI faces, you've got texts, you've got large language models that can text just like that person. So security is going to have to get even better. And think about it: if the FBI is struggling with this problem, can you imagine what the average person has to deal with? Honestly, guys, when you're dealing with really sensitive information and data, please always try to double and triple check whoever you're talking to, even if it's on video, because I saw another clip of a faked live webcam that just completely blows everything out of the water. You have to double and triple check everything, because these scams are only going to get better, and this is the worst they will ever be.
And they talk about how the scammers were sending text messages and AI-generated voice messages in an effort to establish rapport before gaining access to personal accounts. So please, just double check everything before you send over any sensitive data. Now, Meta

### Meta’s Four Innovations [11:02]

also introduced four new releases. Meta have actually come under fire, I guess you could say, because their recent releases haven't been what many people expected. But just because certain areas aren't going that well, it doesn't mean Meta aren't pushing the industry forward as a whole. When it comes to overall advancements in AI, Meta still have several key innovations under their belt, and they just released four of them. Hello everyone. Today, we're excited to share some groundbreaking advancements from Meta's Fundamental AI Research team. These releases underscore our dedication to advanced machine intelligence through focused scientific and academic progress. First, we're introducing the Open Molecules 2025 dataset and Meta's Universal Model for Atoms. This model and dataset combination enables exceptional speed and accuracy for modeling the world at the atomic scale, accelerating the discovery of new molecules and materials. By making Open Molecules and the Universal Model available, we're enabling researchers to drive innovation in fields such as healthcare and mitigating climate change. Next, we're releasing Adjoint Sampling, a highly scalable algorithm for training generative models from only scalar rewards, without access to any reference data. Adjoint Sampling achieves impressive results on molecule generation using only large-scale energy models. To encourage further research, we released a new benchmark that can directly impact progress in AI through chemistry. Finally, in collaboration with the Rothschild Foundation Hospital, we're unveiling a large-scale study that maps how language representations emerge in the developing brain. This research offers new insights into the brain basis of language development and shows its parallels with large language models, paving the way for future breakthroughs in AI and neuroscience.
By making our research widely available, we aim to foster an open ecosystem that accelerates progress and drives innovation. I encourage you to explore our full blog post for more details, and together let's push the boundaries of AI research to solve the big scientific questions about human and machine intelligence. Thank you. So now Google

### Google’s 3D Shopping [13:15]

has been quietly building something rather fascinating that will probably change your online shopping experience. They built an AI system that takes three regular photos of a product and turns them into a fully immersive, 3D shoppable experience. So imagine you're shopping online. Normally you're stuck with flat images, maybe a 360 spin if the brand is spending actual money on those pictures. But thanks to Google's video model Veo, it can actually generate ultrarealistic 360 videos of the product. So you can actually see how light reflects off the materials, how the shadows move, and how the geometry looks from different angles. This is really cool, because they've basically built something that is going to change the game for the online shopping experience. And it's moments like these, innovations like these, that we really don't notice. But overall, you can see that AI is trickling into almost every single industry. Now, that wasn't the only thing Google managed to do. They actually managed to do something insane; they managed to outdo themselves once again. Remember how, one or two weeks ago, Google released Gemini 2.5 Pro, which was arguably the best model ever in pretty much every category, even on LMSYS and on major benchmarks? What was crazy is that they also released Gemini 2.5 Pro Preview (I/O Edition), which is essentially a nod towards their I/O conference, coming up in around one week's time on the 21st. So, it's

### Gemini Outperforms Claude [14:48]

pretty crazy, because what we have here is an AI system that is now even better than Claude 3.7 Sonnet on coding tasks. And I've actually used it to code quite a few things, and I do notice this myself. So it's pretty crazy that Google are now basically state-of-the-art in basically every single area. And I don't know who can dethrone Google, because the number of innovations coming out of that company is honestly so surprising that I didn't even realize they were this good and this dedicated, because I do remember when Google were actually behind for quite some time. So they have seriously turned themselves around and managed to get well ahead of the competition. They are 147 Elo points above Claude 3.7 Sonnet. But remember, as we discussed earlier, they will also be releasing a model very shortly, in a few weeks now. And this is a demo of that new Google model. One of the fun ways to test out a new model is to see how it improves existing applications. On the right, you can see the old 2.5 Pro and a fairly basic app. On the left, you can see it actually understood the video in depth and then made a fully functional quiz. This took the experience to the next level, and that is exactly what we hope you do with the new Gemini 2.5 Pro. Of course

### Self-Improving AI [16:07]

with the upcoming Google I/O, everyone's excited. We also managed to find that Google could be preparing new models such as Veo 3.0 and Imagen 4.0. This was found in code by TestingCatalog. Honestly, Veo 2 is just absolutely outstanding. I use it all the time; it's just a really crazy model. And if there is an Imagen 4.0, I'm honestly not even sure how good it's going to be, because Imagen 3 is actually really good. It's one of those models that is so underrated because it just doesn't get talked about. You've got GPT-4o image generation, which is of course really good with infographics and things like that. But Imagen 3, I think, is a really good model, and Imagen 4.0 might just steal the crown from GPT-4o image generation. So definitely look out for that. And remember how I said Google are continuing to innovate constantly? There was also AlphaEvolve. This one was absolutely insane; I've got a full video on this coming later. But the long story short is that AlphaEvolve is essentially closing the recursively self-improving loop. You know how they say AI doesn't recursively self-improve? Well, this entire AI system that they built actually made breakthroughs in mathematics, and, because it's a coding agent that can optimize code, it managed to speed up Gemini's training runs by 1%. So it's going to be absolutely insane, because now we've essentially got this self-improving system. It's not self-improving 24/7, but over time things are continuing to get faster. So it kind of feels like the singularity is fast approaching.
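To make that evolve-and-evaluate loop a bit more concrete, here is a minimal toy sketch. AlphaEvolve itself has an LLM propose code edits and scores candidates against real benchmarks; this stand-in just mutates two numbers and scores them, so everything below (the objective, the mutation scale, the population sizes) is purely illustrative, not DeepMind's actual setup.

```python
import random

def evaluate(params):
    # Toy "program": score a candidate by how close it gets to a target.
    # In AlphaEvolve the evaluator compiles and benchmarks real code instead.
    x, y = params
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)  # best possible score is 0

def mutate(params, scale=0.5):
    # Stand-in for the LLM proposing an edit: randomly perturb the candidate.
    return tuple(p + random.gauss(0, scale) for p in params)

def evolve(generations=200, population_size=20, seed=0):
    random.seed(seed)
    population = [(random.uniform(-10, 10), random.uniform(-10, 10))
                  for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: population_size // 4]  # keep the best quarter
        population = parents + [mutate(random.choice(parents))
                                for _ in range(population_size - len(parents))]
    return max(population, key=evaluate)

best = evolve()
print(best, evaluate(best))
```

The interesting property is the same one the video describes: because the best candidates survive each round, the score only improves over time, and the loop needs no labeled examples, just an evaluator.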
And additionally, Google, ahead of I/O, have been demonstrating to employees and outside developers an array of products, including an AI agent for software development, known internally as the software development lifecycle agent.

### AI Coding Agents [18:03]

It's intended to help software engineers navigate every stage of the software process, from responding to tasks to documenting code, according to three people who have seen demonstrations of the product or have been told about it by Google employees. They've described it as an always-on co-worker that can help identify bugs to fix or flag security vulnerabilities. Now, they are not close to releasing this; they said they're not really sure if they're going to release it at all, but it's potentially going to be revealed at Google I/O. And the reason I included this story is that they're not the only company working on this. We know that OpenAI employees have also been testing an agentic software engineer for months, and the company has even been demonstrating this exact thing to outsiders for months. Executives such as CFO Sarah Friar have even talked about it publicly. But so far all we've seen is the Codex CLI, an open-source coding assistant that the company released in April. So it's going to be really interesting to see, if Google releases their agentic software engineer, will OpenAI respond? We know that historically this is what they've tried to do; they've always tried to steal Google's thunder. But I feel that over time it's definitely become harder and harder to do that. So it's going to be super interesting to see where that goes in the future, whether OpenAI keeps playing that game or goes completely their own route. I think OpenAI are probably not going to try to contend with Google too much; they're probably just going to continue trying to gain more customers. And I probably should have mentioned that they managed to buy Windsurf at a $3 billion valuation. So OpenAI is once again focusing on products.
Now, Emad Mostaque, the previous CEO of Stability AI, the company that produced Stable Diffusion, actually made a medical model that outperforms ChatGPT and works on any laptop. They essentially said that they're building a full AI-first stack for health that will enable universal health knowledge. Now, I think

### Laptop Medical Model [19:59]

this is really impressive, because you have to understand that if you can get a model to truly perform well on these benchmarks, and it works well on a laptop, it's going to completely change the game. And I think access to really good healthcare, and that kind of knowledge, is going to be really important for pushing society forward as things continue to progress. So, the model they actually built is called Medical 8B, and it's so compact: it's literally just got 8 billion parameters. And as I said before, since it's so small, it's efficient enough to run on your own hardware, which means no cloud costs, no privacy worries, just pure doctor-level insight in your pocket. Now, this was trained on over half a million samples, cleaned, filtered, clustered, and optimized for one thing: trustworthy medical reasoning. And we're not talking about some vague AI assistant guessing answers. We're talking about real step-by-step logic trained on curated, decontaminated datasets. Now, it isn't cleared for clinical use just yet, but it's already beating some of the models out there on benchmark suites like HealthBench, MedQA, PubMedQA, and more. So it's pretty crazy, because I think in the future, healthcare, at least decent healthcare, is going to be pretty easy to access for most of us. And I think it's going to be really good, because a lot of the time we go to the doctor and want to ask a million questions, and now with language models that's something we can easily have. I mean, if you're in the UK, I'm just going to say this: it does take quite a long time to get a doctor's appointment. I'm not sure about other countries, but the public healthcare system in this country isn't the best when it comes to absolute speed. Of course, it is free, but it most certainly could be more efficient. Now, additionally, there was some information regarding GPT-5.
Now, this one was kind of interesting, because remember how I said Claude 4, or Anthropic's new models like Claude Opus, are essentially going to be using both reasoning and the static version of the model? OpenAI researcher Michelle Pokrass has actually said that the challenge in building GPT-5 is finding the right balance between reasoning and conversation. Like, o3 thinks really hard, but that isn't ideal for a casual chat. And GPT-4.1 improved coding by sacrificing some chitchat-level quality.
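One way to picture that balancing act is a router that decides, per message, how much reasoning effort to spend. To be clear, this is not how OpenAI does it; the keyword list and model names below are purely hypothetical, just a sketch of the trade-off being described.

```python
def pick_model(message: str) -> str:
    """Toy router: send short conversational turns to a fast chat model and
    anything that looks like a hard problem to a slower reasoning model.
    The cues and model names are illustrative, not OpenAI's actual system."""
    reasoning_cues = ("prove", "debug", "optimize", "step by step",
                      "why does", "implement", "derive")
    text = message.lower()
    # Long or problem-shaped messages get routed to the reasoning model.
    if len(text.split()) > 60 or any(cue in text for cue in reasoning_cues):
        return "reasoning-model"   # think longer before answering
    return "chat-model"            # reply immediately

print(pick_model("hi"))                                # chat-model
print(pick_model("prove that sqrt(2) is irrational"))  # reasoning-model
```

A real system would presumably learn this decision rather than hard-code keywords, but the trade-off is the same one Pokrass describes: you don't want the model thinking for five minutes when you say hi.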

### GPT-5 Balancing Act [22:09]

So the goal is basically to figure out a model that can do all of that. You know, o3 has a very different skill set. It can think through problems really hard. You don't really want the model to think for five minutes when you say hi. And so I think the real challenge facing us, on post-training and research more broadly, is combining these capabilities: training the model to be a really delightful chitchat partner, but also to know when to reason. And this kind of plays into 4.1 a bit. I mentioned that we downweighted some of the chat data and upweighted coding to make coding better. So there are some zero-sum decisions in that sense, where you have to figure out what exactly you're tailoring the model for. So that's the real challenge in GPT-5: how do we strike the right balance? Now, something that was also rather intriguing: remember how we spoke earlier in the video about the fact that coding agents are going to be automating a lot of work? Google chief scientist Jeff Dean spoke about the fact that we will have AI systems operating at the level of junior engineers within a year. How far do you

### AI Junior Engineers [23:14]

believe we are from having an AI operating 24/7 at the level of a junior engineer? Not that far. Yeah. Is that six weeks or six years, or is every year in AI like a dog year, seven years or something? I will claim that's probably possible in the next year-ish. And I don't think people really grasp just how quick one year is. Like, the last year, I don't know about you guys, but it just flew by. So one year in AI time, I mean, in AI time technically it's a lot, but on the worldwide global scale, one year is not a lot of time. So that is a remarkably fast pace of technological improvement that most fields just really can't fathom. Now, something else I found to be super crazy was this game called Multiverse. Basically, AI-generated games are now multiplayer, which honestly I don't even know how that makes sense, but it's pretty crazy because you can literally play AI-generated games. I mean, AI-generated games are still in their infancy, and yet now we can literally have multiplayer.

### AI Multiplayer Games [24:25]

So it's pretty crazy, because most models today are built for single-player stuff: one input, one action, one output. But real-world tasks like driving, sports, and teamwork need shared experiences. If player A bumps into player B, both players should see that happen from their own views, and the AI has to make it look real for both sides. So Multiverse solves this with a clever trick. It combines both player views into one big image, processes them together, and then predicts what happens next for both players. That keeps everything consistent, like both players seeing the same crash from different angles. They basically used gameplay footage from Gran Turismo 4 and built a full training set of races. They reverse engineered the game to extract steering, brake, and throttle inputs by watching the HUD. They didn't even have to sit and play it themselves: they used the game's built-in bot mode, called B-Spec, to generate tons of gameplay automatically. So the AI-generated area of game development is one that is still in its infancy, but we're seeing that now we can have AI-generated games, and some of them have AI-generated multiplayer. I really do wonder what the future of gaming looks like when this is more mainstream and the quality is really there. Like, imagine you could put in a single prompt and begin playing an entirely new video game. It's crazy. Now, there was also something rather interesting. I know this isn't the most captivating image, but basically Microsoft just dropped a new evaluation method called ADeLe, and it's honestly a game-changer, because most AI benchmarks just check if the model got the right answer, like a yes or no. But this evaluation method breaks tasks into 18 different ability types, things like attention, memory, logic, science, and knowledge, even how common the task is online.
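One way to picture the per-ability idea is below. This is a toy sketch only: the axis names and numbers are made up, and ADeLe's real rubric uses 18 annotated scales and a trained predictor rather than a simple threshold check.

```python
# Toy "ability profile": score a model on a few ability axes, describe each
# task by the ability levels it demands, and predict success when the model
# meets every demand. Illustrative only; not ADeLe's actual methodology.
model_profile = {"reasoning": 4, "memory": 2, "knowledge": 5}

tasks = {
    "multi-step logic puzzle": {"reasoning": 5, "memory": 3},
    "trivia lookup":           {"knowledge": 3},
    "short summary":           {"reasoning": 2, "memory": 2},
}

def predict_success(profile, demands):
    # Pass only if the model's level meets or exceeds every demanded ability.
    return all(profile.get(ability, 0) >= level
               for ability, level in demands.items())

for task, demands in tasks.items():
    verdict = "pass" if predict_success(model_profile, demands) else "fail"
    print(f"{task}: {verdict}")
```

The point is the same as the video's: the profile tells you *why* a model fails (here, the puzzle fails on the reasoning and memory axes), not just *that* it fails.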
And then it builds a kind of ability profile for each model, like a skill

### AI Skill Profiles [26:09]

chart. That means instead of just saying GPT-4 did well, we can actually say why it did well. Maybe it has strong reasoning but weak memory. And this actually lets researchers predict whether a model will fail on something long before they run the test. This system was tested on over 16,000 samples across 63 tasks, and it was 88% accurate at predicting performance for models like GPT-4o and Llama 3.1. And what's crazy is that it also showed that many of today's AI tests are flawed, like the civil service exam benchmark; apparently, some of those just test metacognition and niche knowledge. So overall, I think this is a really good step for the AI industry, because we're going to get a lot more information and much more detailed benchmarks that can really push the models forward. Now, we also got Elon Musk stating that we're headed for a completely different future. It isn't the first time he's said this, but considering the recent robotics developments, it's also not something to

### Musk’s Future Vision [27:02]

ignore. I think we're headed to a radically different world. I think a good world, an interesting world. My prediction for humanoid robots is that ultimately there will be tens of billions. I think everyone will want to have their personal robot. You can think of it as though you had your own personal C-3PO or R2-D2, but even better. Who wouldn't want to have their own personal C-3PO or R2-D2? That would be pretty great. And I also think it unlocks an immense amount of economic potential, because when you think of what the output of an economy is, it is productivity per capita times the population. Once you have humanoid robots, the actual economic output potential is tremendous; it's really unlimited. Potentially we could have an economy 10 times the size of the current global economy, where no one wants for anything. You know, sometimes in AI they talk about universal basic income. I think it's actually going to be universal high income, where anyone can have any goods or services that they want. There obviously are some risks, which illustrate that if we don't do this right, you could have a James Cameron sort of movie, you know, Terminator. We don't want that one. But having a Star Trek future would be great, where we're out there exploring the stars, discovering the nature of the universe, with a level of prosperity and hopefully happiness that we can't quite imagine yet. And we also got Jensen

### Industry Reinvention Begins [29:09]

Huang basically saying the same thing as Elon Musk: that we are headed towards entire industries being reinvented. We observed, almost a decade and a half ago, reasoning from first principles of computer science, that deep learning and the algorithms associated with it had a real chance of scaling into an approach that could solve many different types of problems. In the last 10 years, we've advanced computation and computational scale by nearly a million times. Moore's law would have given a hundred times, and yet we were able to scale computation to extraordinary levels, leading to some incredible breakthroughs in artificial intelligence. We have not only reinvented computing, we have now reinvented what computers can do. And as a result of that, every single industry is affected. This is now going to be the single largest technology breakthrough that the world has ever known. Every industry will be affected; every single industry will be impacted and revolutionized. Starting from an idea just some 30 years ago, sticking with it, reinventing computing, and now reinventing every industry. And so it is with great humility that I'm accepting this award in the name of Thomas Edison. It's really terrific, yeah, thank you, it's terrific to be recognized for something that you believe in and love doing. But mostly it's terrific to be recognized with an award that is at the core and at the heart of all of the employees that I represent at NVIDIA. So, thank you very much.
