# NVIDIA NemoCLAW!! - GTC 2026

## Metadata

- **Channel:** Sam Witteveen
- **YouTube:** https://www.youtube.com/watch?v=NY2uwmX3uGc

## Contents

### [0:00](https://www.youtube.com/watch?v=NY2uwmX3uGc) Segment 1 (00:00 - 05:00)

Okay, so yesterday was the Nvidia GTC 2026 keynote, and Jensen took to the stage to announce a whole bunch of new things. They had new hardware. They even talked about taking the latest Vera Rubin modules and putting them into space. Yet none of those things was the biggest announcement of the day. The biggest, by far, was that Nvidia is joining the OpenClaw game, and Jensen seemed genuinely excited to point out that OpenClaw is an open-source project that has gone from zero to the fastest-growing open-source project in history in a matter of weeks. You can see the chart they showed comparing it to things like Facebook's React and Linux, which have been around for years, and OpenClaw has more GitHub stars than both of them in such a short time. Now, we know that every IT and enterprise team wants to have something like OpenClaw in their organization. Recently on the VentureBeat podcast, we hosted Harrison Chase, who talked about exactly that: how all these teams were racing to see who could be the first to put things like OpenClaw into production safely. And as Jensen was right to point out, every IT team wants this, but almost none of them can safely deploy it. And this is where Nvidia just announced their answer to that problem: what they're calling Nemo Claw, because Nvidia loves to call everything Nemo. I still don't really understand what Nemo is versus half their other names, but hey, Nemo Claw sounds really cool. What this basically is is Nvidia's reference architecture for OpenClaw. Jensen pointed out that you can install it with just a few commands in the terminal, and you're not only getting everything that's built into OpenClaw, you're also getting a lot more that Nvidia brings to the table.
So, for example, we know that traditional OpenClaw has things like file structure, memory, and obviously access to LLMs and tools, but where it's always had problems has been around security, and security plus an ecosystem seem to be what Nvidia's really focused on here. On Sunday I attended a pre-briefing about all of this, and one of the things that struck me as quite interesting was how they were talking about claws. That kind of makes sense now that we've seen that not only is OpenClaw out there, we've got at least 50 different variations floating around on GitHub as people try to make their own versions of this. And in many ways you can build your own version with something like Claude Code, with something like LangChain or LangGraph, or even with things like the Agent Development Kit from Google. The thing that made OpenClaw so different, though, was its innovative structure of just throwing things to the wind and letting a system have a lot of tools, which can be very dangerous but can also make you really productive. So clearly Nvidia has realized that in this age of claws, these things go way beyond being purely assistants. They write code. They can browse the web. They call APIs. They can chain actions over hours without human input. They have things like cron jobs, which Jensen even mentioned on stage. Now, the upside of all this is massive. But unfortunately, so is the attack surface. I mentioned earlier that Harrison Chase joined us on the podcast. Even he pointed out that he wouldn't let his own staff run these on the company's computers. And I think this is where Nemo Claw actually comes in. First off, you shouldn't think of Nemo Claw as a competitor to OpenClaw. It's basically an enterprise wrapper around OpenClaw and other claw-style agents. And there are a couple of really powerful things that Nvidia brings to the ecosystem here.
So the first one that they're pitching is the Neotron models. These allow you to run the model locally, so data doesn't have to leave your infrastructure. And you can see, if we come in and actually look at PinchBench, which is a really interesting benchmark that measures how well different models work with OpenClaw: if we click into open-weight-only models, we can see that Neotron 3 Super is actually at the top of that list, beating out models like Kimi 2.5, like GLM-5, like some of the Qwen models, and even the MiniMax models. So it makes sense, if you think about it, that Nvidia wants to sell compute. They want to get people using their GPUs if people will run these models locally for their own particular claws. And they did announce that these will work on things like the DGX Spark and other local workstations right out of the box. Even the containerized versions are going to work well in the cloud, allowing people to run self-contained mini claws with the LLM

### [5:00](https://www.youtube.com/watch?v=NY2uwmX3uGc&t=300s) Segment 2 (05:00 - 09:00)

attached. Now, the second component that Nvidia's brought along here is OpenShell, and you can think of this as an open-source security runtime. So think something like Docker, but with YAML-based policy controls for your agents. This allows companies to lock down a lot more of what databases a claw can access, what network connections it can make, what cloud service calls it can make, and so on. And once the policy is designed, anything outside of it is just automatically blocked. So this is going to allow organizations to lock down their sensitive data, and combined with local models, it means they don't even need to send that data outside of their own environment. Now, it does seem to me that Nvidia has been working on this idea and understood this problem perhaps long before OpenClaw had even showed up. One of the things that Nemo Claw is actually built on is their agent toolkit, and we can see from some of their partners that they're already rolling this kind of thing out. So it's very interesting to see the company Box talking about using this for client onboarding, having separate sub-agents that do things like invoice extraction, contract management, and RFP sourcing. It's also interesting to see that they're talking about agent permissions that mirror employee permissions: just as you lock down certain users in your organization, they're doing the exact same thing with their agents. Now, there's clearly a hardware play going on here. Claws are always on; they need dedicated compute that doesn't compete with other workloads. And already Nemo Claw targets RTX PCs, RTX Pro workstations, DGX Spark, and even the new DGX Station. And you're probably going to need one of these big workstations to be able to run the best open models that they're pitching.
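The deny-by-default idea described above can be sketched in a few lines. To be clear, this is purely illustrative: the policy schema, resource names, and function below are my own assumptions, not OpenShell's actual format, but it shows the pattern where any agent action not explicitly listed in the policy is blocked:

```python
# Hypothetical sketch of a deny-by-default agent policy check.
# Schema and names are illustrative assumptions, not OpenShell's real API.

POLICY = {
    "databases": {"orders_db"},          # databases the agent may query
    "network_hosts": {"api.internal"},   # hosts the agent may reach
    "cloud_calls": {"s3:GetObject"},     # cloud API calls it may make
}

def is_allowed(category: str, resource: str) -> bool:
    """Anything not explicitly listed in the policy is blocked."""
    return resource in POLICY.get(category, set())

# Each agent action gets checked against the policy before it runs.
print(is_allowed("databases", "orders_db"))         # allowed: listed
print(is_allowed("network_hosts", "evil.example"))  # blocked: not listed
```

The point of structuring it this way is that the safe state is the default: a new tool, database, or endpoint the agent discovers simply fails until someone deliberately adds it to the policy.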
Along with Neotron Super, they also mentioned that they've just finished pre-training Neotron Ultra, which my guess is we'll see coming out over the next few months, and which will probably be heavily post-trained specifically for the kinds of tasks that OpenClaw and the various claws like Nemo Claw are aimed at. So you've got to bet that this is not just a software story Jensen is telling here; it's a clear way for them to sell a lot more hardware. If you want to find out more, you can look at the Nvidia docs for this. I do think it's really interesting to look at their design principles for how they're building security and access into this kind of thing. And OpenShell seems to be totally open, so you can just run it in Docker Desktop, have it create a sandbox for you, and then set a network policy for which APIs you're going to allow and which you're going to block outright. So I do think this is a really good contribution to the way open claws are going to work. We've got to think about how these things are going to play out, and how it's probably not in our interest to just have OpenAI running open claws for everyone, or Anthropic, or Google, or any of the hyperscalers; you want the ability to have your very own customized version of this. Okay, just quickly, the other big thing that was announced was the Nvidia Groq 3 LPU chips. This just shows how quickly Nvidia has been moving to incorporate the Groq IP that they acquired at the end of last year, and I think for a lot of us who ran open models on Groq, obviously the cool thing there has been the speed. So that being combined into the new generation of Nvidia supercomputers being rolled out probably means we're going to see a lot faster tokens from some of the providers six months to a year from now.
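To make the "allow these APIs, block those" idea concrete, here's a minimal sketch of how a sandbox might evaluate a network policy before letting an agent's request leave the box. The host patterns and function names are assumptions for illustration, not anything from the Nvidia docs; it just shows explicit blocks winning over allows, with everything unlisted denied:

```python
from fnmatch import fnmatch

# Hypothetical egress policy for an agent sandbox.
# Patterns and naming are illustrative, not OpenShell's actual config.
ALLOWED_HOSTS = ["api.github.com", "*.internal.corp"]
BLOCKED_HOSTS = ["*.ads.example"]  # explicit blocks win over allows

def egress_allowed(host: str) -> bool:
    """Deny if blocked; otherwise allow only if a pattern matches."""
    if any(fnmatch(host, pat) for pat in BLOCKED_HOSTS):
        return False
    return any(fnmatch(host, pat) for pat in ALLOWED_HOSTS)

print(egress_allowed("api.github.com"))       # allowed: exact match
print(egress_allowed("db.internal.corp"))     # allowed: wildcard match
print(egress_allowed("tracker.ads.example"))  # blocked explicitly
```

In practice an outbound proxy or firewall in the sandbox would apply a check like this to every connection the claw attempts, so unlisted hosts fail by default.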
So, just to finish up, the biggest takeaway I would say from the GTC keynote is the legitimizing of OpenClaw and OpenClaw going enterprise. And this dovetails really nicely with a lot of the things we were talking about with Harrison Chase on the podcast: OpenClaw being this moment where everybody realizes that yes, it's dangerous, and yes, you probably shouldn't give it access to your private data, but at the same time, it's opened a lot of people's eyes to what can actually be done with agents that can learn a variety of different skills and operate 24/7, using those skills to do tasks for you. So, if you caught the keynote, let me know what your thoughts are in the comments. Do you think Nvidia getting in on OpenClaw is a really good thing for it, or is the real innovation going to come from outside companies like this? Anyway, as always, if you found the video useful, please click like and subscribe, and I will talk to you in the next video.

---
*Source: https://ekstraktznaniy.ru/video/22372*