# NVIDIA’s New AI: Impossible Weather Graphics!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=ZE0JZYgiaGc
- **Date:** 26.05.2025
- **Duration:** 8:16
- **Views:** 67,606
- **Source:** https://ekstraktznaniy.ru/video/12357

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambda.ai/papers

Guide for using DeepSeek on Lambda:
https://docs.lambdalabs.com/education/large-language-models/deepseek-r1-ollama/?utm_source=two-minute-papers&utm_campaign=relevant-videos&utm_medium=video

📝 The papers are available here:
https://research.nvidia.com/labs/toronto-ai/WeatherWeaver/
https://research.nvidia.com/labs/toronto-ai/DiffusionRenderer/

Source: https://www.youtube.com/watch?v=CVdtLieI5D0

📝 My paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Benji Rabhan, B Shang, Christian Ahlin, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Michael Tedder, Owen Skarpness, Richard Sundvall, Steef, Taras Bobrovytsky,

## Transcript

### Segment 1 (00:00 - 05:00)

What you are seeing here is basically impossible to do, yet it just happened. So how, and what is going on here? Well, this is a brand new AI technique: in goes your video, and it changes the weather effects in it. No 3D modeling required, no physics simulation required. Also, no camera calibration required. Crazy. And then you shall hear about the legend of the apple that went into another dimension, how to solve this problem, and more.

And I said impossible because look, this is what happens when you try previous techniques to do that. In goes this video, then the AI adds a little something to it. Oh my, I can't see anything. And it gets worse. It is not even realistic. This NEV2V technique was published less than a year ago; it is not some ancient technique. Okay, now a different work. Ouch. They clearly don't understand how to do this properly. It seemed absolutely impossible until this new work. Look at that. Wow. This is crazy good. And not only fog: rain is also not a problem. Look at that beauty. It makes any place look like London. Kidding. Snow synthesis is also incredibly amazing. Look at how cozy this is. Loving it.

Okay, but wait a minute. You can take any technique, choose the one scene it does really well on, and advertise only that. What we Fellow Scholars want to know is whether it is actually a practical technique. Does it work on a variety of scenes? And you bet your papers it does. Look at that. A huge variety of scenes, and all of them are absolute beauties now. And just think about training self-driving cars: they could experience countless weather scenarios safely in a simulation thanks to this.

So this was weather synthesis. Now get this: what about weather desynthesis? Yes, what about removing weather effects? Now, that is an entirely different beast. I think that is truly impossible. Why? Partly because it is kind of like asking an AI to undo milk from coffee. How is that even possible?
You see, if you wish to remove the fog, none of the previous techniques are really capable of doing that. No wonder. I mean, adding rain or snow to already existing footage? Okay, you need to change some pixels around. Tough, but possible. But when you remove the fog, new things appear. Oh my, you would have to synthesize and fill in new information. What could be in the background? Well, you have to put something there that makes sense. And for that, you need to do more than just draw a few pixels. No, no. For that, you need to understand the world. So, can the new technique do it? Now hold on to your papers, Fellow Scholars. And oh my goodness, it did it. And not only did it do it, but when you try rain removal, previous techniques are either not really doing anything, or the only one that does something in return also changes the whole scene. We did not have colorful cars in the input footage at all. And the new one, once again, is incredibly amazing. Same with snow.

But now check this out, because it gets even better. Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Yes, no one said that fog should be turned off like a light switch. Sometimes you want a little of it. Not a problem. You get a little slider that you can play with. In the case of fog, you can change the fog density. And for snow, oh, I love this one. In the case of snow, you can play with the amount of coverage that you wish to see there. And as a bonus, puddle coverage. Huh. Now, let's stop for a second there. What puddles? Don't forget, puddles make the roads more specular, which means that they are reflective. But to become reflective, they have to reflect something. And once again, this is something that cannot just come from thin air. This is something that the AI needs to synthesize. You see, I am a light transport researcher by trade, and I tried to resist the urge to mention specular reflections, and of course, I failed again. So, this is how the amazing weather transition video from the intro was made.
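The fog density slider in the paper is a learned control, but the intuition behind it maps nicely onto the classic exponential fog model from graphics. Here is a minimal sketch of that classical model (not the paper's neural method; the function name and fog color are my own illustration), showing what a single density parameter controls:

```python
import numpy as np

def apply_fog(image, depth, density, fog_color=(0.8, 0.8, 0.85)):
    """Blend a fog color into an image using exponential attenuation.

    image:   H x W x 3 float array in [0, 1]
    depth:   H x W float array of per-pixel distances
    density: the 'slider' value; 0 means no fog at all
    """
    # Beer-Lambert style falloff: farther pixels keep less of the original color.
    transmittance = np.exp(-density * depth)[..., None]
    fog = np.asarray(fog_color, dtype=float)
    return image * transmittance + fog * (1.0 - transmittance)
```

Sliding `density` from 0 upward smoothly fades the scene toward the fog color, with distant pixels disappearing first, which is exactly the kind of partial-fog control the slider exposes.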
And here is the craziest thing about this work. It kind of teaches itself. Originally, it performed only weather removal. They used it to create pairs of the same

### Segment 2 (05:00 - 08:00)

scenes, one with rain and one without. Then they used this data to train a weather synthesis AI model. So in a way, it kind of teaches itself. I think self-supervised bootstrapping would be a good term for this behavior. Absolutely mind-blowing.

Now just imagine combining the weather synthesis with this amazing technique. Yep, this is the legend of the apple that disappeared into another dimension. Okay, what is going on here? The problem here is that we have an input image. And what would we like to extract from it? Everything. Absolutely everything. Geometry information, depth information, material properties, everything. And then use that to rerender this image in a different way, perhaps with different lighting. Little alternative realities. So cool. However, not so fast: when a previous technique looks at this apple and we ask it to reconstruct the scene and rotate it, oh, I am seeing bad news already. Look, it underestimated the shadow in the scene. And when we rotate it, oh goodness, that's like a sandwich in a shady shop. From the outside, it looks amazing. When you look into it, you get a black hole. Nothing.

Now, let's see the new technique. Oh my goodness. It pulled it off. Proper inverse rendering right there. And once again, not just on one scene, but on more lifelike environments, too. But the point is that if this is a proper AI inverse renderer, then it can do so much more. For instance, you can not only relight the scenes or look at them from a new viewpoint. No, get this: you can also edit the materials, and the results are still photorealistic. Like editing reality itself. Crazy. And you can do this too. Wait a second. I hear you asking, Károly, what happened here? Well, not all of the objects in this scene are real. This one was inserted by this new technique. Same here. You can maybe kind of tell if you look closely. But one more paper down the line from here? Not a chance. Another impossible problem solved with a new research paper.
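The self-supervised bootstrapping loop described above can be sketched in a few lines. This is only an illustration of the idea, not NVIDIA's actual training code; every name here is hypothetical, and the models are stand-in callables:

```python
def bootstrap_pairs(rainy_videos, removal_model):
    """Use a weather-removal model to manufacture (clear, rainy)
    training pairs from real rainy footage -- no manual labels needed."""
    pairs = []
    for rainy in rainy_videos:
        clear = removal_model(rainy)   # hypothetical: strips the rain
        pairs.append((clear, rainy))   # input -> target for the synthesis model
    return pairs

def train_synthesis_step(pair, synthesis_model, loss_fn):
    """One supervised step: learn to map clear footage back to rainy."""
    clear, rainy = pair
    predicted = synthesis_model(clear)
    return loss_fn(predicted, rainy)   # a gradient update would follow here
```

The key trick is the direction reversal: removal runs on cheap, unlabeled real footage, and its outputs become ground truth for the harder synthesis task.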
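As for why recovering geometry, normals, and materials makes relighting possible: once you have those per-pixel quantities, even the simplest classical shading model can re-render the scene under a new light. A sketch with plain Lambertian shading (again, not the paper's diffusion-based renderer; this is just the textbook model, with names of my own choosing):

```python
import numpy as np

def relight_lambertian(albedo, normals, light_dir):
    """Re-render a scene under a new light from an inverse-rendering
    style decomposition: per-pixel albedo (H x W x 3 reflectance)
    and unit surface normals (H x W x 3)."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # Lambert's cosine law: brightness scales with max(n . l, 0).
    shading = np.clip(normals @ l, 0.0, None)[..., None]
    return albedo * shading
```

Changing `light_dir` re-renders the same decomposition under a different light, which is exactly the kind of edit a proper inverse renderer unlocks, and material edits amount to changing `albedo` before re-rendering.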
And while we look at these beautiful results: this is Two Minute Papers. Elsewhere you get McDonald's; here you get the papers. Proper research papers from a scientist. Hope you are enjoying it as much as I am. Here you see me running the full DeepSeek AI model through Lambda's GPU Cloud. 671 billion parameters running super fast and super reliably. This is insane. I love it, and I use it on a regular basis. Lambda provides you with powerful NVIDIA GPUs to run your own chatbots and experiments. Seriously, try it out now at lambda.ai/papers or click the link in the description.
