# NVIDIA’s New AI: Growing Worlds From Nothing!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=WFG-VGZIwUo
- **Date:** 16.06.2025
- **Duration:** 6:13
- **Views:** 52,655

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambda.ai/papers

Guide for using DeepSeek on Lambda:
https://docs.lambdalabs.com/education/large-language-models/deepseek-r1-ollama/?utm_source=two-minute-papers&utm_campaign=relevant-videos&utm_medium=video

📝 The papers are available here:
https://research.nvidia.com/labs/toronto-ai/stochastic-preconditioning/
https://zju3dv.github.io/freetimegs/

Play with it (interactive viewer): https://www.4dv.ai/viewer/salmon_10s?showdemo=4dv

📝 My paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Benji Rabhan, B Shang, Christian Ahlin, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Michael Tedder, Owen Skarpness, Richard Sundvall, Steef, Sven Pfiffner, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

My research: https://cg.tuwien.ac.at/~zsolnai/
X/Twitter: https://twitter.com/twominutepapers
Thumbnail design: Felícia Zsolnai-Fehér - http://felicia.hu

#nvidia

## Contents

### [0:00](https://www.youtube.com/watch?v=WFG-VGZIwUo) Segment 1 (00:00 - 05:00)

Imagine taking just a few photos and having a computer generate a perfect, explorable 3D world. Fantastic for video games and for training self-driving cars. That's the incredible promise of neural fields.

But... that promise often hits a snag. The training process frequently gets stuck in bad spots, leaving us with blurry results, lumpy surfaces, or weird 'floating' artifacts in the scene. Not quite the digital worlds we hoped for.

Now, what if a surprisingly simple tweak during training could cut through that? This work introduces a clever, almost elegant way to help these powerful models avoid those pitfalls, leading to significantly sharper reconstructions and fewer pesky floaters.

And in a moment, we'll look at another technique, one that brings motion to these scenes, so they're not just still images anymore, but living, moving worlds we can step into. This one is something you can play with right now, and I'll show you how!

First, the noise paper. Let's see how they do it. Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér.

Umm… are you kidding me? You just add some noise during training, let it fade out over time, and that's it? So, kind of like adding fog over a beautiful landscape, and then, over time, making it disappear would somehow make it better? Would adding chaos lead to order later? Well, I will believe that when I see it.

Let's have a look by growing an armadillo out of nothing. A previous technique starts out well, but unfortunately, we get extra floating artifacts. A neck pillow, and more. Now, the new method starts out quite jumpy, hmm… I am not sure about that, and… oh my, look at that. It stabilizes quickly, and then we get our armadillo, but without the problem parts. Loving this.

Same when growing a bunny. So far so good. But it gets better!

Now let's try to create real geometry from a 3D point cloud. Oh yes.
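The trick as described, jittering the network's query points with noise that fades out over training, can be sketched in a few lines. This is a minimal illustration under my own assumptions (a linear decay schedule and a made-up initial scale), not the authors' implementation:

```python
import numpy as np

def noise_scale(step, total_steps, initial_sigma=0.05):
    """Linearly fade the noise standard deviation to zero over training.
    The schedule and initial_sigma are illustrative assumptions."""
    return initial_sigma * max(0.0, 1.0 - step / total_steps)

def perturb_queries(points, step, total_steps, rng):
    """Jitter the 3D query points fed to a neural field with zero-mean
    Gaussian noise. Late in training the jitter vanishes, and the field
    is queried at the true positions again."""
    sigma = noise_scale(step, total_steps)
    if sigma == 0.0:
        return points
    return points + rng.normal(0.0, sigma, size=points.shape)
```

The intuition: early on, the noise smooths out the optimization landscape so training does not settle into the bad local minima that produce floaters; as the fog lifts, the field sharpens onto the true geometry.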
The Sibenik castle looks alright even with a previous method, until… oh goodness. The authors refer to these as "disastrous artifacts", and I think the name is apt. Okay, but can their method create a better reconstruction?

Oh yes, yes it can. Finally, the flat parts of the geometry are truly flat, and look! The disaster has been averted. What a relief!

And we will have a look at that other work too in a moment, the one about movement in these virtual worlds. So these scenes appear not just as still images; we can really inhabit them.

Now, when using neural radiance fields to build 3D scenes, the neural network training process gets stuck, and we get these really annoying floating artifacts. This is not usable. So, does training with a bit more noise help this problem?

I can't believe it. These results are not perfect, but they are so much cleaner. This is a huge step forward.

And I love how it can grow a better chair and hot dog than previous methods. And the best part? This trick works on practically any type of neural field you throw it at. Seriously, it's nearly as simple as just adding some noise during training.

And now, this one is from a different research group. Okay, so getting clean static scenes is fantastic, but what about motion? Real life moves, sometimes wildly!

This one goes a step further: it renders scenes in motion using Gaussian Splats. It teaches the tiny little Gaussian blobs that build up the scene to dance to their own little animation scripts.

The result? Complex motions that were previously hard to handle, like people walking and adorable fluffballs wagging their tails, suddenly run in real time and in higher quality. And yes, they made an interactive viewer for it. You'll notice that the dog might look a bit like a collection of little lumps or brush strokes, but the way it moves through the scene? Absolutely beautiful.
And hold on to your papers, Fellow Scholars, and look at that. More than 450 frames per second. Goodness! Yes, it can do all this up to 7 times faster than previous techniques, with equivalent, or even better, quality. The reason for that is that most previous methods twist the whole scene to simulate motion. This one? It just lets each blob move on its own. Imagine bending a whole puppet just to move one arm. That's what

### [5:00](https://www.youtube.com/watch?v=WFG-VGZIwUo&t=300s) Segment 2 (05:00 - 06:00)

older methods do. This one says let's just move the arm instead. Nothing else. And the quality remains equivalent or even better. That is absolutely incredible. What a time to be alive!

Make sure to play with its interactive viewer in the video description.

So, real-time virtual worlds: not just for film studios, but for all of us. Imagine filming your dog, and within minutes, taking it for a walk in 3D in a virtual wonderland. And yes, that future is getting closer super fast. Loving this.
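The "move the arm, not the whole puppet" idea amounts to each Gaussian blob carrying its own tiny motion model instead of the scene being warped by one global deformation field. A hypothetical sketch, with a simple linear trajectory standing in for whatever parameterization the paper actually uses:

```python
import numpy as np

class MovingGaussian:
    """One blob with its own little animation script: its center moves
    along an independent trajectory, here simply center0 + velocity * t.
    This is an illustrative stand-in, not the paper's actual model."""

    def __init__(self, center, velocity):
        self.center0 = np.asarray(center, dtype=float)
        self.velocity = np.asarray(velocity, dtype=float)

    def center_at(self, t):
        """Evaluate this blob's position at time t."""
        return self.center0 + self.velocity * t
```

Because each blob is evaluated independently, only the blobs that actually move need nonzero motion parameters; the static parts of the scene cost nothing extra, which is one plausible reason such a scheme renders fast.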

---
*Source: https://ekstraktznaniy.ru/video/12306*