# NVIDIA’s New AI Grows Objects Out Of Nothing! 🤖

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=5j8I7V6blqM
- **Date:** May 18, 2022
- **Duration:** 6:33
- **Views:** 384,504
- **Source:** https://ekstraktznaniy.ru/video/13563

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 
❤️ Their mentioned post is available here (thank you Soumik!): http://wandb.me/3d-inverse-rendering

📝 The paper "Extracting Triangular 3D Models, Materials, and Lighting From Images" is available here:
https://research.nvidia.com/publication/2021-11_Extracting-Triangular-3D
https://nvlabs.github.io/nvdiffrec/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, 

## Transcript

### Intro [0:00]

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how NVIDIA’s new AI transfers real objects into a virtual world. So, what is going on here?

### What is it [0:14]

Simple: in goes just one image, or a set of images of an object. And the result is that an AI really transfers this real object into a virtual world almost immediately. Now, that sounds like science fiction. How is that even possible? Well, with this earlier work, it was possible to take a target geometry from somewhere and obtain a digital version of it by growing it out of nothing. This work reconstructed the geometry really well. But geometry only. This other work tried to reconstruct not just the geometry, but everything, for instance, the material models too. Now, incredible as this work is, these are still baby steps in this area. As you see, both the geometry and the materials are still quite coarse.
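The general idea behind this kind of reconstruction is inverse rendering: guess the scene parameters, render them, compare the render to the photographs, and nudge the parameters to shrink the difference. Here is a heavily simplified, hypothetical toy sketch of that loop (a single "pixel" whose brightness is albedo times light intensity, fitted with hand-written gradient descent); this is only an illustration of the principle, not NVIDIA's actual nvdiffrec pipeline:

```python
# Toy inverse rendering: recover an unknown material albedo from one
# observed pixel value. Illustrative sketch only, not the paper's method.

LIGHT = 2.0  # assumed known light intensity in this toy forward model

def render(albedo):
    """Forward model: observed brightness = albedo * light intensity."""
    return albedo * LIGHT

def reconstruct(target, lr=0.05, steps=200):
    """Gradient descent on the squared error between render and photo."""
    albedo = 0.1  # initial guess for the unknown material parameter
    for _ in range(steps):
        pred = render(albedo)
        # Analytic gradient of (pred - target)^2 with respect to albedo:
        grad = 2.0 * (pred - target) * LIGHT
        albedo -= lr * grad
    return albedo

observed = render(0.7)           # a "photograph" of ground-truth albedo 0.7
estimate = reconstruct(observed)
print(round(estimate, 3))        # converges to about 0.7
```

The real system does the same thing at scale: the "parameters" are a full triangle mesh, material textures, and an environment light, and the gradients flow through a differentiable renderer instead of a one-line formula.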

### Can it be improved [1:17]

So, is that it then? Is the dream of transferring real objects into virtual worlds dead? It seems so. Why? Because we either have to throw out the materials to get a high-quality result, or, if we wish to get everything, we have to be okay with a coarse result. But I wonder, can this be improved somehow? Well, let's find out together. And here it is! NVIDIA's new work tries to take the best of both worlds. What does that mean? Well, they promise to reconstruct absolutely everything: geometry, materials, and even the lighting setup, and all of this with high fidelity. Well, that sounds absolutely amazing, but I will believe it when I see it!

### Let's see it [2:10]

Let’s see together. Well, that’s not quite what we are looking for, is it? Yes, this isn’t great, but this is just the start. Now, hold on to your papers and marvel at how the AI improves this result over time. Oh yes, this is getting better. And, my goodness! After as little as two minutes, we already have a usable model. That is so cool! I love it. We go on a quick bathroom break, and the AI does all the hard work for us. Absolutely amazing. And it gets even better! Well, if we are okay with not a quick bathroom break but taking a nap, we get this just an hour later. And, if that is at all possible, it gets even better than that! How is that possible?

### How is it possible [3:10]

Well, imagine that we have a bunch of photos of a historical artifact, and, you know what’s coming! Of course, creating a virtual version of it and dropping it into a physics simulation engine, where we can edit its material or embed it into a cloth simulation. How cool is that? And, I can’t believe it! It still doesn’t stop there. We can even change the lighting around it and see what it would look like in all its glory. That is absolutely beautiful. Loving it. And, if we have a hot dog somewhere, and we have already created a virtual version of it, what do we do with it now? Of course, we engage in the favorite pastime of the computer graphics researcher. That is, throwing jelly boxes at it. And, with this new technique, you can do that too. And, even better, we can take an already existing solid object and reimagine it as if it were made of jelly. No problem at all. And, you know what, it is final boss time. Let’s not just reconstruct an object. Why not throw an entire scene at the AI and see if it buckles? Can it deal with that? Let’s see.

### Final thoughts [4:40]

And… I cannot believe what I am seeing here. It resembles the original reference scene so well, even when animated, that it is almost impossible to find any differences. Have you found any? I have to say I doubt that, because I have swapped the labels. Oh yes, this one is not the reconstruction; this is the real scene. This will be an absolutely incredible tool for democratizing the creation of virtual worlds and putting it into everyone's hands. Bravo, NVIDIA. So, what do you think? Does this get your mind going? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I’d love to hear your thoughts. Thanks for watching and for your generous support, and I'll see you next time!
