# Google’s New AI: Flying Through Virtual Worlds! 🕊️

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=N-Pf9lCFi4E
- **Date:** 29.05.2022
- **Duration:** 5:31
- **Views:** 124,879
- **Source:** https://ekstraktznaniy.ru/video/13554

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 
❤️ Their mentioned post is available here (thank you Soumik!): http://wandb.me/Mip-NeRF

📝 The paper "Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields" is available here:
https://jonbarron.info/mipnerf360/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil 

## Transcript

### Intro []

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to take a collection of photos like these and, magically, create a video where we can fly through them. So, how is this even possible? Especially since the input is only a handful of photos. Well, typically, we give them to a learning algorithm and ask it to synthesize a photorealistic video where we fly through the scene as we please. Of course, that sounds impossible. Some information about the scene is given, but it really is not much. And yet, as you can see, this is not impossible at all: through the power of learning-based techniques, this previous AI is already capable of pulling off this amazing trick. And today, I am going to show you that through this incredible new paper, with a little expertise, something like this can be done even at home, on our own machines.

### Unbounded scenes [1:11]

Why? Because now, research scientists at Google and Harvard have also published their take on this problem. And they promise two fantastic improvements. Improvement number one: unbounded scenes. No more front-facing scene with a stationary camera. They say that now, we can rotate the camera around the object, and their technique will still work. That is a huge deal…if it indeed works.

### Real world [1:42]

We will see. You know what, hold on to your papers, and let's see together right now. Wow! This does not look like an AI-made video out of a bunch of photos. This looks like reality! My goodness. And, of course, you are experienced Fellow Scholars over there, so I bet you are immediately interested in the fidelity of the geometry, which truly is a sight to behold. I am a light transport researcher by trade, so I am additionally looking at the specular highlights. This is as close to reality as it gets. And the depth maps it produces, which describe the distance of these objects from the camera, are also silky smooth. Outstanding.
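As a side note, the depth maps mentioned here are easy to picture: each pixel stores the distance from the camera to the nearest surface, and visualizing one just means normalizing those distances into a grayscale image. A minimal sketch, assuming a toy NumPy array of distances (this is not the paper's code):

```python
# Minimal illustration of a depth map: each pixel holds the distance (in
# arbitrary units, e.g. meters) from the camera to the nearest surface.
# Visualizing it means mapping those distances to [0, 1] (near=0, far=1).
import numpy as np

def normalize_depth(depth: np.ndarray) -> np.ndarray:
    """Map raw per-pixel distances to [0, 1] for display."""
    near, far = depth.min(), depth.max()
    return (depth - near) / max(far - near, 1e-8)

# Toy 2x2 depth map (hypothetical values, purely for illustration).
depth = np.array([[1.0, 2.0],
                  [3.0, 5.0]])
print(normalize_depth(depth))
```

A "silky smooth" depth map is one where these values vary continuously across a surface instead of jumping noisily between neighboring pixels.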

### Promises [2:34]

So, is this a one-off result, or is this a robust technique that works on a variety of scenes? Well, I bet you know the answer by now. But wait, we talked about two promises. Promise number one was the unbounded scenes with the moving camera. What is promise number two? Well, promise number two is free anti-aliasing. Ooh boy! This is a technique from computer graphics that helps us overcome the jagged edges that are usually present in lower-resolution images. And it really works so well; check out this comparison against a previous work by the name mip-NeRF.

### Comparison [3:13]

The new method is just so much better; it truly seems to be in a league of its own. We get smoother lines. And pretty much every material and every piece of the geometry comes out better. And note that this previous method is not some ancient technique. No-no! mip-NeRF is a technique from just a year ago. Bravo Google and bravo Harvard! And remember, all this comes from just a quick camera scan, and the rest of the details are learned and filled in by an AI. And this technique now truly has a remarkable understanding of the world around us.
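To see what anti-aliasing fights, here is a minimal sketch of one classic approach, supersampling: render at a higher resolution, then average each block of subpixels down to one output pixel, turning a jagged edge into a soft gradient. (Mip-NeRF 360 achieves anti-aliasing differently, by reasoning about cone-shaped rays rather than supersampling; this toy example only illustrates the jagged-edge problem itself.)

```python
# Supersampling anti-aliasing (SSAA) in miniature: render a grayscale
# image at a higher resolution, then average factor x factor blocks of
# subpixels into single output pixels. Hard edges become soft gradients.
import numpy as np

def downsample(image: np.ndarray, factor: int) -> np.ndarray:
    """Average each factor x factor block of a grayscale image into one pixel."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A hard black/white vertical edge rendered at 4x resolution...
hi_res = np.zeros((4, 8))
hi_res[:, 5:] = 1.0           # right side white, edge between columns 4 and 5
lo_res = downsample(hi_res, 4)
print(lo_res)                 # the pixel containing the edge becomes gray
```

The output pixel straddling the edge ends up partially gray instead of snapping to pure black or white, which is exactly the smoothing effect visible in the comparison.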

### Conclusion [3:59]

So, just using a commodity camera, walking around a scene, and creating a digital video game version of it? Absolutely incredible. Sign me up, right now! So, what do you think? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts.

### Outro [4:29]

Thanks for watching and for your generous support, and I'll see you next time!
