# Photos Go In, Reality Comes Out…And Fast! 🌁

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=yptwRRpPEBM
- **Date:** 11.01.2022
- **Duration:** 4:56
- **Views:** 108,205

## Description

❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers

📝 The paper "Plenoxels: Radiance Fields without Neural Networks" is available here:
https://alexyu.net/plenoxels/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

#plenoxels

## Contents

### [0:00](https://www.youtube.com/watch?v=yptwRRpPEBM) Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to take a collection of photos like these, and magically, create a video where we can fly through these photos. And it gets better, we will be able to do it quickly, and, get this, no AI is required. So, how is this even possible? Especially since the input is only a handful of photos. Well, typically, we give it to a learning algorithm and ask it to synthesize a photorealistic video where we fly through the scene as we please. Of course, that sounds impossible. Especially since the information given about the scene is really not much. And as you see, this is not impossible at all: through the power of learning-based techniques, this previous AI is already capable of pulling off this amazing trick. And today, I am going to show you, through this incredible new paper, that something like this can even be done at home, on our own machines.

Now, the previously showcased technique and its predecessors are built on gathering training data and training a neural network to pull this off. Here you see the training process of one of them compared to the reference results. Well, it looks like we need to be really patient, as this process is quite lengthy, and for the majority of the time, we don't get any usable results until nearly a day into this process. Now, hold on to your papers, because here comes the twist. What is the twist? Well, these are not reference results. No-no. These are the results from the new technique. Yes, you heard it right. It doesn't require a neural network, and thus trains so quickly that it almost immediately looks like the final result, while the original technique is still unable to produce anything usable. That is absolutely insane.

Okay, so it's quick. Real quick. But how good are the results? Well, the previous technique was able to produce this after approximately 1.5 days of training. And, what about the new technique? All it needs is 8.8… 8.8 what? Days? Hours? No-no. 8.8 minutes. And the result looks like this. Not only as good, but even a bit better than what the previous method could do in 1.5 days. Whoa.

So, I mentioned that the results are typically even a bit better, which is quite remarkable. Let's take a closer look. This is the previous technique after more than a day, and this is the new method after 18 minutes. Now, it says that the new technique is 0.3 decibels better, and that does not sound like much, does it? Well, note that the decibel scale is not linear, it is logarithmic. What does this mean? It means this! Look. A small difference in the numbers can mean a big difference in quality. And, it is really close to the real results. And all this after 20 minutes of processing? Bravo!

And, it does not stop there. The technique is also quite robust. It works well on forward-facing scenes, 360-degree rotations, and we can even use it to disassemble a scene into its foreground and background elements. Note that the previous NeRF technique we compared to was published just about a year and a half ago. Such incredible improvement in so little time. And here comes the kicker: all this is possible today with a handcrafted technique, no AI is required. What a time to be alive!

Thanks for watching and for your generous support, and I'll see you next time!
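
The paper's central idea, as the narration notes, is that the neural network can be dropped entirely: Plenoxels stores a radiance field directly in a voxel grid and fits the grid values to the input photos by gradient descent. Below is a minimal, hedged sketch of only the forward pass, standard emission-absorption volume rendering from a dense grid with nearest-neighbor lookups. The grid resolution, the random grid values, and the function names (`lookup`, `render_ray`) are illustrative assumptions; the actual method uses a sparse grid, trilinear interpolation, and spherical-harmonics colors.

```python
import numpy as np

# A minimal sketch (not the authors' code): render one ray through a dense voxel
# grid that stores a density value and an RGB color per cell. Plenoxels instead
# uses a sparse grid, trilinear interpolation, and spherical-harmonics colors,
# and optimizes these grid values directly against the training photos.

rng = np.random.default_rng(0)
RES = 32
density = rng.uniform(0.0, 5.0, size=(RES, RES, RES))       # per-voxel density (sigma)
color = rng.uniform(0.0, 1.0, size=(RES, RES, RES, 3))      # per-voxel RGB

def lookup(p):
    """Nearest-neighbor lookup of (sigma, rgb) at a point p inside the unit cube."""
    i, j, k = (np.clip(p, 0.0, 0.999) * RES).astype(int)
    return density[i, j, k], color[i, j, k]

def render_ray(origin, direction, n_samples=64, t_near=0.0, t_far=1.5):
    """Emission-absorption volume rendering along one ray (the same quadrature NeRF uses)."""
    ts = np.linspace(t_near, t_far, n_samples)
    dt = ts[1] - ts[0]
    rgb_out = np.zeros(3)
    transmittance = 1.0                          # how much light still reaches the camera
    for t in ts:
        p = origin + t * direction
        if np.any(p < 0.0) or np.any(p >= 1.0):
            continue                             # sample fell outside the grid
        sigma, rgb = lookup(p)
        alpha = 1.0 - np.exp(-sigma * dt)        # opacity contributed by this sample
        rgb_out += transmittance * alpha * rgb   # composite front to back
        transmittance *= 1.0 - alpha
    return rgb_out

# One ray shot straight through the volume; a full render repeats this per pixel.
print(render_ray(np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0])))
```

In the full pipeline, each rendered pixel would be compared against the corresponding photo pixel and the per-voxel values updated directly, which is why optimization finishes in minutes rather than the days a neural network needs.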

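The narration's point about the logarithmic decibel scale refers to PSNR (peak signal-to-noise ratio), the standard image-quality metric in this line of work. A quick worked example, using a hypothetical 31.0 dB baseline purely for illustration, shows why a 0.3 dB gain is more meaningful than it sounds:

```python
import math

def mse_from_psnr(psnr_db, max_val=1.0):
    """Invert PSNR = 10 * log10(max_val^2 / MSE) back to mean squared error."""
    return max_val ** 2 / (10.0 ** (psnr_db / 10.0))

# Hypothetical numbers, only to illustrate the logarithmic scale:
baseline = mse_from_psnr(31.0)   # previous method, assumed 31.0 dB
improved = mse_from_psnr(31.3)   # new method, 0.3 dB higher
print(f"error ratio: {improved / baseline:.3f}")  # ~0.933, i.e. ~7% less squared error
```

So even a small step on the decibel scale corresponds to a multiplicative reduction in reconstruction error, independent of the starting point.
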
---
*Source: https://ekstraktznaniy.ru/video/13703*