# Light Fields - Videos From The Future! 📸

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=9XM5-CJzrU0
- **Date:** 12.01.2021
- **Duration:** 5:12
- **Views:** 136,565

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

📝 The paper "Immersive Light Field Video with a Layered Mesh Representation" is available here:
https://augmentedperception.github.io/deepviewvideo/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

#lightfields

## Contents

### [0:00](https://www.youtube.com/watch?v=9XM5-CJzrU0) Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Whenever we take a photo, we capture a piece of reality from one viewpoint. Or, if we have multiple cameras on our smartphone, a few viewpoints at most. In an earlier episode, we explored how to upgrade these to 3D photos, where we could kind of look behind the person. I am saying kind of, because what we see here is not reality; this is statistical data that is filled in by an algorithm to match its surroundings, which we refer to as image inpainting. So strictly speaking, it is likely information, but not necessarily true information, and also, we can recognize the synthetic parts of the image as they are significantly blurrier. So the question naturally arises in the mind of the curious Scholar: how about actually looking behind the person? Is that somehow possible, or is that still science fiction?

Well, hold on to your papers, because this technique shows us the images of the future by sticking a bunch of cameras onto a spherical shell, and when we capture a video, it will see…something like this. The goal is to untangle this mess, and we're not done yet: we also need to reconstruct the geometry of the scene as if the video was captured from many different viewpoints at the same time. Absolutely amazing. And yes, this means that we can change our viewpoint while the video is running.

Since it is doing the reconstruction in layers, we know how far each object is in these scenes, enabling us to rotate these sparks and flames and look at them in 3D. Yum.

Now, I am a light transport researcher by trade, so I hope you can tell that I am very happy about these beautiful volumetric effects, but I would also love to know how it deals with reflective surfaces. Let's see together…look at the reflections in the sand here, and I'll add a lot of camera movement, and wow…this thing works.
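The layered idea mentioned above can be sketched as back-to-front alpha compositing of RGBA depth layers. This is a simplified stand-in, not the paper's method (the paper uses a layered mesh representation; this flat multiplane-style stack, and all names and values in it, are illustrative assumptions):

```python
import numpy as np

def composite_layers(layers):
    """Blend RGBA depth layers back-to-front with the "over" operator.

    `layers` is a list of (H, W, 4) float arrays, ordered from the
    farthest layer to the nearest, with straight (non-premultiplied) alpha.
    """
    h, w, _ = layers[0].shape
    rgb = np.zeros((h, w, 3))
    for layer in layers:  # far to near: each layer covers what is behind it
        color, alpha = layer[..., :3], layer[..., 3:4]
        rgb = alpha * color + (1.0 - alpha) * rgb
    return rgb

# Two 1x1 "layers": an opaque red background and a half-transparent blue front.
back = np.array([[[1.0, 0.0, 0.0, 1.0]]])
front = np.array([[[0.0, 0.0, 1.0, 0.5]]])
out = composite_layers([back, front])
# out[0, 0] → [0.5, 0.0, 0.5]
```

Because each layer carries its own depth, moving the virtual camera shifts the layers by different amounts (parallax), which is what lets the viewer look slightly behind foreground objects.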
It really works, and it does not break a sweat even if we try a more reflective surface…or an even more reflective surface! This is as reflective as it gets, I'm afraid, and we still get a consistent and crisp image in the mirror. Bravo!

Alright, let's get a little more greedy: what about seeing through thin fences? That is quite a challenge. And…look at the tail wags there. This is still a touch blurrier here and there, but overall, very impressive.

So what do we do with a video like this? Well, we can use our mouse to look around within the photo in our web browser; you can try this yourself right now by clicking on the paper in the video description. Make sure to follow the instructions if you do. Or we can make the viewing experience even more immersive with a head-mounted display, where, of course, the image will follow wherever we turn our head. Both of these truly feel like entering a photograph and getting a feel of the room therein. Loving it.

Now, since there is a lot of information in these Light Field Videos, it also takes a powerful internet connection to relay them. Even when using H.265, a powerful video compression standard, we are talking on the order of hundreds of megabits. It is like streaming several videos in 4K resolution at the same time. Compression helps; however, we also have to make sure that we don't compress too much, so that compression artifacts don't eat the content behind thin geometry, or at least, not too much. I bet this will be an interesting topic for a follow-up paper, so make sure to subscribe and hit the bell icon to not miss it when it appears. And for now, more practical light field photos and videos will be available that allow us to almost feel like we are really in the room with the subjects of the videos. What a time to be alive!

### [5:00](https://www.youtube.com/watch?v=9XM5-CJzrU0&t=300s) Segment 2 (05:00 - 05:12)

Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/14002*