# NVIDIA’s New AI: Nature Videos Will Never Be The Same!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=r2zv3sNsnqo
- **Date:** 01.02.2023
- **Duration:** 7:20
- **Views:** 144,926

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

📝 The paper "Disentangling Random and Cyclic Effects in Time-Lapse Sequences" is available here:
https://arxiv.org/abs/2207.01413
https://github.com/harskish/tlgan

My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or this is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Edward Unthank, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

#nvidia

## Contents

### [0:00](https://www.youtube.com/watch?v=r2zv3sNsnqo) Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to use NVIDIA's new AI to create absolutely magical time-lapse videos that were otherwise nearly impossible to make. Until now.

Watching a video of a landscape in real time is a smooth experience, but not that interesting, as seemingly nothing is happening. Now, speeding it up creates a time-lapse video, and this is finally interesting. However, then this happens. Oh yes, we get too much flickering. This is not for human consumption, that's for sure. If only an AI-based algorithm existed that could somehow take the essence of the changes in this video, but make it smooth as well.

Now, of course, that sounds flat-out impossible. Why? Well, in these time-lapse videos, we experience drastic weather changes within a day, photos for some days are missing, and even worse, occlusions might happen. Now, let's have a look at a previous method trying to do that. Hmm. Something is happening here: it is able to keep the weather conditions stable, so we get no more flickering. However, do you see it? Growing is clearly happening, but the details have been washed out so much that the most interesting part of the video is now gone.

I am not that hopeful that this is possible to solve well, but just in case, let's have a look at NVIDIA's new AI and see what it is made of. This is the input, and, wow! The new method does an absolutely outstanding job at keeping the weather conditions stable, while in the meantime also showing us sharp footage of this amazing growth.

Now let's compare it to the previous method. And, my goodness, there is no contest here. So much better. Loving it!

Let's have a look at another example of this valley, and then I will show you that this technique also has some more superpowers. As expected, the input video has way too much going on, so now let's ask the new method to boil it down to its essence. That is absolutely incredible. Look: smooth and detailed at the same time.

And yes, I promised some superpowers, so what are those? This technique can not only stabilize the weather conditions and create smooth and detailed videos, but, one, depending on our artistic goals, it even gives us the tools to control the time of day. And if we wish to think bigger, we can also control the time of year.

But all that is nothing compared to my favorite, so hold on to your papers. If there are structural changes in a landscape or a city, for instance, at some point this building has been repainted, we can now decide to forget about it and keep it as if it never even happened, or we can even decide to use its new appearance for the entirety of the year, even before it was painted. Wow!

And we can also account for growth. This growth can take place as the video progresses, or it can be fixed in time. This is so good, I love it.

And now, time for some interesting tidbits. This technique is based on the legendary StyleGAN system, which is able to synthesize these photorealistic humans and other things in a somewhat controllable manner.

Also, I bet the authors had a few conversations asking each other "how on Earth do we put all this video information into one stationary image in the paper?" This concept is a little harder to convey there, so I am so happy to be able to show you these videos instead.

And what I also loved is this. Look. Are you seeing what I am seeing? My goodness. Some of these datasets have not only a few days missing here and there; there are often series of tens of days missing, or even worse, and the new AI can still create a smooth video of the changes as if nothing happened. It synthesizes what happened in between. And,
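The core idea described above, separating the cyclic effects (time of day, season) from the random ones (weather), can be illustrated with a toy sketch. This is not the actual tlgan code or API; every function and variable name here is a hypothetical stand-in. The point it demonstrates: if the "weather" latent code is frozen while only the cyclic time coordinates move, consecutive frames change smoothly, which is exactly why the output no longer flickers.

```python
import numpy as np

def cyclic_embedding(t, period):
    """Embed a time value on a circle, so day 364 sits next to day 0."""
    angle = 2 * np.pi * (t % period) / period
    return np.array([np.sin(angle), np.cos(angle)])

def toy_generator(cyclic_code, random_code):
    """Hypothetical stand-in for a StyleGAN-like generator:
    any smooth deterministic map from latent codes to 'pixels'."""
    return np.tanh(np.outer(cyclic_code, random_code)).ravel()

# Freeze the random (weather-like) code -> stable conditions in every frame.
random_code = np.full(4, 0.5)

# Sweep only the cyclic inputs: one frame per week at noon, over a year.
frames = [
    toy_generator(
        np.concatenate([cyclic_embedding(day, 365),   # time of year
                        cyclic_embedding(12, 24)]),   # fixed time of day (noon)
        random_code,
    )
    for day in range(0, 365, 7)
]

# Consecutive frames differ only slightly, because only the cyclic code moves.
diffs = [np.abs(a - b).max() for a, b in zip(frames, frames[1:])]
```

Resampling a new `random_code` per frame would reintroduce the flicker of the raw footage; conversely, shifting the time-of-day embedding while everything else stays fixed corresponds to the "control the time of day" superpower mentioned above.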

### [5:00](https://www.youtube.com/watch?v=r2zv3sNsnqo&t=300s) Segment 2 (05:00 - 07:00)

there is something that is even worse. Look. The reality of taking these photos is that the location of the camera can change over the year, making this barn appear to be bending upwards. The new AI can fix that too. Not a problem at all. It is absolutely incredible that humans are able to build AI-based algorithms that are this capable today. What a time to be alive!

Now, not even this technique is perfect. You see here that, for instance, the chairs are still moving around. But have a look at how much progress has been made just one more paper down the line. My mind is officially blown.

And, good news! The source code and pre-trained models are also available, so anyone who has the resources and expertise can start using it right now. And given NVIDIA's excellent track record in tech transfer, this might make its way into one of their products soon.

So, what do you think? What would you use this for? Let me know in the comments below!

Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/13302*