# This Magical AI Makes Your Photos Move! 🤳

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=BpApq2EPDXE
- **Date:** 13.07.2021
- **Duration:** 5:59
- **Views:** 132,963
- **Source:** https://ekstraktznaniy.ru/video/13873

## Description

❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd 

📝 The paper "Endless Loops: Detecting and Animating Periodic Patterns in Still Images" and the app are available here:
https://pub.res.lightricks.com/endless-loops/

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background image credit - Pascal Wiemers: https://pixabay.

## Transcript

### Introduction [0:00]

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Look at this video of a moving escalator. Nothing too crazy going on, only the escalator is moving. And I am wondering: would it be possible to not record a video for this, just an image, and have one of these amazing new learning-based algorithms animate it?

Well, that is easier said than done. Look, this is what was possible with a research work from two years ago, but the results are… well, what you see here. So how about a method from one year ago? This is the result. A great deal of improvement, but the water is not animated in this region, and is generally all over the place, and we still have a lot of artifacts around the fence. And now, hold on to your papers, and let's see this new method… and… whoa!

### New method [0:58]

Look at that! What an improvement! Apparently, we can now give this one a still image, and for the things that should move, it makes them move. It is still not perfect by any means, but this is so much progress in just two years.

And there's more! Get this: even for the things that shouldn't move, it imagines how they would move. It works on this building really well. But it also imagines how my tie would move around. Or my beard, which is not mine, by the way, but was made by a different AI, or the windows. Thank you very much to the authors of the paper for generating these results just for us! And this can lead to really cool artistic effects, for instance, this moving brick wall, or animating the stairs here. Loving it.

So, how does this work exactly? Does it know which regions to animate? No, it doesn't, and it shouldn't. We specify that ourselves by using a brush to highlight the region that we wish to see animated, and we also have a little more artistic control over the results by prescribing a direction in which things should go.

And it appears to work on a really wide variety of images, which is one of its most appealing features. Here are some of my favorite results; I particularly love the apple rotation here. Very impressive.

Now, let's compare it to the previous method from just one year ago, and let's see what the numbers say. Well, they say that the previous one performs better on fluid elements than the new one. My experience is that it indeed works better on specialized cases, like this fire texture, but on many water images, they perform roughly equivalently. Both are doing quite well. So, is the new one really better?

Well, here comes the interesting part. When presented with a diverse set of images, look. 
There is no contest here: the previous one either creates no results or incorrect results, and when it does produce something, the new technique almost always comes out way better.

Not only that, but let's see what the execution time looks like for the new method. How long do we have to wait for these results? The one from last year took 20 seconds per image and required a big, honking graphics card, while the new one only needs your smartphone and runs in… what? Just one second. Loving it.

So, what images would you try with this one? Let me know in the comments. Well,

### Extensions [4:02]

in fact, you don't need to just think about what you would try, because you can try this yourself. It has a mobile app; the link is available in the video description. Make sure to let me know in the comments below if you had some success with it!

Here comes the even more interesting part. The previous method used a learning-based algorithm, while this one is a bona fide, almost completely handcrafted technique. This is partly because training neural networks requires a great deal of training data, and there are very few, if any, training examples for moving buildings and these other surreal phenomena. Ingenious. Huge congratulations to the authors for pulling this off!

Now, of course, not even this technique is perfect; there are still cases where it does

### Limitations [4:51]

not create appealing results. However, since it only takes a second to compute, we can easily retry with a different pixel mask or direction and see if it does better. And just imagine what we will be able to do two more papers down the line. What a time to be alive!
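The mask-and-direction workflow described in the video maps to a simple idea: translate the user-brushed pixels along the prescribed direction, and crossfade two phase-offset copies of the image so the animation loops seamlessly. The sketch below is a hypothetical toy version in Python/NumPy, not the paper's actual algorithm (the paper detects periodic patterns and computes per-pixel displacements); the function names and parameters are illustrative only.

```python
import numpy as np

def shift(img, dy, dx):
    # Integer-pixel translation with wrap-around; a toy stand-in for
    # the paper's per-pixel displacement warping.
    return np.roll(img, (int(round(dy)), int(round(dx))), axis=(0, 1))

def animate_loop(image, mask, direction, num_frames=30, speed=1.0):
    """Toy endless-loop animation (illustrative sketch, not the paper's method).

    image:     H x W x 3 float array in [0, 1]
    mask:      H x W boolean array (True = user-brushed region to animate)
    direction: (dy, dx) motion direction, as prescribed by the user
    """
    dy, dx = direction
    frames = []
    for t in range(num_frames):
        phase = t / num_frames                      # 0 -> 1 over one loop
        off_a = phase * speed * num_frames          # copy moving forward
        off_b = (phase - 1.0) * speed * num_frames  # copy one period behind
        a = shift(image, dy * off_a, dx * off_a)
        b = shift(image, dy * off_b, dx * off_b)
        # Crossfade the two copies so the last frame blends back into the first.
        moving = (1.0 - phase) * a + phase * b
        # Only the brushed region animates; everything else stays still.
        frames.append(np.where(mask[..., None], moving, image))
    return frames
```

Played on repeat, the crossfade hides the seam: at phase 0 the forward copy dominates, and by the end of the loop the trailing copy, which by then sits exactly where frame 0 started, has fully faded in. Because such a loop is cheap to recompute, retrying with a different mask or direction, as the video suggests, costs almost nothing.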

### Gradient Dissent Guest: Peter Welinder - OpenAI [5:22]

Thanks for watching and for your generous  support, and I'll see you next time!
