# New AI Makes Amazing DeepFakes In a Blink of an Eye!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=C9LDMzMRZv8
- **Date:** 14.01.2023
- **Duration:** 6:39
- **Views:** 274,226
- **Source:** https://ekstraktznaniy.ru/video/13330

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

📝 The paper "VToonify: Controllable High-Resolution Portrait Video Style Transfer" is available here:
https://www.mmlab-ntu.com/project/vtoonify/

Web demo:
https://huggingface.co/spaces/PKUWilliamYang/VToonify

Source code:
https://github.com/williamyang1991/VToonify

My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Edward Unthank, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen

## Transcript

### Introduction [0:00]

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to transform ourselves into cartoon characters. And it is going to be amazing. But how? Well, for instance, let's start out from style transfer. Style transfer means mixing two images together: one for content, reimagined with the other one for style. This also works for video. What's more, there are also computer graphics techniques that can update these virtual worlds in real time as we mark up these examples on a piece of paper. How cool is that!

That is all well and good, but that's nothing compared to what you are going to see today, because now, we are going to run style transfer… on ourselves!

That sounds great! But wait, that's not exactly a new idea; previous techniques have

### Cartoon [0:59]

already tried that, so our question is: have they done it well? Let's have a look. Oh my, these are not so good. And even the better ones are not nearly ready for prime time yet. However, hold on to your papers, because we now have a new technique where, again, in goes a video of us and a target style, and we get… this! Whoa. These are so much better! So cool! And if

### Caricature [1:27]

only we could try it ourselves right now… maybe that is possible. I'll tell you in a moment.

And it doesn't stop there; this paper has a ton more in the tank. For instance, this slider is incredible: by using it, we can tell the AI how much the style should influence the video. These results are going by quickly, so I'll stop the process here and there so we can have a little more time to have a look together.

I particularly liked the fact that we have a ton of control over the jawline and the eyes, and of course, if we wish, these features can get exaggerated a great deal, or we can be a little more subtle with them. And everything in between these two is also possible!

And have a look at this one too; this is one of my favorite parts of the paper. Are you seeing it? Well, look here. Our input person has long hair, but the style reference has short hair. And I love how the technique reimagines the input person's hair as well in the style of the reference. It doesn't even break a sweat. This is such an amazing usability feature.

And it supports a variety of different styles. Just choose the movie and the character of your liking, and there we go. Loving it.

And just think about the fact that a couple of papers before this one, in 2020, this was possible. Even inpainting real images of human faces was quite challenging, and today, just a couple more papers down the line, we can do so much better, with so much more artistic control, and all this for video.

So, let's pop the question: how long do we have to wait for such a result? And this is where I fell off the chair when reading the paper. Don't blink. Why? Because that's exactly how long it takes for each image. We are talking high-resolution images, and we get 5 to 10 of those every second. 5 to 10! Wow.

I would absolutely love to see what the amazing artists among you Fellow Scholars will be able to do with this.
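As a rough mental model of that style slider (this is a conceptual sketch only, not VToonify's actual implementation — `apply_style`, `content_code`, and `style_code` are made-up names), you can think of it as linearly blending a latent code for the input face toward a latent code for the target style:

```python
import numpy as np

def apply_style(content_code, style_code, degree):
    """Hypothetical style-degree slider: linearly blend a content latent
    code toward a style latent code. degree=0.0 keeps the input face
    untouched; degree=1.0 applies the full cartoon style."""
    degree = float(np.clip(degree, 0.0, 1.0))  # keep the slider in [0, 1]
    return (1.0 - degree) * content_code + degree * style_code

# Toy 4-dimensional latent codes standing in for real network latents.
content_code = np.zeros(4)
style_code = np.ones(4)

print(apply_style(content_code, style_code, 0.0))  # [0. 0. 0. 0.] — original look
print(apply_style(content_code, style_code, 0.5))  # [0.5 0.5 0.5 0.5] — halfway
print(apply_style(content_code, style_code, 1.0))  # [1. 1. 1. 1.] — fully stylized
```

At the 5 to 10 high-resolution frames per second quoted in the video, that works out to roughly 100–200 milliseconds per frame — about the duration of a blink.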
This could make the job of virtual actors for animation movies easier, and I bet it will be a super fun tool for videoconferencing with our friends and loved ones, and even for putting our copies into virtual worlds. What a time to be alive!

Now wait, all that is well and good, but when do we get to use it? Well, I have two pieces of good news for you. Good news number one: the source code of this project is available, free of charge, for everyone. And good news number two: as of the making of this video, you can also try it yourself online. The links are available in the video description, and make sure to read the instructions carefully. Also note that the web app is a bit slower than running it locally, but of course, it is so much more convenient for most people.

Now, not even this technique is perfect; as with almost all DeepFake-related techniques, teeth are usually a problem. And, oh yes, sure enough, it is a problem here too. Look. But just think about how far we have come in just a couple of papers. And imagine what we will be able

### Pixar [5:08]

to do a couple more papers down the line. My goodness! So this was a paper from the

### Arcane [5:13]

amazing SIGGRAPH Asia conference, which is one of the most prestigious venues in computer graphics research. Having a paper published there is perhaps the equivalent of an Olympic gold medal for a computer graphics researcher. Huge congratulations to the authors.

So, what do you think? What would you use this for? Let me know in the comments below!

Thanks for watching and for your generous support, and I'll see you next time!
