Everybody Can Make Deepfakes Now!

Two Minute Papers · 28.03.2020 · 1,201,627 views · 41,225 likes

Video description
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers
Their blog post is available here: https://www.wandb.com/articles/hyperparameter-tuning-as-easy-as-1-2-3

📝 The paper "First Order Motion Model for Image Animation" and its source code are available here:
- Paper: https://aliaksandrsiarohin.github.io/first-order-model-website/
- Colab notebook: https://colab.research.google.com/github/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb

Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers

Károly Zsolnai-Fehér's links:
- Instagram: https://www.instagram.com/twominutepapers/
- Twitter: https://twitter.com/karoly_zsolnai
- Web: https://cg.tuwien.ac.at/~zsolnai/

#DeepFake

Table of contents (7 segments)

<Untitled Chapter 1>

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. It is important for you to know that everybody can make deepfakes now. You can turn your head around, mouth movements are looking great, and eye movements are also translated into the target footage. And, of course, as we always say, two more papers down the line, and it will be even better and cheaper than this.

As you see, some papers are so well done, and are so clear, that they just speak for themselves. This is one of them. To use this technique, all you need to do is record a video of yourself, add just one image of the target subject, run this learning-based algorithm, and there you go. If you stay until the end of this video, you will see even more people introducing themselves as me. As noted, many important gestures are being translated, such as head, mouth and eye movement, but what's even better is that even full-body movement works. Absolutely incredible.

Now, there are plenty of techniques out there that can create DeepFakes, many of which we have talked about in this series, so what sets this one apart? Well, one, most previous algorithms required additional information, for instance, facial landmarks or a pose estimation of the target subject. This one requires no knowledge of the image. As a result, this technique becomes so much more general. We can create high quality DeepFakes with just one photo of the target subject, make ourselves dance like a professional, and what's more, hold on to your papers, because it also works on non-humanoid and cartoon models. And even that's not all: we can even synthesize an animation of a robot arm by using another one as a driving sequence.

So, why is it that it doesn't need all this additional information? Well, if we look under the hood, we see that it is a neural-network based method that generates all this information by itself! It identifies what kind of movements and transformations are taking place in our driving video.
You can see that the learned keypoints here follow the motion of the videos really well.
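The way these learned keypoints drive the target can be sketched in a few lines. Below is a toy illustration of the "relative" transfer mode, where the source keypoints are moved by the displacement observed in the driving video rather than copied outright, so the target keeps its own geometry. Names here are illustrative, and the actual method also transfers a local affine transformation around each keypoint, not just its position:

```python
import numpy as np

def transfer_motion(kp_source, kp_driving, kp_driving_initial):
    """Shift the source keypoints by the *relative* displacement observed
    between the current and first driving frames.
    All arrays have shape (num_keypoints, 2) with (x, y) coordinates."""
    return kp_source + (kp_driving - kp_driving_initial)

# Toy example: 3 keypoints; the driving face has moved 0.1 to the right.
kp_source = np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.3]])
kp_driving_initial = np.array([[0.25, 0.3], [0.55, 0.5], [0.85, 0.3]])
kp_driving = kp_driving_initial + np.array([0.1, 0.0])

kp_target = transfer_motion(kp_source, kp_driving, kp_driving_initial)
print(kp_target)  # each source keypoint shifted by (0.1, 0.0)
```

Because only relative motion is used, the target subject does not inherit the driving subject's face shape, which is one reason a single photo of the target is enough.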

Learned Keypoints

Disentangling Appearance and Motion Generation

Now, we pack up all this information and send it over to the generator to warp the target image appropriately, taking into consideration possible occlusions that may occur. This means that some parts of the image may now be uncovered, where we don't know what the background should look like. Normally, we would do this by hand with an image inpainting technique; for instance, you see the legendary PatchMatch algorithm here that does it. However, in this case, the neural network does it automatically, by itself! If you are looking for flaws in the output, these will be important regions to look at. And it not only requires less information than previous techniques, but it also outperforms them… significantly. Yes, there is still room to improve this; for instance, the sudden head rotation here seems to generate an excessive amount of visual artifacts.

Animating Robots

The source code and even an example Colab notebook are available; I think it is one of the most accessible papers in this area. Don't miss out, make sure to have a look in the video description, and try to run your own experiments!
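The occlusion handling described above, where warped source content is kept only where it remains visible and the network hallucinates (inpaints) the rest, amounts to a masked blend. A toy sketch with hypothetical names, not the paper's actual generator:

```python
import numpy as np

def compose_with_occlusion(warped, hallucinated, occlusion_mask):
    """Keep warped source pixels where the mask says they are visible,
    and fall back to network-generated (inpainted) content elsewhere.
    warped, hallucinated: (H, W) images; occlusion_mask: (H, W) in [0, 1]."""
    return occlusion_mask * warped + (1.0 - occlusion_mask) * hallucinated

warped = np.full((2, 2), 1.0)        # content copied from the source image
hallucinated = np.full((2, 2), 0.0)  # content invented by the generator
mask = np.array([[1.0, 1.0],
                 [0.0, 0.5]])        # bottom row is (partially) occluded

out = compose_with_occlusion(warped, hallucinated, mask)
print(out)  # bottom-left is fully inpainted, bottom-right is a 50/50 blend
```

This is exactly why the newly uncovered regions are the first place to look for artifacts: there, the output is pure hallucination rather than warped source content.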

Animating Humans

Let me know in the comments how they went, or feel free to drop by our Discord server, where all of you Fellow Scholars are welcome to discuss ideas and learn together in a kind and respectful environment. The link is available in the video description, and it is completely free. If you have joined, make sure to leave a short introduction!

Animating Stickers

Now, of course, beyond the many amazing use-cases of this in reviving deceased actors, creating beautiful visual art, redubbing movies, and more, unfortunately, there are people around the world who are rubbing their palms together in excitement to use this to their advantage. So, you may ask, why make these videos on DeepFakes? Why spread this knowledge, especially now, with the source code available?

Well, I think step number one is to make sure to inform the public that these DeepFakes can now be created quickly and inexpensively, and they don't require a trained scientist anymore. If this can be done, it is of utmost importance that we all know about it! Then, beyond that, step number two, as a service to the public, I attend EU and NATO conferences and inform key political and military decision makers about the existence and details of these techniques, to make sure that they also know about them and, using that knowledge, can make better decisions for us. You see me doing it here. …and again, you see this technique in action here to demonstrate that it works really well for video footage in the wild.

Note that these talks and consultations all happen free of charge, and if they keep inviting me, I'll keep showing up to help with this in the future as a service to the public. The cool thing is that later, over dinner, they tend to come back to me with a summary of their understanding of the situation, and I highly appreciate the fact that they are open to what we scientists have to say.

And now, please enjoy the promised footage. Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. It is important for you to know that everybody can make deepfakes now. You can turn your head around, mouth movements are looking great, and eye movements are also translated into the target footage. And, of course, as we always say, two more papers down the line, and it will be even better and cheaper than this. This episode has been supported by Weights & Biases.

Hyperparameter Tuning As Easy As 1-2-3

Here, they show you how you can use Sweeps, their tool to search through high-dimensional parameter spaces and find the best performing model. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And, the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today.

Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
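The core idea behind a hyperparameter sweep, trying many configurations and keeping the best one, can be sketched without any external tooling. A real Sweeps run would replace the stand-in objective below with your actual training loop, log every run to W&B, and can use smarter strategies than random search, such as Bayesian optimization. The search space and objective here are made up for illustration:

```python
import random

# A hypothetical search space: each key maps to its candidate values.
search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [32, 64, 128],
}

def train_and_evaluate(config):
    # Stand-in for a real training run; this made-up score simply
    # prefers a mid-sized learning rate and smaller batches.
    return -abs(config["learning_rate"] - 1e-3) - config["batch_size"] * 1e-6

random.seed(0)
best_config, best_score = None, float("-inf")
for _ in range(10):
    # Random search: sample one value per hyperparameter, score it,
    # and remember the best configuration seen so far.
    config = {k: random.choice(v) for k, v in search_space.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config)
```

The appeal of a managed sweep service is that it runs this loop across many machines, records every configuration and metric, and visualizes which hyperparameters actually matter.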
