This AI Gave Elon Musk A Majestic Beard! 🧔
7:25


Two Minute Papers · 05.01.2021 · 133,122 views · 9,462 likes


Video description
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️
Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Debug-Compare-Reproduce-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly
📝 The paper "StyleFlow: Attribute-conditioned Exploration of StyleGAN-generated Images using Conditional Continuous Normalizing Flows" is available here.
⚠️ The source code is now also available! https://rameenabdal.github.io/StyleFlow/
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers
Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
#elonmusk #styleflow

Table of contents (4 segments)

<Untitled Chapter 1>

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Here you see people that don’t exist. They don’t exist because these images were created with a neural network-based learning method by the name StyleGAN2, which can not only create eye-poppingly detailed images, but can also fuse these people together, or generate cars, churches, horses, and of course, cats. This is quite convincing, so is there anything else to do in human face generation? Are we done? Well, this footage from a new paper may give some of it away. If you have been watching this series for a while, you know that, of course, researchers always find a way to make these techniques better. We always say, two more papers down the line and it will be way better. And now, we are one more paper down the line, so let’s see together if there has been any improvement.

This new technique is based on StyleGAN2 and is called StyleFlow, and it can take an input photo of a test subject and edit a number of meaningful parameters. Age, expression, lighting, pose, you name it. Now note that there were other techniques that could pull this off, but the key improvements here are that, one, we can perform many sequential changes, and two, it does all this while remaining faithful to the original photo. And hold on to your papers, because three, it can also help Elon Musk to grow a majestic beard. Believe it or not, you will also see a run of this algorithm on me as well at the end of the video. First, let’s adjust the lighting a little, and now, it’s time for that beard. Loving it. Now, a little aging, and one more beard. It seems to me that this beard is different from the young man’s beard, which is nice attention to detail. And note that we have strayed very far from the input image, but if you look at the intermediate states, you see that the essence of the test subject is indeed quite similar. This Elon is still Elon.
Note that these results are not publicly available and were made specifically for us, so you can only see this here on Two Minute Papers. That is quite an honor, so thank you very much to Rameen Abdal, the first author of the paper, for being so kind. Now, another key improvement in this work is that we can change one of these parameters with little effect on anything else. Have a look at this workflow and see how well we can perform these sequential edits. These are the source images, and the labels showcase the one-variable change for each subsequent image, and you can see how amazingly surgical the changes are. Witchcraft. If it says that we change the facial hair, that’s the only change I see in the output. And just think about the fact that these starting images are real photos, but the outputs of the algorithm are made-up people that don’t exist. And notice that the background is also mostly left untouched. Why does that matter? You will see in the next example. So far, so good, but this method can do way more than this.
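The editing idea behind StyleFlow can be sketched in miniature: map the latent code to a base code with an invertible, attribute-conditioned transform, then map back with the new attribute values. Below is a toy hand-written affine flow illustrating that principle; the `flow_params` mapping and the three-attribute setup are hypothetical, and the paper's actual method is a learned conditional continuous normalizing flow (a neural ODE), not this fixed affine map.

```python
# Toy sketch of attribute-conditioned invertible editing, loosely in the
# spirit of StyleFlow: encode a latent w into a base code z under the OLD
# attributes, then decode z under the NEW attributes. A hand-written
# affine flow stands in for the paper's learned continuous normalizing flow.

def flow_params(attrs):
    """Derive per-dimension scale and shift from attribute values.
    (Hypothetical fixed mapping; a real flow learns this with a network.)"""
    scale = [1.0 + 0.1 * a for a in attrs]
    shift = [0.5 * a for a in attrs]
    return scale, shift

def forward(w, attrs):
    """w -> z: invert the attribute-conditioned affine map."""
    scale, shift = flow_params(attrs)
    return [(wi - b) / s for wi, s, b in zip(w, scale, shift)]

def inverse(z, attrs):
    """z -> w: apply the attribute-conditioned affine map."""
    scale, shift = flow_params(attrs)
    return [zi * s + b for zi, s, b in zip(z, scale, shift)]

def edit(w, old_attrs, new_attrs):
    """Re-encode w under old attributes, decode under new ones."""
    return inverse(forward(w, old_attrs), new_attrs)

w = [0.2, -1.3, 0.7]
attrs = [0.0, 1.0, 0.0]   # e.g. (age, beard, pose) -- hypothetical labels
# Invertibility: editing with unchanged attributes is a no-op.
roundtrip = edit(w, attrs, attrs)
assert all(abs(a - b) < 1e-9 for a, b in zip(roundtrip, w))
# Changing one attribute moves the latent in a controlled way.
bearded = edit(w, attrs, [0.0, 1.0, 1.0])
```

Invertibility is what makes the sequential, surgical edits possible: nothing about the identity is thrown away in the round trip, only the conditioning changes.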

Edit: Color

Now let’s edit, not people, but cars. I love how well the color and pose variations work. Now, if you look here, you see that there is quite a bit of collateral damage, as not

Edit: Rotation

only the cars, but the background is also changing, opening up the door for a potentially more surgical follow-up paper. Make sure to subscribe and hit the bell icon to get notified when we cover this one more paper down the line.

Edit: SUV Conversion

And now, here is what you have been waiting for: we’ll get hands-on experience with this technique, where I shall be the subject of the next experiment. First, the algorithm requires high-resolution frontal images, so for instance, this would not work at all, we would have to look for a different image. No matter, here’s another one where I got locked up for reading too many papers. This could be used as an input for the algorithm, and, look. This image looks a little different. Why is that? It is because StyleGAN2 runs an embedding operation on the photo before starting its work. This is an interesting detail that we only see if we start using the technique ourselves. And now, come on, give me that beard. Ho-ho-hoo! Loving it! What do you think, whose beard is better, Elon’s or mine? Let me know in the comments below. And now, please meet old man Károly, he will tell you that papers were way better back in his day, and the usual transformations. But also note that as a limitation, we had quite a bit of collateral damage to the background.

This was quite an experience. Thank you so much. And we are not done yet, because this paper just keeps on giving. It can also perform attribute transfer. What this means is that we have two input photos, this will be the source image, and we can choose a bunch of parameters that we would like to extract from it. For instance, the lighting and pose can be extracted through attribute transfer, and it seems to even estimate the age of the target subject and change the source to match it better. Loving it. The source code for this technique is also available, and make sure to have a look at the paper, it is very thoroughly evaluated, the authors went the extra mile there and it really shows. And I hope that you now agree: even if there is a technique that appears quite mature, researchers always find a way to further improve it. And who knew, it took a competent AI to get me to grow a beard. Totally worth it.
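The "embedding operation" mentioned above is commonly called GAN inversion: search for a latent code whose generated image best matches the input photo. The toy sketch below shows the principle with a hypothetical 2×2 linear "generator" and plain squared-error gradient descent; the real pipeline optimizes in StyleGAN2's latent space with perceptual losses, but the same logic explains why the embedded photo "looks a little different", since the reconstruction is close but rarely exact.

```python
# Minimal sketch of GAN inversion ("embedding"): find a latent w such
# that G(w) approximates the target image x. Toy setup: G is a fixed
# 2x2 linear map and the loss is plain squared error.

A = [[2.0, 0.5],    # hypothetical "generator" weights
     [-1.0, 1.5]]
x = [1.0, 2.0]      # hypothetical target "image"

def G(w):
    """Toy generator: a fixed linear map from latent to image."""
    return [sum(A[i][j] * w[j] for j in range(2)) for i in range(2)]

def loss(w):
    """Squared reconstruction error between G(w) and the target x."""
    return sum((gi - xi) ** 2 for gi, xi in zip(G(w), x))

def grad(w):
    # d/dw ||A w - x||^2 = 2 A^T (A w - x)
    r = [gi - xi for gi, xi in zip(G(w), x)]
    return [2 * sum(A[i][j] * r[i] for i in range(2)) for j in range(2)]

w = [0.0, 0.0]            # start from an arbitrary latent code
for _ in range(200):      # plain gradient descent
    g = grad(w)
    w = [wi - 0.05 * gi for wi, gi in zip(w, g)]

# After optimization, G(w) is close to x -- but in a real GAN the match
# is imperfect, which is why the embedded photo looks slightly different.
```

Once the photo lives in latent space as `w`, all the attribute edits from earlier operate on `w` rather than on pixels.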
What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!
