Neural Portrait Relighting is Here!

Two Minute Papers · 15.02.2020 · 75,898 views · 3,157 likes


Video description
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers Their blog post and example project are available here: - https://www.wandb.com/articles/exploring-gradients - https://colab.research.google.com/drive/1bsoWY8g0DkxAzVEXRigrdqRZlq44QwmQ 📝 The paper "Deep Single Image Portrait Relighting" is available here: https://zhhoper.github.io/dpr.html ☀️ Our "Separable Subsurface Scattering" paper with source code is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/

Table of contents (1 segment)

Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In computer graphics, when we talk about portrait relighting, we mean a technique that can look at an image and change the lighting, and maybe even the materials or geometry, after the image has been taken. This is a very challenging endeavor. So can neural networks put a dent into this problem and give us something new and better? You bet!

The examples that you see here are done with this new work, which uses a learning-based technique and is able to change the lighting for human portraits from only one input image. You see, normally, relighting these images with classical computer graphics methods would require estimating the geometry of the face, the materials, and the lighting from the image; then we could change the lighting or other parameters, run a light simulation program, and hope that the estimates are good enough to make the result realistic.

However, if we wish to use neural networks to learn the concept of portrait relighting, we of course need quite a bit of training data. Since this is not trivially available, the paper contributes a new dataset with over 25 thousand portrait images, each relit in 5 different ways. It also proposes a neural network structure that can learn this relighting operation efficiently. The network is shaped a bit like an hourglass and contains an encoder part and a decoder part. The encoder takes an image as input and estimates what lighting could have been used to produce it, while the decoder is where we can play around with changing the lighting, and it will generate the appropriate image that this kind of lighting would produce.

However, there is more to it. What you see here are skip connections, which are useful for saving insights from different abstraction levels and transferring them from the encoder to the decoder network. So what does this mean exactly?
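To make the hourglass idea concrete, here is a minimal toy sketch in NumPy, not the authors' actual network: a real implementation uses learned convolutions and a spherical-harmonics lighting representation, while this stand-in just pools an image down to a bottleneck, tweaks a stand-in "lighting" value there, and decodes back up, re-adding the stored encoder features as skip connections. The function names and the `new_lighting_gain` parameter are illustrative assumptions, not from the paper.

```python
import numpy as np

def downsample(x):
    # Halve the spatial resolution by 2x2 average pooling.
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Double the spatial resolution by nearest-neighbour repetition.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_hourglass(image, new_lighting_gain=1.2, use_skips=True):
    """Toy hourglass: encode to a bottleneck, change the 'lighting' there,
    decode back, optionally mixing encoder features in (skip connections)."""
    skips = []
    x = image
    for _ in range(3):            # encoder: progressively coarser features
        skips.append(x)
        x = downsample(x)
    x = x * new_lighting_gain     # stand-in for editing the lighting code
    for s in reversed(skips):     # decoder mirrors the encoder
        x = upsample(x)
        if use_skips:
            # Skip connection: fine detail recovered from the encoder side.
            x = 0.5 * x + 0.5 * s
    return x

img = np.arange(64.0).reshape(8, 8)
out = toy_hourglass(img)
print(out.shape)  # (8, 8): same resolution as the input
```

Running it with `use_skips=False` shows the point of the ablation discussed next: without the skip connections, the decoder only sees the blurred bottleneck, so all fine detail is lost.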
Intuitively, it is a bit like using the lighting estimator network to teach the image generator what it has learned. So, do we really lose a lot if we skip the skip connections? Well, quite a bit; have a look here. The image on the left shows the result using all skip connections, while as we traverse to the right, we see the results of omitting them. These connections indeed make a profound difference.

Let's be thankful to the authors of the paper, as putting together such a dataset, and working out what network architecture is required to get great results like these, takes quite a bit of work. However, I'd like to make a note about modeling subsurface light transport. This is a piece of footage from our earlier paper, written in collaboration with Activision Blizzard, and you can see here that including this effect indeed makes a profound difference in the looks of a human face. I cannot wait to see some follow-up papers that take more advanced effects like this into consideration for relighting as well. If you wish to find out more about this work, make sure to click the link in the video description.

This episode has been supported by Weights & Biases. Here you see a write-up of theirs where they explain how to visualize the gradients running through your models, illustrated through the example of predicting protein structure. They also have a live example that you can try! Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money, and it is being used by OpenAI, Toyota Research, Stanford, and Berkeley. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you.

Thanks for watching and for your generous support, and I'll see you next time!
