# This AI Captures Your Hair Geometry...From Just One Photo! 👩‍🦱

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=slJI5r9rltI
- **Date:** 27.11.2019
- **Duration:** 4:19
- **Views:** 77,339

## Description

❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers

📝 Links to the paper "Dynamic Hair Modeling from Monocular Videos using Deep Neural Networks" are available here:
http://www.cad.zju.edu.cn/home/zyy/docs/dynamic_hair.pdf
http://www.kunzhou.net/2019/dynamic-hair-capture-sa19.pdf
https://www.youyizheng.net/research.html

❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil.
https://www.patreon.com/TwoMinutePapers

Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

#GameDev

## Contents

### [0:00](https://www.youtube.com/watch?v=slJI5r9rltI) Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk about research on all kinds of physics simulations, including fluids, collision physics, and we have even ventured into hair simulations. If you look here at this beautiful footage, you may be surprised to know how many moving parts a researcher has to get right to get something like this. For instance, some of these simulations have to go down to the level of computing the physics between individual hair strands. If it is done well, like what you see here from our earlier episode, these simulations will properly show us how things should move, but that’s not all - there is also an abundance of research works out there on how they should look. And even then, we’re not done, because before that, we have to take a step back and somehow create these digital 3D models that show us the geometry of these flamboyant hairstyles.

Approximately 300 episodes ago, we talked about a technique that took a photograph as an input, and created a digital 3D model that we can use in our simulations and rendering systems. It had a really cool idea where it initially predicted a coarse result, and then, this result was matched with the hairstyles found in public data repositories, and the closest match was presented to us. Clearly, this often meant that we got something that was similar to the photograph, but often not exactly the hairstyle we were seeking.

And now, hold on to your papers because this work introduces a learning-based framework that can create a full reconstruction by itself, without external help, and now, squeeze that paper, because it works not only for images, but for videos too! Woohoo! It works for shorter hairstyles, long hair, and even takes into consideration motion and external forces as well.
The heart of the architecture behind this technique is this pair of neural networks, where the one above creates the predicted hair geometry for each frame, while the other looks backwards in the data and tries to predict the appropriate motions that should be present. Interestingly, it only needs two consecutive frames to make these predictions, and adding more information does not seem to improve its results. That is very little data. Quite remarkable. Also, note that there are a lot of moving parts here in the full paper; for instance, this motion is first predicted in 2D, and is then extrapolated to 3D afterwards.

Let’s have a look at this comparison - indeed, it seems to produce smoother and more appealing results than this older technique. But if we look here, this other method seems even better, so what about that? Well, this method had access to multiple views of the model, which is significantly more information than what this new technique has, which only needs a simple monocular 2D video from our phone, or from the internet. The fact that they are even comparable is absolutely amazing. If you have a look at the paper, you will see that it even contains a hair growing component in this architecture. And as you see, the progress in computer graphics research is also absolutely amazing. And we are even being paid for doing this. Unreal. Thanks for watching and for your generous support, and I'll see you next time!
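The two-network pipeline described above can be sketched roughly as follows. This is an illustrative mock-up, not the paper's actual architecture: the learned networks are replaced by trivial placeholder functions, and all names (`predict_geometry`, `predict_motion_2d`, `lift_motion_to_3d`) and shapes are hypothetical. The point is only the data flow: one network maps a single frame to hair geometry, the other maps two consecutive frames to a 2D motion field, which is then lifted to 3D.

```python
import numpy as np

def predict_geometry(frame):
    """Stand-in for the geometry network: one frame in, a set of
    3D hair-strand points out (here: seeded random placeholder)."""
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return rng.standard_normal((100, 3))  # 100 strand points, (x, y, z)

def predict_motion_2d(frame_prev, frame_curr):
    """Stand-in for the motion network: two consecutive frames in,
    a dense 2D motion field out (here: a simple frame difference)."""
    diff = frame_curr - frame_prev
    return np.stack([diff, diff], axis=-1)  # (H, W, 2) pseudo-flow

def lift_motion_to_3d(flow_2d):
    """The paper predicts motion in 2D first, then extrapolates to 3D;
    here we just append a zero depth component to keep shapes honest."""
    zeros = np.zeros(flow_2d.shape[:-1] + (1,))
    return np.concatenate([flow_2d, zeros], axis=-1)  # (H, W, 3)

# Two consecutive grayscale frames from a monocular video (toy data).
frame0 = np.zeros((64, 64))
frame1 = np.full((64, 64), 0.1)

geometry = predict_geometry(frame1)
motion3d = lift_motion_to_3d(predict_motion_2d(frame0, frame1))
print(geometry.shape, motion3d.shape)  # (100, 3) (64, 64, 3)
```

Note how the motion branch consumes exactly two consecutive frames, matching the observation in the video that adding more frames did not improve results.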

---
*Source: https://ekstraktznaniy.ru/video/14216*