# This AI Learns Human Movement From Videos

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=AGm3hF_BlYM
- **Date:** 06.12.2018
- **Duration:** 2:47
- **Views:** 48,166
- **Source:** https://ekstraktznaniy.ru/video/14383

## Description

The paper "Towards Learning a Realistic Rendering of Human Behavior" is available here:
https://compvis.github.io/hbugen2018/

Pick up cool perks on our Patreon page:
https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty.

Thumbnail background image credit: https://pixabay.com/pho

## Transcript

### Segment 1 (00:00 - 02:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a previous episode, we discussed a technique where we could specify a low-quality image of a test subject and a photo of a different person, and the algorithm would transform our test subject into that pose. With another algorithm, we can transfer our facial gestures onto a different target subject. This new method does something completely different: here, we can copy a full-body movement from a video and transfer it onto a target person. This way, we can appear to be playing tennis or baseball, or finally be able to perform a hundred chin-ups. Well, at least on Instagram.

Now, look here! Up here you see the target poses, and on the left, the target subjects. Between them, we see the output of this algorithm with the target subjects taking these poses. As you can see, the algorithm is quite consistent, in the sense that during walking we often encounter the same pose, which results in a very similar image. That's exactly the kind of consistency we're looking for! Remarkably, this algorithm is also able to synthesize views of these target subjects from angles it had never seen before. For instance, the backside of this person was never shown to the algorithm, and it correctly guesses interesting details, such as that the belt of this character continues around the waist. Really cool, I love it! We can also put these characters in a virtual environment and animate them there.

Now, this work, like most papers that explore something completely new, is raw and experimental. Clearly, there are issues with occlusions and flickering, and the silhouettes of the characters give the trick away. Anyone looking at this footage can tell in a second that it is not real. The reason I am so excited about this is that we now see that this is a viable concept, and it will provide fertile ground for new follow-up research works to improve upon.
Two more papers down the line, it will probably work in HD and look significantly better. Just imagine how amazing that would be for movies, computer games, and telepresence applications. Sign me up! Computer graphics research also has a vast body of papers on how to illuminate these characters so that they appear as if they were really there in the environment. Will this be done with computer graphics, or through AI? I am really keen to see how these fields will come together to solve such a challenging problem. What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!
