This Neural Network Combines Motion Capture and Physics
Duration: 3:39


Two Minute Papers · 25 Jan 2020 · 127,545 views · 6,870 likes


Video description
❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

📝 The paper "DReCon: Data-Driven Responsive Control of Physics-Based Characters" is available here:
- https://montreal.ubisoft.com/en/drecon-data-driven-responsive-control-of-physics-based-characters/
- https://static-wordpress.akamaized.net/montreal.ubisoft.com/wp-content/uploads/2019/11/13214229/DReCon.pdf

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
https://www.patreon.com/TwoMinutePapers

Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
- Instagram: https://www.instagram.com/twominutepapers/
- Twitter: https://twitter.com/karoly_zsolnai
- Web: https://cg.tuwien.ac.at/~zsolnai/

#gamedev

Table of contents (1 segment)

Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we often talk about computer animation and physical simulations, and these episodes are typically about one or the other. You see, it is possible to teach a simulated AI agent to lift weights and jump really high using physical simulations to make sure that the movements and forces are accurate. The simulation side is always looking for correctness. However, let's not forget that things also have to look good. Animation studios are paying a fortune to record motion capture data from real humans, and sometimes even dogs, to make sure that these movements are visually appealing. So, is it possible to create something that reacts to our commands with the controller, looks good, and also adheres to physics? Well, have a look!

This work was developed at Ubisoft La Forge. It responds to our input via the controller, and the output animations are fluid and natural. Since it relies on a technique called deep reinforcement learning, it requires training. You see that early on, the blue agent is trying to imitate the white character, and it is not doing well at all. It basically looks like me when going to bed after reading papers all night.

The white agent's movement is not physically simulated and was built using a motion database with only 10 minutes of animation data. This is the one that is in the "looks good" category. Or, it would look really good if it weren't pacing around like a drunkard, so the question naturally arises: who in their right mind would control a character like this? Well, of course, no one! This sequence was generated by an artificial, worst-case player, which is a nightmare situation for any AI to reproduce. Early on, it indeed is a nightmare. However, after 30 hours of training, the blue agent learned to reproduce the motion of the white character, while being physically simulated. So, what is the advantage of that?
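The transcript does not spell out DReCon's actual reward, but the training setup it describes — a physically simulated agent rewarded for imitating a kinematic mocap character — is typically driven by an imitation-style reward. Here is a minimal sketch of that idea; the function name, the weights, and the exact error terms are hypothetical illustrations, not the paper's formulation:

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                     w_pose=2.0, w_vel=0.1):
    """Reward the simulated (blue) agent for matching the kinematic
    (white) mocap reference at the current frame.

    sim_pose/ref_pose: joint-position vectors; sim_vel/ref_vel: joint
    velocities. Exponentiating the negative squared errors keeps the
    reward in (0, 1], with 1.0 meaning a perfect match.
    """
    pose_err = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    vel_err = np.sum((np.asarray(sim_vel) - np.asarray(ref_vel)) ** 2)
    return float(np.exp(-w_pose * pose_err) * np.exp(-w_vel * vel_err))
```

A deep reinforcement learning loop would maximize the sum of such per-frame rewards, which is why the blue agent's motion gradually converges to the white character's over the 30 hours of training mentioned above.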
Well, for instance, it can interact with the scene better, and it is robust against perturbations. This means that it can rapidly recover from undesirable positions. This can be validated via something that the paper calls impact testing. Are you thinking what I am thinking? I hope so, because I am thinking about throwing blocks at this virtual agent, one of our favorite pastimes at Two Minute Papers, and it will be able to handle them. Whoops! Well, most of them, anyway. It also reacts to a change in direction much quicker than previous agents.

If all that was not amazing enough, the whole control system is very light and takes only a few microseconds, most of which is spent not on the control part but on the physics simulation. So, with the power of computer graphics and machine learning research, animation and physics can now be combined beautifully: it does not limit controller responsiveness, it looks very realistic, and it is very likely that we'll see this technique in action in future Ubisoft games. Outstanding.

This video was supported by you on Patreon. If you wish to watch these videos in early access, or get your name immortalized in the video description, make sure to go to Patreon.com/TwoMinutePapers and pick up one of those cool perks. Or, we are also test driving the early access program here on YouTube: just go ahead and click the join button, or use the link in the description. Thanks for watching and for your generous support, and I'll see you next time!
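The impact testing described above — throw something at the agent, then see whether it recovers — can be expressed as a simple pass/fail check. The sketch below is a toy illustration, not the paper's protocol: `pd_step` is a hypothetical damped-spring controller standing in for the learned policy, and the threshold and step counts are arbitrary:

```python
def recovers_from_impact(step, state, impulse, tol=0.05, max_steps=500):
    """Impact test: apply a velocity impulse (the thrown block), then
    check whether the controller brings the state back near its
    setpoint (0, 0) within max_steps simulation steps."""
    pos, vel = state
    vel += impulse  # the block hits the character
    for _ in range(max_steps):
        pos, vel = step(pos, vel)
        if abs(pos) < tol and abs(vel) < tol:
            return True
    return False

def pd_step(pos, vel, kp=10.0, kd=3.0, dt=0.02):
    """Toy stand-in for the learned policy: a damped spring pulling a
    1-D 'character' back to upright (pos = 0) via explicit Euler."""
    acc = -kp * pos - kd * vel
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel
```

A real test harness would apply random forces to the full-body physics simulation instead of a 1-D state, but the pass criterion — returning to the desired motion within a time budget — is the same idea.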
