Ubisoft’s New AI Predicts the Future of Virtual Characters! 🐺

Two Minute Papers · 22.12.2021 · 379,647 views · 16,553 likes


Video description
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

📝 The paper "SuperTrack – Motion Tracking for Physically Simulated Characters using Supervised Learning" is available here:
- https://static-wordpress.akamaized.net/montreal.ubisoft.com/wp-content/uploads/2021/11/24183638/SuperTrack.pdf
- https://montreal.ubisoft.com/en/supertrack-motion-tracking-for-physically-simulated-characters-using-supervised-learning/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2

Károly Zsolnai-Fehér's links:
- Instagram: https://www.instagram.com/twominutepapers/
- Twitter: https://twitter.com/twominutepapers
- Web: https://cg.tuwien.ac.at/~zsolnai/

Table of contents (2 segments)

Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to take an AI and use it to synthesize these beautiful, crisp movement types. And you will see in a moment that this can do much, much more. So how does this process work? There are similar previous techniques that took a big soup of motion capture data and tried to outdo each other in what they could learn from it. And they did it really well - for instance, one of these AIs was able to not only learn these movements, but even improve them, and even better, adapt them to different kinds of terrains. This other work used a small training set of general movements to reinvent the popular high jump technique, the Fosbury flop, by itself. This allows the athlete to jump backward over the bar, thus lowering their center of gravity. And, it could also do it on Mars. How cool is that? But, this new paper takes a different vantage point. Instead of asking for more and more training data, it seeks to settle for less. But first, let’s see what it does. Yes, we see that this new AI can match reference movements well, but that’s not all of it. Not even close. The hallmark of a good AI is not being restricted to just a few movements, but being able to synthesize a great variety of different motions. So, can it do that? Wow, that is a ton of different kinds of motions, and the AI always seems to match the reference motions really well across the board. Bravo! We’ll talk more about what that means in a moment. And here comes the best part, it does not only generalize to a variety of motions, but body types as well. And we can bring these body types to different virtual environments too. This really seems like the whole package. And now, if you have been holding on to your papers, now, squeeze that paper, because we can control them, in real time! My goodness. So, how does all this work? Let’s see…yes, here, the green is the target movement we would like to achieve, and the yellow is the AI’s result.
Now, here, the trails represent the past. So, how close are they? Well, of course, we don’t know exactly yet, so let’s line them up, and…now we’re talking! They are almost the same. But wait, does this even make sense? Aren’t we just inventing a copying machine here? What is so interesting about being able to copy an already existing movement? But, no-no-no, we are not copying here. Not even close. What this new work does instead is that we give it the initial pose, and ask it to predict the future. In particular, we ask what is about to happen to this model. And the result is…a messy trail. So, what does this mess mean? Well, actually, the mess is great news, this means that the true physics results and the AI predictions line up so well that they almost completely cover each other. But wait, this is not the first technique to attempt this…what about previous methods? Can they also do this? Well, these are all doing pretty well. Maybe this new work is not that big of an improvement…wait a second…oh boy. One contestant is down. And now, two have failed, and I love how they still keep dancing while down. A+ for effort, little AIs. But the third one is still in the game…careful…ouch! Yup, the new method is absolutely amazing. No question about it. And of course, do not be fooled by these mannequins, these can be mapped to real characters in real video games too. So, this amazing new method is able to create higher quality animations, lets us grab a controller and play with them, and also requires a shorter training time. Not only that, but the new method predicts more, and hence, relies much less on the motion dataset we feed it, and therefore, it is also less sensitive to its flaws. I love this. A solid step towards democratizing the creation of superb computer animations.
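The "predict the future" idea above can be sketched in miniature. This is a hypothetical toy stand-in, not the paper's actual deep network or motion capture data: we fit a one-step dynamics model with plain supervised regression on a simple point-mass system, then roll the learned model forward from one initial pose and compare its trail against the true physics, just like the overlapping-trails demo in the video.

```python
import numpy as np

# Toy illustration (assumed setup, not SuperTrack's architecture):
# a point mass with drag, state = [position, velocity].
rng = np.random.default_rng(0)
DT, DRAG = 0.1, 0.9

def true_step(state, action):
    # Ground-truth physics: one simulation step.
    pos, vel = state
    vel = DRAG * vel + DT * action
    pos = pos + DT * vel
    return np.array([pos, vel])

# Collect a "soup" of (state, action) -> next_state transitions.
X, Y = [], []
for _ in range(2000):
    s = rng.uniform(-1, 1, size=2)
    a = rng.uniform(-1, 1)
    X.append([s[0], s[1], a])
    Y.append(true_step(s, a))
X, Y = np.array(X), np.array(Y)

# Supervised learning: these toy dynamics happen to be linear,
# so least squares recovers them exactly; the paper uses a
# neural network for the same job on full-body character states.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def model_step(state, action):
    # The learned world model's one-step prediction.
    return np.array([state[0], state[1], action]) @ W

# Roll out true physics and the learned model from the same pose.
s_true = s_model = np.array([0.0, 0.5])
for t in range(50):
    a = np.sin(0.3 * t)  # an arbitrary control signal
    s_true = true_step(s_true, a)
    s_model = model_step(s_model, a)

# The two trails almost completely cover each other.
print(np.max(np.abs(s_true - s_model)))
```

The "messy trail" in the video is exactly this rollout comparison: when the learned predictor is accurate, its trajectory sits on top of the simulator's and the overlap looks like one tangled line.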

Segment 2 (05:00 - 05:36)

What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!
