These AI-Driven Characters Dribble Like Mad! 🏀
6:04

Two Minute Papers · 01.08.2020 · 359,356 views · 15,341 likes


Video description
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️
Their mentioned post is available here: https://app.wandb.ai/cayush/pytorchlightning/reports/How-to-use-Pytorch-Lightning-with-Weights-%26-Biases--Vmlldzo2NjQ1Mw
📝 The paper "Local Motion Phases for Learning Multi-Contact Character Movements" is available here: https://github.com/sebastianstarke/AI4Animation
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers
Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

Table of contents (6 segments)

<Untitled Chapter 1>

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In computer games and all kinds of applications where we are yearning for realistic animation, we somehow need to tell our computers how the different joints and body parts of these virtual characters are meant to move around over time. Since the human eye is very sensitive to even the tiniest inaccuracies, we typically don’t program these motions by hand, but instead, we often record a lot of real-life motion capture data in a studio, and try to reuse that in our applications. Previous techniques have tackled quadruped control, and we can even teach bipeds to interact with their environment in a realistic manner. Today we will have a look at an absolutely magnificent piece of work, where the authors carved out a smaller subproblem, and made a solution for it that is truly second to none. And this subproblem is simulating virtual characters playing basketball. Like with previous works, we are looking for realism in the movement, and for games, it is also a requirement that the character responds to our controls well. However, the key challenge is that all we are given is 3 hours of unstructured motion capture data. That is next to nothing, and from this next to nothing, a learning algorithm has to learn to understand these motions so well that it can weave them together, even when a specific movement combination is not present in this data. That is quite a challenge. Compared to many other works, this data is really not a lot, so I am excited to see what value we are getting out of these three hours. At first I thought we’d get only very rudimentary motions, and boy, was I dead wrong on that

Locomotion Control

one. We have control over this character and can perform these elaborate maneuvers, and it remains very responsive even if we mash the controller like a madman, producing these sharp turns. As you see, it can handle these cases really well. And not only that, but it is so well done, we can even dribble through a set of obstacles, leading to a responsive, and enjoyable gameplay.

Dribble Maneuvers

About these dribbling behaviors. Do we get only one boring motion, or not? Not at all: it was able to mine out not just one but many kinds of dribbling motions, and is able to weave them into other moves as soon as we interact with the controller. This is already very convincing, especially from just three hours of unstructured motion

Ball Holding

capture data. But this paper is just getting started. Now, hold on to your papers, because we can also shoot and catch the ball, move it around

Shoot and Catch

that is very surprising, because it has seen so little shooting data, let’s see…yes, less than 7 minutes. My goodness. And it keeps going: even more surprisingly, it can handle unexpected movements, which is remarkable given the limited training data. These crazy corner cases are typically learnable only when they are available in abundance in the

Opponent Interaction

training data, which is not the case here. Amazing. When we compare these motions to a previous method, we see that the movements of both the character and the ball are much more lively. For instance, here, you can see that the Phase-Functioned Neural Network, PFNN in short, almost makes it seem like the ball has to stick to the hand of the player for an unhealthy amount of time to be able to create these motions. It doesn’t happen at all with the new technique. And remember, this new method is also much more responsive to the player’s controls, and thus, more enjoyable not only to look at, but to play with. This is an aspect that is hard to measure, but it is not to be underestimated in the general playing experience. Just imagine what this research area will be capable of not in a decade, but just two more papers down the line. Loving it. Now, at the start of the video, I noted that the authors carved out a small use-case, which is training an AI to weave together basketball motion capture data in a manner that is both realistic and controllable. However, many times in research, we look at a specialized problem, and during that journey we learn general concepts that can be applied to other problems as well. That is exactly what happened here, as you see, parts of this technique can be generalized for quadruped control as well. This good boy is pacing and running around beautifully. And…you guessed right, our favorite biped from the previous paper is also making an appearance. I am absolutely spellbound by this work, and I hope that now, you are too. Can’t wait to see this implemented in newer games and other real-time applications. What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!
