❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers
📝 The paper "Learning Time-Critical Responses for Interactive Character Control" is available here:
https://mrl.snu.ac.kr/research/ProjectAgile/Agile.html
❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu
Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join
Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2
Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/
#gamedev
Table of Contents (2 segments)
Segment 1 (00:00 - 05:00)
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to take a bunch of completely unorganized human motion data, give it to an AI, grab a controller, and get an amazing video game out of it, which we can play almost immediately. Why almost immediately? I’ll tell you in a moment. So how does this process work? There are similar previous techniques that took a big soup of motion capture data and outdid one another in what they could learn from it. And they did it really well - for instance, one of these AIs was able to not only learn these movements, but even improve them, and even better, adapt them to different kinds of terrain. This other work used a small training set of general movements to reinvent a popular high-jump technique, the Fosbury flop, by itself. This technique allows the athlete to jump backward over the bar, thus lowering their center of gravity. And, it could also do it on Mars. So cool! In the meantime, Ubisoft also taught an AI to not only simulate, but even predict the future motion of video game characters, thus speeding up this process. You can see here how well its predictions line up with the real reference footage. And it could also stand its ground where previous methods failed. Ouch. So, are we done here? Is there nothing else to do? Of course there is! Here, in goes this soup of unorganized motion data, which is used to train a deep neural network, and then…this happens! Yes, the AI learned how to weave together these motions so well that we can even grab a controller, and start playing immediately. Or…almost immediately. And with that, here comes the problem. Do you see it? There is still a bit of a delay between the button press and the motion. A simple way of alleviating that would be to speed up the underlying animations. Well, that’s not going to be it, because this way, the motions lose their realism. Not good. So, can we do something about this?
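To see why the naive fix fails, here is a minimal sketch of what "speed up the underlying animations" means in practice: uniform time-warping of a clip. The clip here is a made-up one-dimensional joint-angle curve (real clips are full skeleton poses per frame), and the function name is invented for this illustration.

```python
import numpy as np

def time_warp(clip, speedup):
    """Resample a 1-D animation curve so it plays `speedup` times faster."""
    n = len(clip)
    n_out = max(2, int(round(n / speedup)))
    # Sample the original curve at the sped-up frame positions.
    src = np.linspace(0, n - 1, n_out)
    return np.interp(src, np.arange(n), clip)

clip = np.sin(np.linspace(0, np.pi, 60))   # a 60-frame swinging motion
fast = time_warp(clip, speedup=2.0)        # same motion in half the frames

print(len(clip), len(fast))                # 60 30
```

The warped clip traverses the same pose range in half the time, so every joint velocity doubles. That is precisely the loss of physical realism the narration points out: the character looks rushed rather than agile.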
Well, hold on to your papers, and have a look at how the new method addresses it! We press the button, and, yes, now that is what I call quick, fluid motion. Yes, this new AI promises to learn time-critical responses to our commands. What does that mean? Well, see the white bar here? By this amount of time, the motions have to finish, and the blue is where we are currently. So the time-critical part means that it promises that the blue bar will never exceed the white bar. This is wonderful, just ask a gamer this: has it ever happened that you already pressed the button, but the action didn’t execute in time? They will say that it happens all the time. But with this, we can perform a series of slow U-turns, and then, progressively decrease the amount of time that we give to the character, and see how much more agile it becomes. Absolutely amazing. The motions really change, and I find all of them quite realistic. Maybe this could be connected to a game mechanic where the super quick, time-critical actions deplete the stamina of our character faster, so we have to use them sparingly. But that’s not all it can do. Not even close. We can even chain many of these crazy actions together, and as you see, our character performs amazing motions that look like they came straight out of The Witcher series. Loving it. What you see here is a teacher network that learns to efficiently pull off these moves, and then, we fire up a student neural network that seeks to become as proficient as the teacher, but with a smaller and more compact neural network. This is what we call policy distillation. So, is the student as good as its teacher? Let’s have one more look at the teacher…and the student. They are very close. Actually, wait a second…did you see it? The student is actually even more responsive than its teacher was. This example showcases it more clearly. It can even complete this slalom course, and we might even make a parkour game with it. And here comes the best part.
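The teacher-student idea above can be sketched in a few lines. This is a hedged toy illustration of policy distillation, not the paper's actual method: a large, frozen "teacher" policy is imitated by a deliberately smaller "student" network trained only to reproduce the teacher's action probabilities. All network sizes, names, and the random teacher weights are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 16, 4
TEACHER_HIDDEN, STUDENT_HIDDEN = 64, 8   # student is much more compact

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Frozen teacher: random weights stand in for a fully trained policy.
W1_t = rng.normal(0, 0.5, (STATE_DIM, TEACHER_HIDDEN))
W2_t = rng.normal(0, 0.5, (TEACHER_HIDDEN, ACTION_DIM))
teacher = lambda s: softmax(np.tanh(s @ W1_t) @ W2_t)

# Student: smaller two-layer net, trained purely on the teacher's outputs.
W1_s = rng.normal(0, 0.1, (STATE_DIM, STUDENT_HIDDEN))
W2_s = rng.normal(0, 0.1, (STUDENT_HIDDEN, ACTION_DIM))

def student(s):
    h = np.tanh(s @ W1_s)
    return h, softmax(h @ W2_s)

test_states = rng.normal(0, 1, (256, STATE_DIM))
gap_before = np.abs(student(test_states)[1] - teacher(test_states)).mean()

lr = 0.5
for _ in range(2000):
    s = rng.normal(0, 1, (64, STATE_DIM))    # states sampled at random
    target = teacher(s)                      # teacher's action probabilities
    h, pred = student(s)
    g_logits = (pred - target) / len(s)      # grad of cross-entropy w.r.t. logits
    g_h = g_logits @ W2_s.T * (1 - h ** 2)   # backprop through tanh
    W2_s -= lr * h.T @ g_logits
    W1_s -= lr * s.T @ g_h

gap_after = np.abs(student(test_states)[1] - teacher(test_states)).mean()
print(f"mean probability gap: {gap_before:.3f} before, {gap_after:.3f} after")
```

After training, the student's action distribution tracks the teacher's far more closely than at initialization, despite the student having an eighth of the hidden units. That compactness is exactly why a distilled student can respond faster at runtime.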
The required training data is not a few days' worth, not even a few hours, but only a few minutes, and the training time is on the order of a few hours. And this, we only have to do once,
Segment 2 (05:00 - 06:00)
and we are free to use the trained neural network for as long as we please. Now, one more really cool tidbit in this work is that this training data doesn't have to be realistic, it can be a soup of highly stylized motions, and the AI can still weave them together really well. Is that even possible? Yes, my goodness, it is possible! The input motions don’t even have to come from humans, they can come from quadrupeds too. This took only 5 minutes of motion capture data, and the horse AI became able to transition between these movement types. This is yet another amazing tool in democratizing character animation. Absolutely amazing. What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!