# Can An AI Perform A Cartwheel? 🤸‍♂️

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=3IFLVOaFAus
- **Date:** 14.05.2021
- **Duration:** 6:43
- **Views:** 58,422

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

📝 The paper "Learning and Exploring Motor Skills with Spacetime Bounds" is available here:
https://milkpku.github.io/project/spacetime.html

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic,  Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=3IFLVOaFAus) Introduction

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we will see how this AI-based technique can help our virtual characters not only learn

### [0:06](https://www.youtube.com/watch?v=3IFLVOaFAus&t=6s) Reference Motion

new movements, but even perform them with style. Now, here you see a piece of reference motion - this is what we would like our virtual character to learn. The task is then to enter a physics simulation and find the correct joint angles and movements to perform it.

### [0:32](https://www.youtube.com/watch?v=3IFLVOaFAus&t=32s) Style

Of course, this is already a challenge because even a small difference in joint positions can make a great deal of difference in the output. Then, the second, more difficult task is to do this with style. No two people perform cartwheels exactly the same way, so would it be possible to have our virtual characters imbued with style, so that they, much like people, would have their own kinds of movement? Is that possible somehow? Well, let’s have a look at the simulated characters.

### [1:03](https://www.youtube.com/watch?v=3IFLVOaFAus&t=63s) Simulation

Nice, so this chap surely learned to, at the very least, reproduce the reference motion, but let’s stop the footage here and there and look for differences. Oh yes, this is indeed different. This virtual character indeed has its own style, but at the same time, it is still faithful to the original reference motion. This is a magnificent solution to a very difficult task. The authors made it look deceptively easy, but you will see in a moment that this is really challenging. So how does all this magic happen? How do we imbue these virtual characters with style?

### [1:49](https://www.youtube.com/watch?v=3IFLVOaFAus&t=109s) Defining Style

Well, let’s define style as a creative deviation from the reference motion, so it can be different, but not too different, or else, this happens. So, what are we seeing here? Here, in green, you see the algorithm’s estimation of the center of mass for this character. Our goal would be to reproduce that as faithfully as possible. That would be the copying machine solution. But here comes the key to style. And that key is using spacetime bounds. This means that the center of mass of the character can deviate from the original, but only as long as it remains strictly within these boundaries. And that is where the style emerges!

If we wish to add a little style to the equation, we can set relatively loose spacetime bounds, leaving room for the AI to explore. If we wish to strictly reproduce the reference motion, we can set the bounds to be really tight instead. This is a great technique for learning running, jumping, and rolling behaviors; it can even perform a stylish cartwheel and backflips. Oh yeah. Loving it. These spacetime bounds also help us retarget the motion to different virtual body types. Furthermore, they also help us salvage really poor-quality reference motions and make something useful out of them. So, are we done here? Is that all?
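The core idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the paper's actual implementation), assuming a simplified 1-D center-of-mass trajectory sampled over time: the simulated motion may deviate from the reference, but only while it stays within a tolerance band, whose width trades off style against fidelity.

```python
# Sketch of the spacetime-bounds idea: the simulated center of mass may
# deviate from the reference trajectory, but only within a tolerance band.

def within_spacetime_bounds(reference, simulated, bound_width):
    """Return True if every simulated sample stays within
    `bound_width` of the reference at the same time step."""
    return all(abs(s - r) <= bound_width
               for r, s in zip(reference, simulated))

# Hypothetical 1-D center-of-mass heights over time.
reference = [1.0, 1.1, 1.3, 1.2, 1.0]
stylized  = [1.0, 1.2, 1.4, 1.1, 1.0]  # deviates, but not too much

# Loose bounds leave room for style; tight bounds force faithful copying.
print(within_spacetime_bounds(reference, stylized, bound_width=0.15))  # True
print(within_spacetime_bounds(reference, stylized, bound_width=0.05))  # False
```

In the actual method, the bound is enforced during reinforcement learning, so the character is free to explore any motion whose trajectory stays inside the band; the sketch only shows the accept/reject test that the band implies.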

### [3:28](https://www.youtube.com/watch?v=3IFLVOaFAus&t=208s) Bounds

No, not in the slightest! Now, hold on to your papers, because here comes the best part. With these novel spacetime bounds, we can specify additional stylistic choices for the character's moves. For instance, we can encourage the character to use more energy for a more intense dancing sequence, or we can make it sleepier by asking it to decrease its energy use.

### [3:57](https://www.youtube.com/watch?v=3IFLVOaFAus&t=237s) Body Volume

And I wonder: if we can put bounds on energy use, can we do more and do the same with, for instance, body volume use? Oh yeah! This really opens up new kinds of motions that I haven’t seen virtual characters perform yet. For instance, this chap was encouraged to use its entire body volume for a walk, and thus looks like someone who is clearly looking for trouble. And this poor thing just finished their paper for a conference deadline and is barely alive.

### [4:33](https://www.youtube.com/watch?v=3IFLVOaFAus&t=273s) Mixing motions

We can even mix multiple motions together. For instance, what could be a combination of a regular running sequence and a bent walk? Well, this. And if we have a standard running sequence and a happy walk, we can fuse them into a happy running sequence. How cool is that? So, with this technique, we can finally not only teach virtual characters to perform nearly any kind of reference motion, but we can even ask them to do so with style. What an incredible idea. Loving it! Now, before we go, I would like to show you a short message that we got that melted my heart.

### [5:21](https://www.youtube.com/watch?v=3IFLVOaFAus&t=321s) Message from Nathan

This one I got from Nathan, who was inspired by these incredible works and decided to turn his life around and go back to study more. I love my job, and reading messages like this is one of the absolute best parts of it. Congratulations, Nathan, thank you so much, and good luck! If you feel that you have a similar story with this video series, make sure to let us know in

### [5:48](https://www.youtube.com/watch?v=3IFLVOaFAus&t=348s) Sponsor Message

the comments! Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/13911*