# EA’s New AI: Next Level Gaming Animations!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=wAbLsRymXe4
- **Date:** 08.01.2023
- **Duration:** 6:17
- **Views:** 254,356

## Description

❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 

📝 The paper "DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds" is available here:
https://github.com/sebastianstarke/AI4Animation

My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Edward Unthank, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Mastodon: https://sigmoid.social/@twominutepapers
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=wAbLsRymXe4) Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to have a look at this new AI that is able to look at a bunch of unstructured motion data, like this, then place a character in a video game and see all the amazing things that it can learn from it. Walking, running, dancing, you name it. And, look at that! Yes, this works for bipeds and quadrupeds at the same time. Now that is quite a challenge, so let's see what is going on here.

First, why is this a challenging problem? Why do we not just copy the movements from the training data? These were recorded by real humans and dogs, after all. Well, that will not cut it here. You see, in video games, we get to control these characters, which means that we can stop any movement at any time and start a new one. Aha! So this requires neural networks to look at a big soup of motion data. And how big is this soup? I will tell you in a moment, and I think you will be very surprised.

And wait, even learning the essence of these movements is not enough; it needs to learn the transitions between them by itself too. Oh yes, transitions are key. Why? Well, look at this.

Here is a previous method, and whenever we change direction, look. These transitions are quite unnatural, and this is not parkour, we are still talking about just running around, a much simpler task to animate. And it is still not working too well. Not good.

And oh my! Are you seeing what I am seeing? What is that? Oh yes, that's right. This is foot sliding. The bane of our existence. More on that in a moment.

But now, let's see how the new method solves this problem. Wow. Now that is fantastic! So much better.

And what about the doggies? Well, previous methods are not too bad here, but the new one is so much more fluid. The movement of the body is now better, but if you don't find that too noticeable, check the movement of the tail too. Previous methods seem a lot more tentative, while the new one is much more lifelike.

This new technique can also perform more advanced actions; I loved how it performs the dribbling here. And note that the ball is controlled by the physics engine, so the character has to react to the ball's movement quickly and convincingly.

And the dance moves it has are really cool too. It can not only dance, but dance so much more convincingly than previous methods. Loving it!

But if we wish to get even crazier, we can even combine different movements for the lower and upper body, and I know from previous papers that we can't just copy and paste two motions together to get this; it is so much more challenging. And the new method does it with flying colors. So good!

One of the many key insights in the paper is that they propose using this diagram, which is the phase space. As we pick a point and move it around there, our character starts moving. However, not all spaces are this intuitive. Look, with previous techniques, good movements required these crazy paths and were thus more difficult to achieve.
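For the technically curious: this phase space is produced by the paper's central building block, a Periodic Autoencoder. Below is a minimal PyTorch sketch of the idea; the layer sizes, kernel widths, and names are illustrative assumptions, and the authors' reference implementation lives in the AI4Animation repository linked above.

```python
# Minimal sketch of a Periodic Autoencoder in the spirit of "DeepPhase:
# Periodic Autoencoders for Learning Motion Phase Manifolds" (Starke et al.).
# Layer sizes and names are illustrative assumptions, not the paper's exact code.
import torch
import torch.nn as nn


class PeriodicAutoencoder(nn.Module):
    def __init__(self, num_dofs, num_phases=5, window_frames=121, fps=60.0):
        super().__init__()
        self.n = window_frames
        # Time axis of the motion window in seconds, centered on the current frame.
        t = torch.linspace(-(window_frames - 1) / 2,
                           (window_frames - 1) / 2, window_frames) / fps
        self.register_buffer("t", t)
        # FFT frequency bins in Hz, with the DC component dropped.
        self.register_buffer("freqs",
                             torch.fft.rfftfreq(window_frames, d=1.0 / fps)[1:])
        # Encoder: temporal convolutions compress joint velocities into a few
        # latent curves, one per phase channel.
        self.encoder = nn.Sequential(
            nn.Conv1d(num_dofs, 64, 25, padding=12), nn.ELU(),
            nn.Conv1d(64, num_phases, 25, padding=12),
        )
        # Per-channel head predicting the 2D phase (sin, cos) of each curve.
        self.phase_head = nn.Linear(window_frames, 2)
        # Decoder: reconstruct the motion window from the periodic curves.
        self.decoder = nn.Sequential(
            nn.Conv1d(num_phases, 64, 25, padding=12), nn.ELU(),
            nn.Conv1d(64, num_dofs, 25, padding=12),
        )

    def forward(self, x):                       # x: (batch, num_dofs, window)
        latent = self.encoder(x)                # (batch, num_phases, window)
        # Differentiable FFT: estimate frequency, amplitude, offset per channel.
        spectrum = torch.fft.rfft(latent, dim=-1)
        power = spectrum.abs()[..., 1:] ** 2    # drop the DC component
        freq = (self.freqs * power).sum(-1) / power.sum(-1)
        amp = 2.0 * power.sum(-1).sqrt() / self.n
        offset = spectrum.real[..., 0] / self.n
        # Phase shift from the learned head, expressed as a normalized angle.
        sc = self.phase_head(latent)            # (batch, num_phases, 2)
        phase = torch.atan2(sc[..., 1], sc[..., 0]) / (2 * torch.pi)
        # Re-synthesize each latent curve as a pure sinusoid: this periodic
        # bottleneck is what forces the embedding onto a phase manifold.
        curves = amp.unsqueeze(-1) * torch.sin(
            2 * torch.pi * (freq.unsqueeze(-1) * self.t + phase.unsqueeze(-1))
        ) + offset.unsqueeze(-1)
        recon = self.decoder(curves)
        # The 2D point per channel (amp*sin, amp*cos) is the phase-space
        # coordinate traced out in the diagram shown in the video.
        manifold = amp.unsqueeze(-1) * torch.stack(
            (torch.sin(2 * torch.pi * phase),
             torch.cos(2 * torch.pi * phase)), dim=-1)
        return recon, manifold
```

Training this with a simple reconstruction loss between `recon` and the input window forces every motion through the periodic bottleneck, and the per-channel 2D points in `manifold` trace the smooth circular paths seen in the video, rather than the crazy ones of previous techniques.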
But wait, we have two more incredible insights about this paper. Number one, hold on to your papers, because this is where I fell off my chair when reading this paper. As promised, let's have a look at how much training data this technique required to learn all these beautiful, fluid movements. What? Are you seeing what I am seeing? That is impossible, right? Look. It learned to animate quadrupeds after seeing just 17 minutes of footage, and, perhaps even better, dancing after just 9 minutes. That is absolutely amazing.

And the second insight is about foot sliding. This new technique shows less foot sliding in most cases than previous methods, and even in the worst case, it is comparable. Look, this was a huge problem for previous techniques. But really, whatever metric we use to measure the new one against previous techniques, it performs better on pretty much all of them. Incredible.

And huge respect to Sebastian Starke, first author of this work, who just keeps publishing these incredible papers, one after another, and almost always with the source code as well.
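As an aside, the foot sliding mentioned above can be quantified with a very simple metric: how fast a foot moves horizontally during frames where it should be planted. The sketch below is one illustrative formulation, not necessarily the exact metric used in the paper; the contact threshold and the choice of up-axis are assumptions.

```python
# Illustrative foot-sliding metric (an assumption, not the paper's exact one):
# average horizontal foot velocity over frames where the foot is near the ground.
import numpy as np


def foot_slide_score(foot_pos, fps=60.0, contact_height=0.05):
    """foot_pos: (frames, 3) world-space positions of one foot joint, in meters.
    Returns the average horizontal slide velocity (m/s) during contact frames."""
    vel = (foot_pos[1:] - foot_pos[:-1]) * fps                 # finite differences
    horizontal_speed = np.linalg.norm(vel[:, [0, 2]], axis=1)  # assume y is up
    # A foot counts as "in contact" when it is close to the ground plane.
    in_contact = foot_pos[:-1, 1] < contact_height
    if not in_contact.any():
        return 0.0
    # Any horizontal motion during contact is sliding.
    return float(horizontal_speed[in_contact].mean())
```

Lower is better here; the skating artifacts visible in the earlier clips of previous methods would show up directly as a higher score.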

### [5:00](https://www.youtube.com/watch?v=wAbLsRymXe4&t=300s) Segment 2 (05:00 - 06:00)

And yes, this also means that the source code for this technique is available. So, we can soon expect much more lifelike character movements in our games and virtual worlds. What a time to be alive!

Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/13335*