# New AI: 6,000,000,000 Steps In 24 Hours!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=_AnovXOJOaw
- **Date:** 10.12.2023
- **Duration:** 8:28
- **Views:** 106,164

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

📝 The paper "Physics-based Motion Retargeting from Sparse Inputs" is available here:
https://www.cs.ubc.ca/~dreda/retargeting.html

📝 My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Bret Brizzee, Gaston Ingaramo, Gordon Child, Jace O'Brien, Jie Yu, John Le, Kyle Davis, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Putra Iskandar, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu
Károly Zsolnai-Fehér's research works: https://cg.tuwien.ac.at/~zsolnai/
Twitter: https://twitter.com/twominutepapers

## Contents

### [0:00](https://www.youtube.com/watch?v=_AnovXOJOaw) Segment 1 (00:00 - 05:00)

I am very happy today, because we have an incredible paper here. Get this: it helps us transform into virtual characters and transfer our movements to them. Wow. But it gets better: we can even control virtual characters with a different morphology than our own bodies. We can be a tiny little mouse, or a dinosaur.

And wait, it gets even better. From a three-point input. Not full body. This is not motion capture, where an entire body suit is used and we know the movement of every body part. No sir! What we have here is so much more challenging: just the movement of the VR headset and two controllers. Nothing else. That is really tough. And to me, it sounds absolutely impossible. So, is it impossible?

Well, almost! Let’s see if this paper can pull it off. Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér.

First, in a training step, it uses motion capture data to learn which human joints match the virtual character’s joints. Unfortunately, this alone does not work great, but wait, don’t despair. Not all is lost. Why is that? Well, we can now create a reinforcement learning agent which can lean on this jerky animation and learn to control the character properly. Here. Whoa! This looks so much better. Loving it. And note that at this step, we can throw away all the motion capture data; we only need the headset and the two controllers. So far so good. So, are we done?

Well, we are not done yet. Not even close. We also need a little trickery for dealing with the head orientation, otherwise we will look like this. And it gets even worse. Due to the heavy head of the mouse character, even this can happen. Ouch. But, after this additional step, look. Mmm! Much better.

Additionally, to add more realism, we can even decide how we wish to control the tail of the dinosaur as well. Loving it.

Well, we are still not done yet. The paper also proposes enriching the training data with additional information for the AI to learn from, and then throwing this data away. But come on, if we don’t do that, what’s the worst that can happen? Well, this. This can happen. Ouch.

But, putting this all together, hold on to your papers, Fellow Scholars, and look! It finally works! Now we have an absolutely miraculous algorithm that can perform these movements across different morphologies, with different character types. It does it in an energy-efficient way, so that we get relatively elegant, smooth movements, and not those jerky ones. Excellent.

But wait a second… are you thinking what I am thinking? Remember, it has information from the headset and the controllers. Does that mean what I think it means? Yes, that’s exactly what it means. It has no lower-body information whatsoever, and yet it can still reconstruct all these movements really well. Wow, that is crazy.

And it gets better. Larger and smaller characters will be moved around differently. You just walk around at your own pace, but for them, depending on their size, more or maybe fewer tippy taps are required.

And advanced movements are also possible. Staggering, balancing, you name it. Well, for smaller, more agile characters, that is. And we can even try our dancing moves. In this case, they are perhaps a little inept, but in a very cute way. And not just movement, but acting can be done too. So good! Loving it.
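To make the training recipe above a bit more tangible, here is a minimal sketch of what a three-point-driven control policy with an energy-penalized reward could look like. To be clear, this is my own illustration, not the paper’s code: the network shape, the input sizes, the `TorquePolicy` and `reward` names, and the reward weights are all assumptions.

```python
# Minimal sketch (not the paper's code): a torque policy driven only by
# three-point input. All names and sizes here are hypothetical.
import torch
import torch.nn as nn

# Three tracked points (headset + two controllers), each with a 3-D
# position and a 6-D rotation representation -> 3 * 9 = 27 numbers.
THREE_POINT_DIM = 27
# Proprioceptive state of the simulated character (joint angles,
# velocities, ...); varies per morphology. Placeholder size.
CHAR_STATE_DIM = 64
NUM_JOINTS = 20  # e.g. more for a dinosaur with a tail, fewer for a mouse


class TorquePolicy(nn.Module):
    """Maps (three-point input, character state) to joint torques."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(THREE_POINT_DIM + CHAR_STATE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_JOINTS),  # one torque per actuated joint
        )

    def forward(self, three_point: torch.Tensor,
                char_state: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([three_point, char_state], dim=-1))


def reward(tracking_error: torch.Tensor, torques: torch.Tensor,
           w_energy: float = 1e-3) -> torch.Tensor:
    """Track the reference pose while penalizing effort.

    The energy penalty is what would push the learned motion toward
    the smooth, efficient movements mentioned in the video.
    """
    return -tracking_error - w_energy * torques.pow(2).sum(dim=-1)


policy = TorquePolicy()
torques = policy(torch.randn(1, THREE_POINT_DIM),
                 torch.randn(1, CHAR_STATE_DIM))
r = reward(tracking_error=torch.tensor([0.25]), torques=torques)
print(torques.shape, r.shape)  # torch.Size([1, 20]) torch.Size([1])
```

In a full system, a policy like this would be trained with a standard reinforcement learning algorithm inside a physics simulator, against reference motions built from the mocap data, which is why the mocap data itself can be thrown away once training is done.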
And yes, it gets even better, if that is at all possible. I have found three more mind-blowing things about this paper.

One, it has been trained on just one machine, with one consumer graphics card, and not even a high-end one. The training took 24 hours, and this step only has to be done once; after that, we can use it in real time for as long as we wish.

Two, so how much training data is required for this? Get this: only 4 hours.
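A quick sanity check on that first point: the video’s title quotes 6,000,000,000 training steps over that 24-hour run (the step count is from the title, not something I measured), which works out to roughly 69,000 simulation steps per second:

```python
# Back-of-the-envelope throughput implied by the video title's figures.
steps = 6_000_000_000          # step count quoted in the title
seconds = 24 * 60 * 60         # 24-hour training run = 86,400 s
print(f"{steps / seconds:,.0f} steps/s")  # ~69,444 steps/s
```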

### [5:00](https://www.youtube.com/watch?v=_AnovXOJOaw&t=300s) Segment 2 (05:00 - 08:00)

It looks at 4 hours of footage from a bunch of humans, and then it generalizes the movement to new people.

And, three. What you are about to see is the reason Two Minute Papers exists. Look. This paper has been seen by fewer than 100 people. Such amazing work, and I am worried that if we don’t talk about it here, almost no one will talk about it. So if you Fellow Scholars would help me spread the word about these amazing AI / computer graphics papers, that would be amazing. Thank you.

Now don’t forget, this technique is still not completely automatic. I noted earlier that some of the joints of the human are mapped to the new character, so you can’t just drop in a completely new character without some additional work. However, in return, vastly different morphologies work. I would estimate that not a great deal of work has to be done per character, but it still requires a bit of work and expertise. And I bet that just one more paper down the line, we will be able to do this automatically.

Just imagine a world where we can all become virtual characters, where we can play together, and the way we move is translated by an algorithm to feel like we are really any creature we want. Even if they have a completely different morphology: big head, small head, floppy ears, or a tail, anything you can imagine. And I am absolutely blown away by the fact that it may not even need to track our lower body for that. Only the three-point input: head and hands.

And don’t forget, the differences between these models are huge. There are vast variations in weight, with a 25x difference between the lightest and heaviest character; tail joints may or may not exist, and even if they do, there is a different number of them; the same goes for ear joints; and the amount of torque they can produce is also subject to a great deal of variation. And having a modern algorithm that can translate all of our motions to these characters is absolutely incredible. Wow. What a time to be alive!
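Since the per-character setup is hand-authored, one plausible way to picture it is a small spec holding the human-to-character joint mapping plus the morphology parameters that vary so much between characters. This is purely illustrative; every field name and value below is a hypothetical stand-in, not taken from the paper.

```python
# Hypothetical per-character spec, illustrating the manual setup the video
# mentions: a hand-authored joint mapping plus morphology parameters that
# vary wildly between characters (weight, tail/ear joints, torque limits).
from dataclasses import dataclass, field


@dataclass
class CharacterSpec:
    name: str
    mass_kg: float                      # up to ~25x between characters
    max_torque_nm: float                # actuator strength varies a lot
    tail_joints: int = 0                # may simply not exist
    ear_joints: int = 0
    # Human joint -> character joint, authored once per character.
    joint_map: dict[str, str] = field(default_factory=dict)


mouse = CharacterSpec(
    name="mouse", mass_kg=4.0, max_torque_nm=30.0,
    tail_joints=5, ear_joints=2,
    joint_map={"head": "skull", "l_hand": "l_paw", "r_hand": "r_paw"},
)
dino = CharacterSpec(
    name="dinosaur", mass_kg=100.0, max_torque_nm=400.0,
    tail_joints=8,
    joint_map={"head": "head", "l_hand": "l_claw", "r_hand": "r_claw"},
)

print(dino.mass_kg / mouse.mass_kg)  # the spread the video mentions: 25.0
```

Filling in one such spec per character would be the "bit of work and expertise" the video refers to.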

---
*Source: https://ekstraktznaniy.ru/video/12865*