# This AI Makes You A Virtual Stuntman! 💪

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=ayuEnJmwocE
- **Date:** 26.05.2022
- **Duration:** 5:58
- **Views:** 73,791

## Description

❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 
A report of theirs is available here: http://wandb.me/human-pose-estimation

📝 The paper "Human Dynamics from Monocular Video with Dynamic Camera Movements" is available here:
https://mrl.snu.ac.kr/research/ProjectMovingCam/MovingCam.html

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=ayuEnJmwocE) Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we will try to create a copy of ourselves, and place it into a virtual world. Now this will be quite a challenge. Normally, to do this, we have to ask an artist to create a digital copy of us, which takes a lot of time and effort. But, there may be a way out. Look. With this earlier AI-based technique, we can take a piece of target geometry, and have an algorithm try to rebuild it to be used within a virtual world. The process is truly a sight to behold. Look at how beautifully it sculpts this piece of geometry until it looks like our target shape!

This is wonderful, but wait a second. If we wish to create a copy of ourselves, we probably want it to move around too. This is, however, a stationary piece of geometry. No movement is allowed here. So, what do we do? What about movement?

Well, have a look at this new technique, where getting a piece of geometry with movement cannot get any simpler than this - just do your thing, record it with a camera, and give it to the AI. And, I have to say that I am a little skeptical. Look, this is what a previous technique could get us. This is not too close to what we are looking for. So, let's see what the new method can do with this data. And! Uh-oh. This is not great. So, is this it? Is the geometry cloning dream dead? Well, don't despair quite yet! This issue happens because our starting position and orientation are not known to the algorithm, but it can be remedied. How? Well, by adding additional data for the AI to learn from.

And now, hold on to your papers, and let's see what it can do now! And... oh my goodness. Are you seeing what I am seeing? Our movement is now replicated in a virtual world almost perfectly! Look at that beautiful animation. Absolutely incredible.

And, if even this is not good enough, look at this result too. So good! Loving it.
And, believe it or not, it has even more coolness up its sleeve. If you have been holding on to your papers so far, now, squeeze that paper, because here comes my favorite part of this work. And that is a step that the authors call scene fitting. What is that? Essentially, the AI reimagines us as a video game character and sees our movement, but has no idea what our surroundings look like. From this video data, it tries to reconstruct our environment, essentially recreating it as a video game level. And that is quite a challenge. Look, at first, it is not close at all. But, over time, it learns what the first obstacle should look like. But still, the rest of the level... not so much! Can this still be improved? Let's have a look together. As we give it some more time, and our character a few more concussions, it starts to get a better feel of the level.

And it really works for a variety of difficult, dynamic motion types. Cartwheels, backflips, parkour jumps, dance moves, you name it. It is a robust technique that can do it all. So cool! And note that the authors of the paper gave us not just the blueprints for the technique in the form of a research paper, but also provided the source code of this technique to all of us, free of charge. Thank you so much! I am sure this will be a huge help in democratizing the creation of video games and all kinds of virtual characters.

And, if we add up all of these together, we get this. This truly is a sight to behold. Look! So much improvement just one more paper down the line. And just imagine what we will be able to do a couple more papers down the line. Well, what do you think? Let me know in the comments below!

### [5:00](https://www.youtube.com/watch?v=ayuEnJmwocE&t=300s) Segment 2 (05:00 - 05:58)

Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/13556*