# NVIDIA’s New AI Is Gaming With Style!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=_X6zIVPlJ6w
- **Date:** 14.08.2023
- **Duration:** 6:01
- **Views:** 121,935
- **Source:** https://ekstraktznaniy.ru/video/13075

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 

📝 The paper "Synthesizing Physical Character-Scene Interactions" is available here:
https://dl.acm.org/doi/abs/10.1145/3588432.3591525
https://research.nvidia.com/publication/2023-08_synthesizing-physical-character-scene-interactions

My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, 

## Transcript

### Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to talk about the unfortunate fortunate situation of computer animation in video games. Now I hear you asking, Doctor, what do you mean by unfortunate fortunate?

Well, let's start with the unfortunate part. If we have created a beautiful virtual world, and we have populated it with a bunch of characters, we now wish to see the two interact. And therein lies the problem. With previous techniques, either the quality of motion is lacking, or the generality. In other words, if the motions are of high quality, they don't transfer to new scenes, and if they do transfer to new scenes, they are typically not high quality.

So that is the unfortunate situation. Now what about the fortunate part? What is so fortunate? Well, I feel very fortunate today. Get this. I seem to be the fourth person to have read this paper. And I am very excited to say that I think I have found a little gem to share with you that might just solve the problem we have at hand. Look.

This one builds on top of the Adversarial Motion Priors (AMP) paper from SIGGRAPH 2021. That paper could place a virtual character in a virtual environment; it looked at a dataset of real clips of movements like running, jumping, rolling, and even punching, and it was able to learn to use those motions in these video game worlds to its advantage to finish a level. Now, what about this new technique? Does it do what we really want it to do? Yes, yes it does! Fantastic. It builds on top of this AMP technique and generalizes it so that these characters will not just run around in the scene; it teaches them to interact with the scene as well.

So what does this mean? Why is this new? Well, because, one, this is finally able to create motions that are as high quality as the ones you can see here, as it is encouraged to solve these tasks with style. And two, it also generalizes to new objects and environments at the same time. An absolute miracle.

This was trained with a lot of different object types, orientations and locations. And when I say that they can do this with style, I mean style. Look at this jolly chap. He is fabulous. Love it. And these objects are newly designed objects that the AI hasn't seen yet. And it can still use them just fine. High-quality motion is being created in a way that finally generalizes. So good!

And this AI has undergone strict testing, and the scientists found that we can put any kind of box anywhere, and this little AI will be able to bring it back. I also love this scene, which shows a variety of different motions for walking towards this object and sitting on it. If only they had made a musical chairs version of it where one of these poor little characters would be eliminated each round. May the best AI win!

And, are you thinking what I am thinking? Do you know what is coming up now? Of course, testing against external perturbations. Oh yes, the favorite pastime of the computer graphics researcher. Look. These folks are not only robust, but super patient as well. Look at these nasty tricks the scientists tried to pull on them. Not a single one worked. So good! Loving it! Look… whoop! Whoop! Alright, you get to sit down now. Whoop! Sorry, I had to. Okay, you can sit now. Good job! A+ for patience, little AI!

Now, not even this technique is perfect. The success rates are not a hundred percent, but they are typically above 90%, and I am fairly sure this can be improved, or at the very least filtered in a way that approaches nearly 100% for real applications. If we start messing with them, like throwing boxes at them or tripping them up, the success rate falls by about 3 to 10%. That is very respectable, especially given how badly we have treated them. Sorry about that. But we had to proceed with the testing.

So, in a new paper that only four people have seen yet, we finally get this age-old problem in computer animation solved. I can't believe it. What a time to be alive!

### Segment 2 (05:00 - 06:00)

Thanks for watching and for your generous  support, and I'll see you next time!
