# NVIDIA’s New AI: Making Games Come Alive!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=PncTPQJDUDs
- **Date:** 17.06.2023
- **Duration:** 6:01
- **Views:** 114,873

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 

📝 Resources:
https://research.nvidia.com/publication/2023-06_neuralangelo-high-fidelity-neural-surface-reconstruction
https://arxiv.org/abs/2205.03943
https://brachiation-rl.github.io/brachiation/
https://github.com/brachiation-rl/brachiation
https://dl.acm.org/doi/10.1145/3528233.3530728

My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=PncTPQJDUDs) Intro

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today you are going to see glimpses of the future, many of which come from the amazing AI research papers that appeared not so long ago. First, NVIDIA just showcased their new system which

### [0:15](https://www.youtube.com/watch?v=PncTPQJDUDs&t=15s) AI Dialogue

can automatically come up with dialogue for a virtual character in a video game. First, the text that the character needs to say is synthesized by an AI, then a text-to-voice AI makes it actually say it in a human voice, and the mouth movements are also synthesized. Look.

It sounds a little stiff and could be a bit more natural, but this is only the first version of something that was previously nearly impossible. And now, here it is, and it may become part of the video games and perhaps even the virtual assistants of the future. This can be combined with Unreal Engine 5 and MetaHuman, which is an amazing system for creating virtual humans.

And, look at this. What is this? This is NVIDIA’s Neuralangelo, which performs something

### [1:25](https://www.youtube.com/watch?v=PncTPQJDUDs&t=85s) Neuralangelo

that researchers call photogrammetric neural surface reconstruction. What does that mean? It means that you can scan your environment with your phone camera, and out comes a 3D mesh that you can use to create a virtual world for video games, or have a videoconference that actually takes place in this environment. Not just a 2D image, but actually being there. Previous techniques really

### [1:55](https://www.youtube.com/watch?v=PncTPQJDUDs&t=115s) High Fidelity Reconstruction

struggled with this. Look, here is an input scene with a screw clamp. This is the Neural Implicit Surfaces paper from two years ago, and as you see, it was unable to reconstruct the details on the screw. A more recent technique, NeuralWarp from just one year ago, provides smoother results, that much is clear; however, the high-frequency details are unfortunately still lost. Now, with Neuralangelo, look. Wow. Now that is what I call a high-quality reconstruction. Screws finally look like screws! And the paper reveals that this particular scene was not just a fluke; the new technique is so much better than the previous ones on a number of different scenes. So, high-fidelity neural surface reconstruction is becoming more and more of a possibility today. Loving it. We will be able to create a digital version of any place on this planet and play with our friends there. How cool is that? What a time to be alive!
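The core idea behind these reconstruction methods is the implicit surface: the scene is represented as the zero level set of a signed distance function (SDF), and the mesh is extracted where that function crosses zero. The toy sketch below, with an analytic sphere standing in for the neural network that Neuralangelo actually fits to photos, only illustrates that idea; it is not the paper's method.

```python
import numpy as np

# Toy illustration of the implicit-surface idea: the surface is the
# set of points where a signed distance function (SDF) equals zero.
# Here the "learned" SDF is just an analytic sphere of radius 1.

def sdf_sphere(points: np.ndarray, radius: float = 1.0) -> np.ndarray:
    """Signed distance to a sphere at the origin:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points, axis=-1) - radius

# Sample a 3D grid and find cells whose SDF changes sign along x;
# those cells straddle the surface (a crude, one-axis "marching" step;
# a real pipeline would run marching cubes over all axes).
xs = np.linspace(-1.5, 1.5, 64)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
d = sdf_sphere(grid)                         # shape (64, 64, 64)
crossings = np.diff(np.sign(d), axis=0) != 0
print("surface cells found:", int(crossings.sum()))
```

The neural part of such methods replaces `sdf_sphere` with a network whose weights are optimized so that renderings of its zero level set match the input photos.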

### [3:00](https://www.youtube.com/watch?v=PncTPQJDUDs&t=180s) Animation

Okay, so generating speech is a possibility, and creating amazing virtual spaces is a possibility too. However, what about animation? Here is an amazingly fun paper that I would love to show to you. And it is about simulating locomotion for virtual gibbons. This is an AI-based method that can jump from vine to vine. Wait a second, this can’t do anything. Well, yes, but this is just the initial part of the training. It doesn’t quite know how to do this yet. Note that to the best of my knowledge, some of this is footage that is exclusive to us, so you can only see it here on Two Minute Papers. Nowhere else. But this doesn’t seem to work too well, does it? However, this is a SIGGRAPH paper, so you Fellow Scholars immediately know that this has to be real good. So, you know what? Let’s wait an hour for it to learn. Now, an hour later. My goodness, look at that! It can now jump from vine to vine, and hold on to your papers, because it gets even better. Look. It now understands that if it gets a couple more swings in before jumping, it can even do this. Wow. That is incredible. Make sure to have a look at the paper in the video description; it is a ton of fun. And I would like to send a huge thank you to the scientists at the University of British Columbia for also publishing the source code of this project for all of us, for free.

So, as you see, AI research is progressing so incredibly fast that I may have to showcase these works not one, but three per video to be able to keep up. What do you think? Did you enjoy this? Let me know in the comments below. So for now, let the experiments begin!

Thanks for watching and for your generous support, and I'll see you next time!
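The "an hour later" moment comes from the trial-and-error loop at the heart of reinforcement learning: a controller starts out random, gets scored by a reward, and keeps changes that score better. The brachiation paper trains a deep neural network inside a physics simulator; the stub below is only a minimal sketch of that loop, and everything in it (the reward, the single "swing strength" parameter, the ideal value 0.7) is made up for illustration.

```python
import random

# Minimal trial-and-error sketch: an untrained policy is a random
# guess, and training keeps random perturbations that improve the
# reward. Real RL optimizes millions of network weights, not one number.

def reward(strength: float) -> float:
    ideal = 0.7  # hypothetical swing strength that clears the gap
    return -abs(strength - ideal)

def train(iterations: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    best = rng.random()                          # untrained: random guess
    for _ in range(iterations):
        candidate = best + rng.gauss(0.0, 0.1)   # try a perturbation
        if reward(candidate) > reward(best):     # keep only improvements
            best = candidate
    return best

untrained = train(0)      # "the initial part of the training"
trained = train(2000)     # "an hour later"
print(reward(untrained), reward(trained))
```

Early on the policy fails because its guess is nowhere near the ideal; after enough accepted improvements its reward approaches zero, which is the toy analogue of the gibbon finally catching the next vine.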

---
*Source: https://ekstraktznaniy.ru/video/13134*