# NVIDIA’s New AI: Video Game Graphics, Now 60x Smaller!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=aQctoORQwLE
- **Date:** 15.12.2022
- **Duration:** 6:48
- **Views:** 282,659
- **Source:** https://ekstraktznaniy.ru/video/13356

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 
❤️ Their mentioned post is available here: http://wandb.me/variable-bitrate

📝 The paper "Variable Bitrate Neural Fields" is available here:
https://nv-tlabs.github.io/vqad/

I have been trying Mastodon - not sure how to do this yet, but here goes: https://sigmoid.social/@twominutepapers

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Edward Unthank, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, 

## Transcript

### Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When creating a video game, an animated movie, or any believable virtual world, among many other things, we need geometry. Tons and tons of geometry, both to store and to render images of our characters and the environment.

And having lots of geometry means a big headache. You see, we can use traditional methods that store all the data needed for these objects, and they also render super fast. But there is a problem. What is the problem? Well, size! These can take gigabytes that will eat not only your storage but your cellular data plan as well.

Now, do not despair, modern learning-based methods are coming to the rescue in the form of NeRFs, that is, Neural Radiance Fields. Here, we don't need to store all the geometry, just a few photos, and miraculously they can fill in all the missing data. So … can that really be? Can we fly through these photos? Yes we can! This is amazing.

Now, it gets better: these are instant NeRFs. These can be trained in a matter of minutes, sometimes seconds, and also rendered in seconds. And look, we are light transport researchers over here, so our keen eyes see that specular and refractive objects work really well too. That is fantastic, and folks at Luma AI have already made an app that can create these NeRFs right on your phone. And some of these seem either production quality, or really close to it. Yes, that's right, all this comes from a paper that was published just two years ago. This is incredible progress in just two years.

But wait. These NeRF representations are much smaller than traditional techniques; however, they still put some strain on your cellular data plan. And don't forget, they require considerable horsepower to render quickly.

But, good news, neural networks are excellent at compression. For instance, NVIDIA already has a neural network that can help us compress even ourselves! You see, in this earlier work, the technique takes only the first image from our video, and it throws away the entire video afterwards! But before that, it stores a tiny bit of information from it: data on how our head moves over time and how our expressions change. That is an absolutely outrageous idea, except for the fact that it works. With this, the video for a video call is at least as detailed as with the traditional, state-of-the-art techniques, but uses a third as much data, or even less!

And get this: in this new paper, scientists at NVIDIA claim to solve both of our NeRF problems at the same time! Compact and fast NeRFs at the same time! Now, I will believe that when I see it. Can they put the incredible compression capabilities of neural networks to more use here? Let's see. Whoa, look at that! You see a previous NeRF-based method, and here, the new method. And can that really be? Am I seeing correctly, or am I dreaming? This new method requires 60 times less data to show us this geometry. And the quality, as described by the signal-to-noise ratio, is nearly the same. Not the same, but very close. And 60x cheaper. My goodness.

And it gets even better. That was just the compression part. What about the fast rendering part? Well, these models are so small, we can start downloading multiple versions of them at the same time, and the coarsest version can be shown to us after receiving only the first 10 kilobytes of data. That is equivalent to downloading less than one second of music. Wow. And then, as more data arrives, the finer details start to appear over time. And we are through the whole process very, very quickly. Incredible.
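To get a feel for where savings of this magnitude can come from, here is a minimal sketch of the vector-quantization idea the paper's title ("Variable Bitrate Neural Fields") refers to: instead of storing a learned feature vector at every grid corner, store one small codebook shared by the whole grid plus a few-bit index per corner. The grid size, bit width, and NumPy code below are illustrative assumptions, not the paper's implementation, and the training step that actually learns the codebook and indices is omitted entirely.

```python
import numpy as np

# Sketch of a vector-quantized feature grid: a small shared codebook of
# feature vectors, plus one short integer index per grid corner. All
# sizes are made up for illustration.

RES = 128       # grid corners per axis
FEAT_DIM = 8    # feature channels per corner
BITS = 4        # bits per stored index -> codebook with 2**BITS entries

rng = np.random.default_rng(0)
codebook = rng.normal(size=(2**BITS, FEAT_DIM)).astype(np.float32)
indices = rng.integers(0, 2**BITS, size=(RES, RES, RES)).astype(np.uint8)

def corner_feature(i: int, j: int, k: int) -> np.ndarray:
    """Fetch the feature vector of grid corner (i, j, k) through the codebook."""
    return codebook[indices[i, j, k]]

# Storage math: dense float32 features vs. codebook + packed 4-bit indices.
dense_bytes = RES**3 * FEAT_DIM * 4
vq_bytes = codebook.nbytes + RES**3 * BITS / 8
print(f"dense grid : {dense_bytes / 1e6:6.1f} MB")    # ~67.1 MB
print(f"vq grid    : {vq_bytes / 1e6:6.1f} MB")       # ~ 1.0 MB
print(f"compression: {dense_bytes / vq_bytes:.0f}x")  # ~64x
```

With these made-up sizes, the dense grid weighs in at about 67 megabytes versus roughly 1 megabyte for the quantized one, a ~64x saving, which is the same order of magnitude as the 60x figure quoted in the video.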
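The progressive loading can be sketched in the same spirit: if the file is laid out coarsest level first, a client can render something as soon as the first chunk arrives and refine as the rest streams in. The chunked file layout, chunk sizes, and the download stand-in below are assumptions for illustration, not the paper's actual streaming format.

```python
from dataclasses import dataclass

# Toy sketch of coarse-to-fine streaming: each level of detail is one chunk,
# ordered so the coarsest level arrives first. Chunk sizes are invented to
# mirror the "first 10 kilobytes" figure from the video.

@dataclass
class LodChunk:
    resolution: int  # grid resolution of this level of detail
    payload: bytes   # packed codebook indices for this level

def download(chunks):
    """Stand-in for a network stream that yields chunks in stored order."""
    yield from chunks

model_file = [
    LodChunk(16, bytes(10_000)),   # ~10 kB: enough for a first coarse render
    LodChunk(32, bytes(60_000)),
    LodChunk(64, bytes(400_000)),
]

received = 0
for chunk in download(model_file):
    received += len(chunk.payload)
    # As soon as any prefix of the file has arrived, we can show something.
    print(f"{received / 1000:6.1f} kB received -> render at {chunk.resolution}^3")
```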
But wait, we are experienced Fellow Scholars over here, so we know that there are other papers on this topic. For instance, what about Plenoxels? How does it compare to that? This is Plenoxels, an amazing paper that we talked about earlier in this series. By the way, interestingly, this earlier method did not use neural networks to achieve what you see here, which is incredible.

### Segment 2 (05:00 - 06:00)

And now, look at that! This new one can go at least as far with 20 megabytes of data as Plenoxels can with over 150 megabytes. And Plenoxels is not just some ancient paper, it is from just one year ago. Seeing this kind of improvement just one year and one more paper down the line, that is the power of human ingenuity. I love it.

And you see, this might be the future of imagery. We don't store the whole geometry of the objects anymore, we just take a few photos and let the AI fill in the rest. Bravo! What a time to be alive!

Thanks for watching and for your generous support, and I'll see you next time!
