# NVIDIA’s DLSS 3.5: This Should Be Impossible!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=hr85Lc_WT38
- **Date:** 19.09.2023
- **Duration:** 8:28
- **Views:** 389,252

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.me/2mp

📝 My paper on neural rendering:
https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/

My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or this is the orig. Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

Digital trends on DLSS 3: https://www.digitaltrends.com/computing/why-i-leave-dlss-3-off-in-games/

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Gaston Ingaramo, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Chapters:
0:00 What is DLSS?
0:58 Neural rendering
1:50 DLSS enters the scene
2:39 Step 1: Super resolution
2:56 Step 2: Optical flow
3:41 Ray Reconstruction
5:02 Results
6:09 Who gets this?
6:59 Not perfect
7:36 First Law of Papers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

#nvidia #dlss3

## Contents

### [0:00](https://www.youtube.com/watch?v=hr85Lc_WT38) What is DLSS?

You hear it everywhere: DLSS this, DLSS that. But what is it, and why is it doing the impossible? This AI-based technology is being talked about in the media a great deal, and it is supposedly a way of dramatically speeding up computer games and virtual worlds. So, does this really work? Well, when Jensen Huang, NVIDIA's CEO, says that soon every single pixel on your screen for these games will not be rendered but generated, presumably by an AI, that really got my attention. That sounds insane. And if, hearing this, you think that this cannot possibly be true, I don't blame you for a second. We can't just create all this footage; there has to be a proper system that can compute reflections, diffuse interactions, and all that, right? Well, not quite. In 2018, I had

### [0:58](https://www.youtube.com/watch?v=hr85Lc_WT38&t=58s) Neural rendering

a little project where I wanted to generate photorealistic images of material models quickly. Generating an image through ray tracing took up to 40 to 60 seconds, and that was a bit too long for my taste. So I tried to write a neural renderer that could take just the description of a material model and immediately, in real time, synthesize what it would look like. And it was able to do all this not in 60 seconds, but in 3 milliseconds. Yes, it was 20,000 times faster. However, it was limited to this very scene. So how did this research field improve in the last 5 years? Do we have anything more usable now?
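The idea behind such a neural renderer can be sketched in a few lines. This is a toy numpy sketch, not the actual method from the paper linked above: a small network maps a material descriptor directly to pixels, skipping ray tracing entirely. All sizes and names here are made up for illustration, and the weights are random stand-ins for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical material descriptor: e.g. albedo, roughness, specular terms.
PARAM_DIM, IMG_SIDE, HIDDEN = 8, 16, 64

# Randomly initialized weights stand in for a trained network.
W1 = rng.normal(0, 0.1, (PARAM_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, IMG_SIDE * IMG_SIDE * 3))
b2 = np.zeros(IMG_SIDE * IMG_SIDE * 3)

def neural_render(material_params):
    """Map a material descriptor straight to an RGB image (no ray tracing)."""
    h = np.tanh(material_params @ W1 + b1)        # hidden features
    rgb = 1 / (1 + np.exp(-(h @ W2 + b2)))        # pixel values in [0, 1]
    return rgb.reshape(IMG_SIDE, IMG_SIDE, 3)

img = neural_render(rng.uniform(size=PARAM_DIM))
print(img.shape)  # (16, 16, 3)
```

A single forward pass like this is just a handful of matrix multiplications, which is why it can run in milliseconds where a ray tracer needs seconds.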

### [1:50](https://www.youtube.com/watch?v=hr85Lc_WT38&t=110s) DLSS enters the scene

Oh boy, if only I could tell you. Well, meet DLSS: Deep Learning Super Sampling. This is a system where hardware and software work together to create an incredible experience where games and virtual worlds really run faster than should be possible. How? Well, as of DLSS version 3, get this: by generating seven out of every eight pixels that appear on the screen. Yes, more than 85% of the pixels are just generated, not computed by, for instance, ray tracing or traditional techniques. Yes, that sounds flat-out impossible, but it is possible.
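One way the seven-out-of-eight figure can work out (my own back-of-the-envelope arithmetic, not an official NVIDIA breakdown): if super resolution renders one in four pixels of each frame, and frame generation renders only every other frame, then only one in eight displayed pixels is traditionally rendered.

```python
# Back-of-the-envelope: 4x super resolution renders 1 in 4 pixels per frame,
# and frame generation renders only every other frame.
rendered = (1 / 4) * (1 / 2)
generated = 1 - rendered
print(f"rendered {rendered:.1%}, generated {generated:.1%}")
# rendered 12.5%, generated 87.5%
```

That 87.5% matches the "more than 85%" figure quoted in the video.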

### [2:39](https://www.youtube.com/watch?v=hr85Lc_WT38&t=159s) Step 1: Super resolution

The first step is running super resolution in real time. This means that a coarse image goes in, and an AI finds out: if we pretended that this is a fine image instead, what are the details that are missing? And it synthesizes those details.
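The super-resolution step can be sketched like this. This is a minimal illustration, not DLSS itself: a naive upscale produces the fine-resolution canvas, and a network (here a placeholder returning zeros) would add the missing high-frequency detail.

```python
import numpy as np

def upscale_nearest(img, factor=2):
    """Baseline: replicate each coarse pixel into a factor x factor block."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def super_resolve(coarse, predict_detail, factor=2):
    """Upscale, then let a network add the high-frequency detail the coarse
    image is missing. `predict_detail` stands in for the trained model."""
    base = upscale_nearest(coarse, factor)
    return base + predict_detail(base)

coarse = np.random.default_rng(1).uniform(size=(4, 4))
fine = super_resolve(coarse, lambda x: np.zeros_like(x))  # zero-residual stand-in
print(fine.shape)  # (8, 8)
```

With a real trained model in place of the lambda, the residual is exactly the synthesized detail the video describes.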

### [2:56](https://www.youtube.com/watch?v=hr85Lc_WT38&t=176s) Step 2: Optical flow

Second: optical flow. We can look at two adjacent frames of the video game and try to estimate what has happened between them and where things are going. With this, entire intermediate frames can be synthesized, creating the illusion that the game is running smoother than our hardware can run it. Combining the two together with a pool of very few pixels that are actually computed, both image quality and smoothness can be improved at the same time. What a time to be alive! But it does not stop there; it gets better. They have also announced DLSS 3.5, so hopefully all this can be improved even further.
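The frame-generation idea can be sketched as a warp along the estimated motion. This is a simplified illustration under my own assumptions, not NVIDIA's implementation: given a per-pixel flow field from frame A to frame B, the in-between frame is synthesized by sampling frame A half a motion step back.

```python
import numpy as np

def interpolate_midframe(frame_a, flow):
    """Synthesize the frame halfway between frames A and B.
    flow[y, x] = (dy, dx), the estimated motion of that pixel from A to B."""
    h, w = frame_a.shape[:2]
    mid = np.empty_like(frame_a)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            # Backward warp: the pixel now at (y, x) came from half a step back.
            sy = int(np.clip(round(y - dy / 2), 0, h - 1))
            sx = int(np.clip(round(x - dx / 2), 0, w - 1))
            mid[y, x] = frame_a[sy, sx]
    return mid

frame_a = np.arange(16, dtype=float).reshape(4, 4)
still = interpolate_midframe(frame_a, np.zeros((4, 4, 2)))  # no motion
print(np.array_equal(still, frame_a))  # True
```

A real system also blends information from both frames and handles occlusions, which is where the estimation can fail and artifacts appear.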

### [3:41](https://www.youtube.com/watch?v=hr85Lc_WT38&t=221s) Ray Reconstruction

This happens through something that they call ray reconstruction. Oh my goodness, this is going to be so good. But how? Well, normally when we perform ray tracing, we simulate the paths of millions and millions of rays and bounce them around in the scene, and if we can't afford to wait for up to hours at a time for an image, we have to settle for a really noisy image like this. Denoising techniques exist that try to guess what the missing information could be, but they are not perfect, not even close. Some important details between objects can be missed entirely, and reflections can still be a lot less clear than in a full simulation. And unfortunately, we have a problem: it gets even worse. This image will undergo an upscaling step, where these errors get magnified even more.

Denoising and upscaling are two separate steps, typically done by two separate models. So why not do both of them with the same AI model? And now, ray reconstruction enters the scene. It learned from a ton more information than the previous models; five times more training data was given to it.
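The two-stage problem described above can be made concrete with a toy pipeline. This is my own illustration, not NVIDIA's code: a box blur stands in for a hand-tuned denoiser, naive pixel replication stands in for upscaling, and a placeholder lambda stands in for a joint trained network that would replace both steps at once.

```python
import numpy as np

def box_denoise(img, k=1):
    """Stand-in denoiser: a box blur. Like a hand-tuned denoiser, it removes
    noise but also destroys fine, high-frequency detail."""
    h, w = img.shape
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy : k + dy + h, k + dx : k + dx + w]
    return out / (2 * k + 1) ** 2

def upscale(img, f=2):
    """Naive upscaling: replicate each pixel into an f x f block."""
    return img.repeat(f, axis=0).repeat(f, axis=1)

def two_stage(noisy):
    # Separate models: any detail the denoiser destroys is then magnified.
    return upscale(box_denoise(noisy))

# A joint model (the ray-reconstruction idea) would see the noisy samples
# directly and output the clean, upscaled image in one pass; this lambda is
# only a placeholder for such a trained network.
joint_model = lambda noisy: upscale(noisy)

noisy = np.random.default_rng(2).uniform(size=(4, 4))
print(two_stage(noisy).shape)  # (8, 8)
```

The point of the single model is that it never has to hand a detail-starved intermediate image to a second stage: whatever high-frequency information survives in the raw samples is available until the very end.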

### [5:02](https://www.youtube.com/watch?v=hr85Lc_WT38&t=302s) Results

And look at this step: this one is right on the money. It is tailored to retain high-frequency information, fine details, to make sure that before the upscaling step we have the highest-quality information available. So, is it better than what the denoisers did before? Oh my goodness, look at that: we finally got back that shadowy region that the denoisers previously filled in with incorrect information. Yummy, so good! And that ability to retain high-frequency information can also be witnessed here. Loving it.

Now, note that ideally, here we talk about peer-reviewed research papers, where we can see all the weak points and failure cases; that is my home turf. This is not a research paper, so it is harder for me to find the flaws. So bear in mind that it may have weaknesses that were not shown here. And in the meantime, this technology is being handed out to millions and millions of people all around the world, and it is incredible. So, this will be

### [6:09](https://www.youtube.com/watch?v=hr85Lc_WT38&t=369s) Who gets this?

available only for the shiniest, newest graphics card owners, right? Well, hold on to your papers, fellow Scholars, because here comes the best part: no. This comes to all RTX graphics cards, even the older ones that you can pick up for a couple hundred bucks. This can breathe new life into aging hardware, which is highly appreciated. Note that the particular game or app you're running has to be ready for DLSS 3.5 for all this magic to happen. Now, I am a light transport researcher by trade; I dream with rays of light and ray tracing, if you will. So I am incredibly happy to see this technology make it into the hands of real users in the real world in such an incredible way. And make

### [6:59](https://www.youtube.com/watch?v=hr85Lc_WT38&t=419s) Not perfect

no mistake: this is not perfect by any means. There are games where it does not appear to work very well, and some people prefer not to use it at all. However, note that this is still nascent technology, and it can already synthesize more than 85% of these frames, which is something that I thought would only be possible in a science fiction movie. Maybe not even there. I am absolutely stunned by these results, even with their weaknesses. And don't forget to invoke the First Law of Papers.

### [7:36](https://www.youtube.com/watch?v=hr85Lc_WT38&t=456s) First Law of Papers

The First Law of Papers says that research is a process: do not look at where we are, look at where we will be two more papers down the line.

Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.me/2mp or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you.

---
*Source: https://ekstraktznaniy.ru/video/13032*