# Intel’s New AI: Amazing Ray Tracing Results! ☀️

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=VxbTiuabW0k
- **Date:** October 22, 2022
- **Duration:** 6:41
- **Views:** 125,741
- **Source:** https://ekstraktznaniy.ru/video/13411

## Description

❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum

📝 The paper "Temporally Stable Real-Time Joint Neural Denoising and Supersampling" is available here:
https://www.intel.com/content/www/us/en/developer/articles/technical/temporally-stable-denoising-and-supersampling.html

📝 Our earlier paper with the spheres scene that took 3 weeks:
https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin At

## Transcript

### Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This is my happy episode. Why is that? Well, of course, because today we are talking about light transport simulations, and in particular, Intel's amazing new technique that can take this, and make it into…this! Wow. It can also take this, and make it into…this. My goodness, this is amazing.

But wait a second, what is going on here? What are these noisy videos for, and why? Well, if we wish to create a truly gorgeous photorealistic scene in computer graphics, we usually reach out to a light transport simulation algorithm, and then, this happens. Oh no! We have noise. Tons of it. But why? Well, during the simulation, we have to shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images. This clears up over time, but it may take a long time. How do we know that? Well, have a look at the reference simulation footage for this paper. See? There is still some noise in here. I am sure this should clean up over time, but no one said that it would do so quickly. A video like this might require hours to days to compute. For instance, this is from our previous paper that took 3 weeks to finish, and it ran on multiple computers at the same time.

So, is all hope lost for these beautiful photorealistic simulations? Well, not quite! Instead of waiting for hours or days, what if I told you that we can just wait for a small fraction of a second, about 10 milliseconds? It will produce this. And then, run a previous noise filtering technique that is specifically tailored for light transport simulations, and what do we get? Probably not much, right? I can barely tell what I should be seeing here. So, let's see a previous method. Whoa! That is way better. I was barely able to guess what these are, but now we know: gratings. Great!

So, we don't have to wait for hours to days for a simulated world to come alive in a video like this, just a few milliseconds. At least for the simulation; we don't yet know how long the noise filtering takes.

And now, hold on to your papers, because this was not today's paper's result. I hope this one can do even better. And, look, instead, it can do this. Wow. This is so much better! And here is the result of the reference simulation for comparison; this is the one that takes forever to compute.

Let's also have a look at the videos and compare them. This is the noisy input simulation. Wow, this is going to be hard. Now, the previous method. Yes, this is clearly better, but there is a problem. Do you see the problem? Oh yes, it smoothed out the noise, but it smoothed the details too. Hence, a lot of them are lost. So, let's see what Intel's new method can do instead. Now we're talking! So much better. I absolutely love it. It is still not as sharp as the reference simulation; however, in some regions, depending on your taste, it might even be more pleasing to the eye than the reference.

And it gets better! This technique performs not only denoising, but upsampling too. This means that it is able to create a higher-resolution image with more pixels than the input footage. Now, get ready: one more comparison, and I'll tell you how long the noise filtering took.

Whoa, I wonder what it will do with this noisy mess. I have no idea what is going on here. And neither does this previous technique. And this is not some ancient technique; this previous method is the Neural Bilateral Grid, a learning-based method from just two years ago. And now, have a look at this. My goodness! Is this really possible? So much progress just one more paper down the line! I absolutely love it. So good!
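The noise the narration describes comes from Monte Carlo estimation: each pixel averages randomly sampled light rays, and the error of that average shrinks only as one over the square root of the sample count. Below is a minimal, self-contained Python sketch of that effect on a toy one-dimensional "scene"; the function names are hypothetical, and this illustrates the variance behavior only, not the renderer used in the video.

```python
# Illustrative sketch (not the paper's renderer): a 1-D Monte Carlo
# estimate of incoming light, showing why a few samples per pixel
# look noisy. A real path tracer integrates over light paths in 3-D,
# but the error behaves the same way.
import math
import random

def incoming_radiance(direction: float) -> float:
    """Toy 'scene': radiance arriving from a direction in [0, 1)."""
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * direction)

def estimate_pixel(num_rays: int) -> float:
    """Average num_rays random rays -- the basic Monte Carlo estimator."""
    total = 0.0
    for _ in range(num_rays):
        total += incoming_radiance(random.random())
    return total / num_rays

reference = 0.5  # exact integral of the toy radiance over [0, 1)
for spp in (1, 16, 256, 4096):
    # Error shrinks roughly as 1/sqrt(spp): 4x more rays, only ~2x less noise.
    err = abs(estimate_pixel(spp) - reference)
    print(f"{spp:5d} samples/pixel -> error ~ {err:.4f}")
```

That square-root convergence is why the clean reference can take hours to days, while a roughly one-sample-per-pixel frame is ready in about 10 milliseconds.

The narration also notes that the technique upsamples as well as denoises. As a rough illustration of what "joint denoising and supersampling" means at the input/output level, here is a toy PyTorch sketch: a noisy low-resolution frame goes in, and a cleaner frame at twice the resolution comes out. The layer stack is a placeholder of my own, not Intel's network; methods in this family typically also consume auxiliary buffers (such as albedo, normals, and motion vectors) and recurrent state for temporal stability.

```python
# Hypothetical sketch of the *shape* of a joint denoise + upsample step.
# The tiny conv stack is a placeholder, NOT Intel's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDenoiseUpsample(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        # Upsample first, then let the convolutions clean up the result.
        up = F.interpolate(noisy, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        return self.net(up)

frame = torch.rand(1, 3, 270, 480)   # stand-in for a noisy low-res frame
out = ToyDenoiseUpsample()(frame)
print(out.shape)                     # torch.Size([1, 3, 540, 960])
```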

### Segment 2 (05:00 - 06:00)

So, how long do we have to wait for an image like this? Still hours to days? Well, not at all. This all runs not only in real time, it runs faster than real time! Yes, that means about 200 frames per second for the new noise filtering step. And remember, the light simulation part typically takes 4-12 milliseconds on these scenes; this is the noisy mess that we get. And just 5 milliseconds later, we get this. I cannot believe it. Bravo! So, real-time light transport simulations from now on? Oh yes, sign me up right now! What a time to be alive! So, what do you think? Let me know in the comments below!

Thanks for watching and for your generous support, and I'll see you next time!
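A quick sanity check of the numbers quoted above: 5 milliseconds of filtering corresponds to the stated 200 frames per second, and adding the 4-12 millisecond simulation step gives a rough end-to-end rate. The totals below are simple arithmetic from the narration's figures, not measurements from the paper.

```python
# Back-of-the-envelope check of the timings quoted in the video.
denoise_ms = 5.0
print(f"Denoising alone: {1000.0 / denoise_ms:.0f} fps")  # ~200 fps

for sim_ms in (4.0, 12.0):
    total = sim_ms + denoise_ms
    print(f"{sim_ms:.0f} ms simulation + {denoise_ms:.0f} ms denoising "
          f"= {total:.0f} ms -> ~{1000.0 / total:.0f} fps end to end")
```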
