NVIDIA’s Ray Tracer - Finally, Real Time! ☀️
7:46

Two Minute Papers · 03.08.2022 · 324,423 views · 13,059 likes


Video description
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers
📝 The paper "Rearchitecting Spatiotemporal Resampling for Production" is available here: https://research.nvidia.com/publication/2021-07_Rearchitecting-Spatiotemporal-Resampling
📝 Our paper with the spheres scene that took 3 weeks is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/
The denoiser: https://developer.nvidia.com/nvidia-rt-denoiser
❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu
Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/
#NVIDIA

Table of contents (5 segments)

Introduction

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I can't believe that I am saying this, but today we are going to see how NVIDIA is getting closer and closer to solving an almost impossible problem. And that is writing real-time light transport simulations.

So, what is that? And why is it nearly impossible? Well, if we wish to create a truly photorealistic scene in computer graphics, we usually reach out to a light transport simulation algorithm, and then, this happens. Oh yes, concept number one: noise! This is not photorealistic at all, not yet anyway. Why is that? Well, during this process, we have to shoot millions

The Problem

and millions of light rays into the scene to estimate how much light is bouncing around, and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images. This clears up over time, but it may take from minutes to days for this to happen, even for a smaller scene. For instance, this one took us 3 full weeks to finish. 3 weeks! Ouch. Now, earlier, we talked about this technique, which could take complex geometry and 3.4 million light sources, and it could render not just an image, but an animation of it, interactively.

But how? Well, the magic behind all this is a smarter allocation of the ray samples that we have to shoot into the scene. For instance, this technique does not forget what we did just a moment ago when we move the camera a little and advance to the next image. Thus, lots of information that would otherwise be thrown away can now be reused as we advance the animation.

And note that even then, these smooth, beautiful images are not what we get directly. If we look under the hood at the raw result that comes out of the simulation, we get something like this. Oh yes. Still a noisy image. But wait, don't despair! We don't have to live with these noisy images; we have denoising algorithms tailored for light simulations. This one does some serious legwork with this noisy input.

And, in a follow-up paper, they also went on to tackle these notoriously difficult photorealistic smoke plumes, volumetric bunnies, and even explosions, interactively. The results were once again noise filtered to near perfection. Not to perfection, but a step closer to it than before.

Now, note that I used the word interactively twice here. I did not say real time. And that is not by mistake. These techniques are absolutely fantastic, one of the bigger leaps in light transport research, but they still cost a touch more than what production systems can shoulder.
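The noise described here comes from Monte Carlo estimation: each light-path sample is correct on average but has high variance, and with too few samples that variance shows up as grain in the image. A minimal Python sketch of the principle (a toy model of one pixel, not the renderer's actual estimator — the `uniform(0, 2)` factor is just a stand-in for a noisy, unbiased sample):

```python
import random

def pixel_estimate(true_brightness, num_rays, rng):
    """Toy Monte Carlo estimate of one pixel's brightness.

    Each 'ray' is a random sample that is correct on average but has
    high variance, like a real light-path sample. Averaging more rays
    shrinks the error roughly as 1/sqrt(num_rays).
    """
    total = 0.0
    for _ in range(num_rays):
        total += true_brightness * rng.uniform(0.0, 2.0)  # unbiased, noisy sample
    return total / num_rays

rng = random.Random(0)
rough = pixel_estimate(0.5, 4, rng)            # very noisy estimate
converged = pixel_estimate(0.5, 200_000, rng)  # close to the true value 0.5
```

This is why clean images can take hours or weeks: halving the noise requires four times the samples.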
They are not quite real time yet. So, what did they do? Did they stop there? Well,
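The sample-reuse idea described above — keeping information from previous frames instead of discarding it — can be illustrated with a simple temporal accumulation. To be clear, this is only a sketch of the general principle; the paper's actual method is a far more sophisticated spatiotemporal resampling, not a plain running average:

```python
def temporally_accumulate(frames, alpha=0.5):
    """Blend each new noisy frame into a running history buffer.

    Keeping (1 - alpha) of the previous estimate reuses information
    from earlier frames, smoothing noise over time. This is a plain
    exponential moving average, not the paper's resampling method.
    """
    history = None
    smoothed = []
    for frame in frames:
        if history is None:
            history = list(frame)
        else:
            history = [(1 - alpha) * h + alpha * f
                       for h, f in zip(history, frame)]
        smoothed.append(list(history))
    return smoothed

# Alternating extreme pixel values settle toward their average over time.
result = temporally_accumulate([[0.0], [2.0], [0.0], [2.0]])
```

Real renderers must also reproject the history when the camera moves and reject stale samples, which is where most of the difficulty lies.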

Real Time

of course not, they rolled up their sleeves and continued. And now, I hope you know what's coming. Oh yes! Have a look at this newer paper they have in this area.

This is their result on the Paris Opera House scene, which is quite detailed; there is a ton going on here. And you are all experienced Fellow Scholars now, so when you see them flicking between the raw, noisy and the denoised results, you know exactly what is going on. And hold on to your papers, because all this takes about 12 milliseconds per frame. Yes yes! My goodness! That is finally in the real-time domain, and then some! What a time to be alive!

Okay, so where is the catch? Our keen eyes see that this is a static scene. It probably can't deal with dynamic movements and rapid changes in lighting, can it? Well, let's have a look. Wow! I cannot believe my eyes. Dynamic movement, checkmark. And here, this is as much changing in

Dynamic Movement

the lighting as anyone would ever want, and it can do this too. I absolutely love it.

And remember the amusement park scene from the previous paper? The one with 23 million triangles for the geometry and over 3 million light sources? Well, here are the raw results, and after denoising, this looks super clean. Wow.
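To put the quoted 12 milliseconds per frame in context, frame time converts to frame rate as 1000 / ms, so roughly 83 frames per second — comfortably past the ~30–60 FPS usually considered real time. A trivial helper (hypothetical, just the arithmetic):

```python
def frames_per_second(ms_per_frame):
    """Convert a per-frame render time in milliseconds to frames per second."""
    return 1000.0 / ms_per_frame

fps = frames_per_second(12.0)  # about 83 frames per second
```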

Conclusion

So, how long do we have to wait for this? This can't be real time, right? Well, all this takes about 12 milliseconds per frame. Again. And this is where I fell off my chair when reading this paper. Of course, not even this technique is perfect: the glossy reflections are a little blurry in places, and artifacts in the lighting can still appear. But if this is not a quantum leap in light transport research, I don't know what is.

Plus, if you wish to see some properly detailed comparisons against previous techniques, make sure to have a look at the paper. And if you have been holding on to your papers, now squeeze that paper, because everything that you see in this paper was done by two people. Huge congratulations, Chris and Alexey!

And if you are wondering whether we ever get to use these techniques, don't forget that their Marbles demo is already out there for all of us to use. And it gets better: for instance, not many know that they already have a denoising technique that is online and ready to use for all of us. This one is a professional-grade tool right there. This is really incredible; they have so many tools out there for us to use. I check what NVIDIA is up to daily, and I still quite often get surprised by how much they have going on.

So, finally, real-time light transport in our lifetimes? Oh yes, this paper is history in the making.

Thanks for watching and for your generous support, and I'll see you next time!
