NVIDIA’s Ray Tracer: Wow, They Nailed It Again! 🤯

Two Minute Papers · 21 June 2022 · 8:11 · 413,093 views · 12,956 likes

Video description
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro

📝 NVIDIA's paper "Fast Volume Rendering with Spatiotemporal Reservoir Resampling" is available here:
https://dqlin.xyz/pubs/2021-sa-VOR/
https://graphics.cs.utah.edu/research/projects/volumetric-restir/
https://research.nvidia.com/publication/2021-11_Fast-Volume-Rendering

🔆 The free light transport course is available here. You'll love it!
https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/

Volumetric path tracer by michael0884: https://www.shadertoy.com/view/NtXSR4

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background image credit: https://pixabay.com/photos/volcanic-eruption-ash-cloud-dramatic-1867439/
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

#NVIDIA

Table of contents (2 segments)

Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how NVIDIA’s incredible light transport algorithm can render these notoriously difficult photorealistic smoke plumes, volumetric bunnies, and even explosions interactively.

We are going to talk about this amazing paper in a moment, but first, note that NVIDIA can already render this absolutely beautiful marbles demo in real time, not only with real-time light transport, but also with our other favorite around here, real-time physics! This is just to say that tech transfer is happening here: the papers are real, and the papers that you see here really make it into real applications that everyone can use at home. Please note that they won’t say exactly what tech is under the hood here, but looking at their best light transport papers might be a good indication of what is to come in the future.

Now, back to the intro. What you are seeing here should not be possible at all! Why is that? Well, when we use a light transport simulation algorithm to create such an image, we get a photorealistic image, but as you see, not immediately. Not even close. It is a miracle of science that with this, we can shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and initially, the inaccuracies in our estimates show up as noise in these images. As we shoot more and more rays, this clears up over time. However, there are two problems. One, this can take hours to finish. And two, in the case of volumetric light transport, these light rays can bounce, scatter, and get absorbed inside a dense medium, and in that case, it takes much, much longer. Not only hours, sometimes even days. Oh my goodness.

So, how did NVIDIA pull this off? Well, they built on a previous technique of theirs that is capable of rendering 3.4 million light sources, and not just for one image, but it can even handle animation. Just look at all this beautiful footage. Now it would be so great to have a variant of this that works on volumetric effects, such as explosions and smoke plumes, but of course, anyone who has tried it before says that there is no chance that those could run interactively. No chance at all. Well, scientists at NVIDIA beg to differ. Check this out.

This is a previous technique, and it says 12 spp. That means 12 samples per pixel, which is more or less simulating 12 light rays for each pixel of these images. That is not a lot of information, so let’s have a closer look at what this previous method can do with that. Well, there is still tons of noise, and it flickers a lot when we rotate the camera around. I would say there is not much hope here; we would need hundreds, maybe even thousands of samples per pixel to get a clean image, not 12. And, look, it gets even worse. What? The new technique runs not thousands, not hundreds, and not even 12 samples per pixel in the same amount of time, but… really? Can this be? 1 sample per pixel? Can you do anything meaningful with that?

Well, hold on to your papers, and check this out. Wow, look at that! It can do much better with just one ray per pixel than the previous technique can do with 12. That is absolutely incredible. I really don’t know what to say. And it works not just on this example, but on many others as well. This is so far ahead of the previous technique that it seems like science fiction. I absolutely love it.
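For the curious Fellow Scholars, here is a minimal, hypothetical sketch of the idea above: a plain Monte Carlo estimator for one pixel, with a made-up `trace_one_ray` stand-in for an actual ray tracer. This is not the renderer from the paper; it only shows why the noise clears up as the sample count (spp) grows, roughly as 1 / sqrt(spp).

```python
import random

def estimate_pixel(radiance_sample, spp):
    """Monte Carlo estimate for one pixel: average `spp` independent
    light-ray samples. The error, visible as image noise, shrinks
    only as 1 / sqrt(spp), so clean images need many samples."""
    return sum(radiance_sample() for _ in range(spp)) / spp

# Hypothetical stand-in for tracing one ray: a noisy value whose
# true mean (the correct pixel brightness) is 0.5.
trace_one_ray = lambda: random.random()

for spp in (1, 12, 144, 1728):
    runs = [estimate_pixel(trace_one_ray, spp) for _ in range(8)]
    print(f"{spp:5d} spp -> estimates spread over {max(runs) - min(runs):.3f}")
```

Because the error only shrinks with the square root of the sample count, each extra digit of accuracy costs roughly a hundred times more rays; that is exactly why brute-force volumetric rendering can take hours or days.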
However, yes, I can hear you asking: Károly, these are still noisy, so why get so excited about all these incomplete results? Well, do you remember what we did with the previous NVIDIA light transport paper? We took these noisy inputs and plugged them into a denoising technique that is specifically designed for light transport algorithms. It tries to guess what is behind the noise. And as you see, these can help a ton. But you know, can they help so much that a still noisy input, a meager 1 sample per pixel,

Segment 2 (05:00 - 08:00)

can become usable? Well, let’s have a look. Oh my goodness. Look at that. The result is clearly not perfect, one light ray per pixel can hardly give us a perfect image, but after denoising, this is unreal. We are almost there right away! It has so much less flickering than the previous technique with many more samples, and we are experienced Fellow Scholars around here, so let’s also check the amount of detail in the image. And… whoa! There is no contest here.

This technique also pulls off all this wizardry by trying to reuse information that is otherwise typically thrown away. For instance, with no reuse, we get this baseline result, and if we reuse information from a previous frame in the animation, we get this. That is significantly better than the baseline. If we reuse previous rays spatially, that is also an improvement. So, these are two different kinds of improvements. Well, let’s add them together, and oh yes, now this is what we are here for. Look at how much more information there is in this image. So now, even these amazing volumetric scenes can be rendered interactively? I am out of words. What a time to be alive!

And, if you feel inspired by these results, I have a free Master-level course on light transport where we write a full light simulation program from scratch, and learn about physics, the world around us, and more. If you watch it, you will see the world differently. That is free education for everyone; that’s what I want. So, the course is available free of charge for everyone, no strings attached, check it out in the video description. Thanks for watching and for your generous support, and I'll see you next time!
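A note on where the denoiser sits in the pipeline: the denoisers shown in the video are specialized, typically learned models that also look at auxiliary buffers such as albedo and normals. As a rough, hypothetical stand-in, a plain box filter over a noisy 1 spp frame shows only the shape of the step, noisy radiance in, smoothed estimate out; it is nothing like a real light-transport denoiser.

```python
import numpy as np

def toy_denoise(noisy: np.ndarray, radius: int = 2) -> np.ndarray:
    """Toy stand-in for a light-transport denoiser: a plain box
    filter that averages each pixel with its (2r+1) x (2r+1)
    neighborhood. Real denoisers are far smarter; this only shows
    the input/output contract of the denoising step."""
    pad = np.pad(noisy, radius, mode="edge")
    h, w = noisy.shape
    out = np.empty_like(noisy)
    for y in range(h):
        for x in range(w):
            out[y, x] = pad[y:y + 2 * radius + 1,
                            x:x + 2 * radius + 1].mean()
    return out

# A flat gray "image" rendered at 1 spp: heavy per-pixel noise.
rng = np.random.default_rng(0)
frame = 0.5 + rng.normal(0.0, 0.3, size=(64, 64))
print("noise before:", frame.std().round(3),
      "after:", toy_denoise(frame).std().round(3))
```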
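As for the reuse trick itself, the paper builds on spatiotemporal reservoir resampling (ReSTIR). Below is a minimal, hypothetical sketch of the core data structure, a size-1 weighted reservoir, and of how merging reservoirs gives temporal and spatial reuse almost for free. The real method adds the careful candidate-count bookkeeping and unbiased weighting needed for volumes, all of which is omitted here.

```python
import random

class Reservoir:
    """Size-1 weighted reservoir: it sees many candidates but keeps
    only one, each surviving with probability proportional to its
    weight. Memory stays constant no matter how many candidates flow
    through, which is what makes cheap reuse possible."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0

    def update(self, candidate, weight):
        self.w_sum += weight
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = candidate

def merge(a, b):
    """Fold one reservoir into another as a single heavy candidate.
    Temporal reuse merges this pixel's reservoir from the previous
    frame; spatial reuse merges reservoirs from nearby pixels."""
    out = Reservoir()
    for r in (a, b):
        if r.sample is not None:
            out.update(r.sample, r.w_sum)
    return out

# Each frame traces just ONE new candidate path per pixel, then
# folds in last frame's reservoir, so history accumulates cheaply.
prev = Reservoir()
for frame in range(5):
    cur = Reservoir()
    cur.update(candidate=f"path@frame{frame}", weight=random.random())
    cur = merge(cur, prev)  # temporal reuse
    prev = cur
print("surviving sample:", prev.sample, "total weight:", round(prev.w_sum, 3))
```

The key point is that a merge costs almost nothing, so a pixel can effectively benefit from far more candidates than the single new ray it traced this frame, which is how 1 spp can beat 12 spp here.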
