❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers
📝 The showcased papers are available here:
https://research.nvidia.com/publication/2021-07_rearchitecting-spatiotemporal-resampling-production
https://research.nvidia.com/publication/2022-07_generalized-resampled-importance-sampling-foundations-restir
https://graphics.cs.utah.edu/research/projects/gris/
https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/
Link to the talk at GTC: https://www.nvidia.com/en-us/on-demand/session/gtcfall22-a41171/
If you wish to learn more about light transport, I have a course that is free for everyone, no strings attached:
https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/
❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu
Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/
Contents (4 segments)
Segment 1 (00:00 - 05:00)
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Or not quite. To be more exact, I have had the honor of holding a talk here at GTC, and today we are going to marvel at a seemingly impossible problem, and 4 miracle papers from scientists at NVIDIA. Why these 4? Could these papers solve the impossible? Well, we shall see together in a moment.

If we wish to create a truly gorgeous, photorealistic scene in computer graphics, we usually reach for a light transport simulation algorithm, and then, this happens. Oh yes, concept number one: noise! This is not photorealistic at all, not yet anyway. Why is that? Well, during this process, we have to shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images. This clears up over time, but it may take from minutes to days, even for a smaller scene. For instance, this one took us 3 full weeks to finish. 3 weeks! Yes, really. I am not kidding. Ouch.

Solving this problem in real time seems absolutely impossible, which has been the consensus in the light transport research community for a long while. So much so that at the most prestigious computer graphics conference, SIGGRAPH, there was even a course by the name “Ray tracing is the future and ever will be”. This was a bit of a wordplay, yes, but I hope you now have a feel for how impossible this problem seems. When I was starting out as a first-year PhD student, I was wondering whether real-time light transport would be a possibility within my lifetime. It was such an outrageous idea that I usually avoided even bringing up the question in conversation. And boy, if only I knew what we were going to be talking about today. Wow.

So, we are still at the point where these images take from hours to weeks to finish. And now, I have good news and bad news. Let’s go with the good news first. If you overhear some light transport researchers talking, you will hear the phrase “importance sampling” a great deal. This refers to choosing where to shoot these rays in the scene. For instance, you see one of those smart algorithms here, called Metropolis Light Transport. This is one of my favorites. It typically allocates these rays much more cleverly than previous techniques, especially on difficult scenes. But let’s go even smarter! This is my other favorite, Wenzel Jakob’s Manifold Exploration paper, at work here. This algorithm is absolutely incredible, and the way it develops an image over time is one of the most beautiful sights in all of light transport research.

So, if we understand correctly: the more complex these algorithms are, the smarter they can get; however, due to their complexity, they cannot be implemented so well on the graphics card. That is a big bummer. So, what do we do? Do we use a simpler algorithm and take advantage of the ever-improving graphics cards in our machines, or write something smarter and miss out on all of that?

So now, I can’t believe I am saying this, but let’s see how NVIDIA solved the impossible through 4 amazing papers. And that is how they created real-time algorithms for light transport. Paper number one: Voxel Cone Tracing. Oh my, this is an iconic paper that was one of the first signs of something bigger to come. Now, hold on to your papers, and look at this. Oh my goodness. That is a beautiful, real-time light transport simulation program.
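A quick technical aside before we continue with that paper: the noise described above is simply the error of a Monte Carlo estimator, and importance sampling is the trick of allocating samples where the integrand is large so that this error shrinks faster. Here is a minimal, self-contained Python sketch of both effects on a toy 1D integral standing in for the rendering equation; the peaked integrand and the Gaussian proposal density are illustrative assumptions of mine, not anything from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": a sharply peaked integrand standing in for a bright highlight.
# Each sample plays the role of one light ray shot into the scene.
def f(x):
    return np.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2))

# Closed-form reference value of the integral over [0, 1]
# (a Gaussian integral; the tails outside [0, 1] are negligible).
true_value = 0.01 * np.sqrt(2.0 * np.pi)

def uniform_estimate(n):
    # Shoot n rays at uniformly random locations: average f(x) / pdf(x), pdf = 1.
    x = rng.uniform(0.0, 1.0, n)
    return f(x).mean()

def importance_estimate(n):
    # Importance sampling: draw rays from a Gaussian that roughly matches the
    # peak, then divide by that pdf so the estimator stays unbiased.
    x = rng.normal(0.5, 0.02, n)
    pdf = np.exp(-((x - 0.5) ** 2) / (2 * 0.02 ** 2)) / (0.02 * np.sqrt(2.0 * np.pi))
    return (f(x) / pdf).mean()

# The residual error is the "noise"; importance sampling shrinks it much
# faster for the same number of rays.
for n in (100, 10_000, 1_000_000):
    u = abs(uniform_estimate(n) - true_value)
    i = abs(importance_estimate(n) - true_value)
    print(f"n = {n:>9}: uniform error {u:.2e}, importance error {i:.2e}")
```

In a real renderer the integrand is the incoming radiance over all possible light paths, and the sampling densities come from BSDFs, the light sources, or, in Metropolis-style methods, a Markov chain, but the variance story is the same.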
And it gets better, because this paper is from 2011. A more than 10-year-old paper, and it could already do all this. Wow! How is that even possible? We just discussed that we’d be lucky to have this in our lifetimes, and it seems that it was already here, 10 years ago! So, what is going on here? Is light transport suddenly solved? Well, not quite.
Segment 2 (05:00 - 10:00)
It does not solve the full light transport simulation problem; rather, it makes the problem a little simpler. How? Well, it takes two big shortcuts. Shortcut number one: it subdivides the space into voxels, small little boxes, and runs the light simulation on this reduced representation. Shortcut number two: it only computes 2 bounces for each light ray, which is pretty good, but not nearly as great as a full solution with potentially infinitely many bounces. It also uses tons of memory, so there are plenty of things to improve here, but my goodness, if this was not a quantum leap in light transport simulation, I really don’t know what is. This really shows that scientists at NVIDIA are not afraid of completely rethinking existing systems to make them better, and boy, isn’t this a marvelous example of that. And remember, all this in 2011. 2011! More than 10 years ago. Absolutely mind-blowing. And one more thing: this is the culmination of software and hardware working together, designing them for each other. This would not have been possible without it.

But, once again, this is not full light transport. So, can we be a little more ambitious, and hope for a real-time solution to the full light transport problem? Well, let’s have a look together and find out! And here is where paper number two comes to the rescue. In this newer work of theirs, they presented an amusement park scene that contains a total of over 20 million triangles, and it truly is a sight to behold. So let’s see, and! Oh, goodness! This does not take from minutes to days to compute; each of these images was produced in a matter of milliseconds! Wow. And, it gets better. It can also render this scene with 3.4 million light sources, and this method can render not just an image, but an animation of it, interactively. What’s more, the more detailed comparisons in the paper reveal that this method is 10 to 100 times faster than previous techniques, and it also maps really well onto our graphics cards.

Okay, but what is behind all this wizardry? How is this even possible? Well, the magic behind all this is a smarter allocation of the ray samples that we have to shoot into the scene. For instance, this technique does not forget what we did just a moment ago when we move the camera a little and advance to the next image. Thus, lots of information that is otherwise thrown away can now be reused as we advance the animation; you will find a small sketch of this idea at the end of this segment. Now note that there are so many papers out there on how to allocate these rays properly, and this field is so mature, that it truly is a challenge to create something that is just a few percentage points better than previous techniques. It is very hard to make even the tiniest difference. And to be able to create something that is 10 to 100 times better in this environment? That is insanity.

And this proper ray allocation has one more advantage. What is that? Well, have a look at this. Imagine that you are a good painter, and you are given this image. Now your task is to finish it. Do you know what this depicts? Hmm…maybe. But knowing all the details of this image is out of the question. Now, look, we don’t have to live with these noisy images; we have denoising algorithms tailored for light simulations. This one does some serious legwork with this noisy input, but even this one cannot possibly know exactly what is going on, because there is so much information missing from the noisy input. And now, if you have been holding on to your papers so far, squeeze that paper, because, look.
This technique can produce this image in the same amount of time. Now we’re talking! Now, let’s give it to the denoising algorithm, and…yes! We get a much sharper, more detailed output. Actually, let’s compare it to the clean reference image. Yes, yes, yes! This is much closer. This really blows my mind. We are now one step closer to proper, interactive light transport!
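As promised, here is the flavor of that “do not throw away the previous frame” idea in a heavily simplified Python sketch, in the spirit of the reservoir-based resampling (ReSTIR) that these papers build on. The toy target function, the candidate counts, and the single-pixel setup are illustrative assumptions on my part, not the papers’ actual implementation:

```python
import random

# Toy "target function" p_hat: the unnormalized contribution of a candidate
# light sample to a pixel. A real renderer would evaluate emission, BSDF,
# and geometry terms here; this stand-in is purely illustrative.
def p_hat(sample, pixel):
    return 1.0 + ((sample * 31 + pixel * 17) % 97) / 97.0

class Reservoir:
    """Streaming weighted reservoir: keeps one sample with probability
    proportional to its resampling weight, using O(1) memory."""
    def __init__(self):
        self.y = None      # the surviving sample
        self.w_sum = 0.0   # running sum of all weights seen so far
        self.M = 0         # how many candidates have streamed through

    def update(self, x, w, rng):
        self.w_sum += w
        self.M += 1
        if self.w_sum > 0.0 and rng.random() < w / self.w_sum:
            self.y = x

    def W(self, pixel):
        # Contribution weight of the surviving sample.
        if self.y is None or self.w_sum == 0.0:
            return 0.0
        return self.w_sum / (self.M * p_hat(self.y, pixel))

def render_pixel(pixel, prev, rng, n_new=8):
    r = Reservoir()
    # 1) Stream a few fresh, cheap candidates (uniform proposal over 1000 lights).
    for _ in range(n_new):
        x = rng.randrange(1000)
        proposal_pdf = 1.0 / 1000.0
        r.update(x, p_hat(x, pixel) / proposal_pdf, rng)
    # 2) Temporal reuse: fold in last frame's reservoir as one heavy candidate,
    #    so the work spent on its M previous candidates is not thrown away.
    if prev is not None and prev.y is not None:
        r.update(prev.y, p_hat(prev.y, pixel) * prev.W(pixel) * prev.M, rng)
        r.M += prev.M - 1   # count the merged history (real systems clamp this)
    return r

rng = random.Random(0)
prev = None
for frame in range(5):
    prev = render_pixel(pixel=42, prev=prev, rng=rng)
    print(f"frame {frame}: kept sample {prev.y}, history M = {prev.M}")
```

A real implementation would also reuse reservoirs from neighboring pixels (spatial reuse), re-validate the reused sample against the current frame’s geometry and lighting, and clamp M so that stale history cannot dominate the fresh candidates.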
Segment 3 (10:00 - 15:00)
Now note that I used the word “interactive” twice here. I did not say real time. And that is not by mistake. These techniques are absolutely fantastic, one of the bigger leaps in light transport research, but they still cost a little more than what production systems can shoulder. They are not quite real time yet. And, I hope you know what’s coming. Oh yes! Paper number three. Check this out! This is their more recent result on the Paris Opera House scene, which is quite detailed; there is a ton going on here. And, you are all experienced Fellow Scholars now, so when you see them flicking between the raw, noisy and the denoised results, you now know exactly what is going on. And, hold on to your papers, because all this takes about 12 milliseconds per frame. That is over 80 frames per second. Yes, yes! My goodness! That is finally in the real-time domain and then some! What a time to be alive!

Okay, so where is the catch? Our keen eyes see that this is a static scene. It probably can’t deal with dynamic movements and rapid changes in lighting, can it? Well, let’s have a look. Wow! I cannot believe my eyes. Dynamic movement, checkmark. And here, this is as much change in the lighting as we would ever want, and it can do this too.

And we are still not done yet. At this point, real time is fantastic; I cannot overstate how huge of an achievement that is. However, we need a little more to make sure that the technique works on a wide variety of practical cases. For instance, look here. Oh yes! That is a ton of noise, and it’s not only noise, it is the worst kind of noise: high-frequency noise. The bane of our existence. What does that mean? It means these bright fireflies. If you show those to a light transport researcher, they will scream and run away. Why is that? It is because these come from light paths that are difficult to get to, and hence take a ton more ray samples to clean up.

And you know what is coming! Of course, here is paper number four. Let’s see what it can do for us. Am I seeing correctly? That is so much better! This seemed nearly hopeless to clean up in a reasonable amount of time, and this…this might be ready to go as is with a good noise filter. How cool is that! Now, talking about difficult light paths, let’s have a look at this beautiful caustics pattern here. Do you see it? Well, of course you don’t! This region is so undersampled that not only can we not see it, it is hard to even imagine what should be there. So, let’s see if this new method can accelerate progress in this region. That is not true. That just cannot be true. When I first saw this paper, I could not believe it, and I had to recheck the results over and over again. This is at the very least 100 times more developed as a caustic pattern. Once again, with a good noise filter, it is probably ready to go as is. I absolutely love this one.

Now, note that there are still shortcomings. None of these techniques are perfect. Artifacts can still appear here and there, and around specular and glossy reflections, things are still not as clear as in the reference simulation. However, we now have real-time light transport, and not only that, but the direction we are heading in is truly incredible, and amazing new papers are popping up what feels like every single month. Don’t forget to apply The First Law Of Papers, which says that research is a process. Do not look at where we are; look at where we will be two more papers down the line. Also, NVIDIA is amazing at democratizing these tools and putting them into the hands of everyone.
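Circling back to those fireflies for a moment: the reason they take so many samples to clean up is that a rare, hard-to-find light path carries a huge contribution, which makes the variance of the estimate enormous. Here is a tiny Python illustration with a made-up toy distribution (the one-in-ten-thousand probability and the contribution values are assumptions for the demonstration, not measurements from the papers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of a tricky pixel: almost every traced path returns a modest
# value, but one path in ten thousand (a caustic found by sheer luck, say)
# carries a huge contribution. True mean: 1 * (1 - 1e-4) + 10000 * 1e-4 ≈ 2.
def trace_paths(n):
    rare = rng.random(n) < 1e-4
    return np.where(rare, 10_000.0, 1.0)

# Each batch plays the role of one pixel's sample budget. With small budgets,
# most pixels miss the rare path entirely, while the unlucky few that hit it
# blow up to roughly 100x brightness: those are the fireflies.
for n in (100, 10_000, 1_000_000):
    estimates = [trace_paths(n).mean() for _ in range(6)]
    print(f"n = {n:>9}:", "  ".join(f"{e:8.3f}" for e in estimates))
```

Smarter sampling, as in paper number four, attacks this at the source by finding such difficult paths deliberately instead of by accident, so the huge contributions stop being rare outliers.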
Their tech transfer track record is excellent. For instance, their marbles demo is already out there, and not many know that they already have a denoising technique that is online and ready to use for all of us. That one is a professional-grade tool right there.
Segment 4 (15:00 - 16:00)
Many of the papers that you have heard about today may see the same fate. So, real-time light transport for all of us? Sign me up right now! Thanks for watching and for your generous support, and I'll see you next time!