❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers
📝 The paper "Real-Time Neural Appearance Models" is available here:
https://research.nvidia.com/labs/rtr/neural_appearance_models/
📝 My PhD thesis "Photorealistic Material Learning and Synthesis" is available here:
https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-learning-and-synthesis/
My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD
Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Gaston Ingaramo, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu
Károly Zsolnai-Fehér's research works: https://cg.tuwien.ac.at/~zsolnai/
#nvidia
Table of Contents (2 segments)
Segment 1 (00:00 - 05:00)
Wow, I am really surprised by this paper. You see, we know from an earlier paper that in virtual worlds, glittery materials like this, and brushed aluminum surfaces like this can be rendered with ray tracing based techniques. Light simulations on a computer, if you will. That is wonderful, but I am not surprised by that. What I am surprised about is this paper. This one does something truly incredible. Hold on to your papers Fellow Scholars because here is a simulated ceramic material with multiple layers, with fingerprints, with dust, with smudges, oh my goodness, I am going to pass out. Look at that! It can also simulate the glitter on the plastic handle here, and the metal part of the cheese slicer is simulated down to the tiniest little scratches of the surface. Absolutely loving it. Note that these are all computer simulations of virtual worlds. Not real photographs. This is beauty that only a proper computer graphics research paper can offer. Unmatched. And for the teapot, the metal handle was also something else. When I started my PhD in light transport simulation in 2013, I did not think that we will be able to do this kind of graphics in my lifetime, and about 10 years later, here it is. I can’t believe it. So, with this technique, we can capture real materials, and layer by layer, put them into a computer program where we can simulate them…well, how quickly can we simulate them? I bet we have to wait for hours for every single image, right? Well, not quite. Let me explain. This is a neural network-based technique, where we take a stupendously large amount of material reflectance data, for 40 billion different training samples. And all this takes…wait, what? 4-5 hours per material to train on one commodity graphics card? How is that even possible? Well, let’s pop the hood and take a closer look. Oh yes. The neural network is tiny, very lightweight presumably for quick training and running, we will have a look at that in a moment. 
However, it is compressing an enormous amount of data: 2.5 billion latent parameters. And after the training is done, we can use these materials quickly for as long as we wish. Now, in summary, the neural network behind this is a tiny, lightweight encoder that understands and compresses this enormous amount of reflectance information down into something smaller and more manageable. So, we have two more important questions left. One, is it as good as a true, reference simulation? I can't wait, let's have a look together. This is what it looks like compared to the reference simulation. Now, have a look here. You see, this difference does not mean that the new technique is less sparkly; it's not worse. It is actually better, because it has less noise, something that is inherent to real light transport simulations. Ray tracers, if you will. You can witness it on a simple diffuse surface too, which is not meant to be sparkly at all. Wow, this means that it is nearly indistinguishable from the true reference simulation, and is even faster. So, how much faster? Well, get this, it is 2-10 times faster than the reference solution, which is incredible. So, question number two. What does this speed look like? Well, look! When we move the camera around in real time, we get a really noisy image, but, look! As we stop, we have more time to simulate more rays to clean up this image, which happens almost immediately. There is still noise in these images, but don't forget, we have many powerful noise filtering techniques that are specifically tailored for ray tracing algorithms, so I am very hopeful that one or two more papers down the line, we will be able to do this in real time. Yes, that's right. I didn't think this would be possible in my lifetime, and through the power of computer graphics, AI research, and human ingenuity, it is not only possible, but it will be possible
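To make the "compress reflectance into latents, then evaluate cheaply at render time" idea concrete, here is a toy sketch in NumPy. This is NOT the paper's architecture: the latent size, hidden width, and the two-layer decoder are all illustrative assumptions, and the weights are random stand-ins for a trained network. It only shows the shape of the idea: fetch a small latent code for a texel, feed it plus the light and view directions through a tiny network, and get back an RGB reflectance value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes -- illustrative assumptions, not the published architecture.
LATENT_DIM = 8      # latent code stored per texel of the material texture
DIR_DIM = 6         # incoming + outgoing direction (3 + 3 components)
HIDDEN = 32         # tiny hidden layer: evaluation must stay cheap per shade

# Random weights stand in for the learned network.
W1 = rng.standard_normal((LATENT_DIM + DIR_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1
b2 = np.zeros(3)

def decode_brdf(latent, wi, wo):
    """Map a texel's latent code plus light/view directions to RGB reflectance."""
    x = np.concatenate([latent, wi, wo])
    h = np.maximum(x @ W1 + b1, 0.0)     # ReLU hidden layer
    return np.maximum(h @ W2 + b2, 0.0)  # clamp to non-negative reflectance

# One shading query: the latent comes from the compressed latent texture,
# the directions come from the ray tracer's hit record.
latent = rng.standard_normal(LATENT_DIM)
wi = np.array([0.0, 0.0, 1.0])           # light direction
wo = np.array([0.577, 0.577, 0.577])     # view direction
rgb = decode_brdf(latent, wi, wo)
print(rgb.shape)  # (3,)
```

The key property this sketch illustrates is that after training, each shading query is just two small matrix multiplies, which is why the technique can approach real-time frame rates while the bulky measured reflectance data stays compressed in the latents.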
Segment 2 (05:00 - 06:00)
in real time. Pinch me. What a time to be alive! This was Two Minute Papers with Dr. Károly Zsolnai-Fehér. Subscribe if you wish to see more.