❤️ Check out Weights & Biases and take their great course for free: https://wandb.me/papercourse
📝 My paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD
Or this is the orig. Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu
Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/
#nvidia
Table of contents (2 segments)
Segment 1 (00:00 - 05:00)
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to talk about the incredible things NVIDIA showcased at their recent keynote at SIGGRAPH, the world’s premier computer graphics conference. Now, I am a computer graphics researcher by trade, I work in light transport simulations, in other words, ray tracing, so many of the things that you will see here today make me very, very happy. For instance, this is a showcase of what their Omniverse system can do now. This tool is for building virtual worlds. It can do a lot, from storing volumetric effects, to simulating them, to animating robot hands, materials, high-quality geometry, and hair simulations too. The incredible scenes that you see here were built with USD Composer, an Omniverse app for building large-scale scenes. And now, hold on to your papers, Fellow Scholars, because some of these virtual worlds are becoming incredibly convincing. Now what does that even mean? Well, I talked about simulations that are almost indistinguishable from reality in a little paper not so long ago, and yes, what you see here is a computer simulation. Not reality. The link to the paper is available in the video description. So, today we ask, can scientists at NVIDIA do something similar? How good are their virtual worlds? Let’s see what they have here. Number one, OpenUSD. USD stands for Universal Scene Description, and it supports creating virtual worlds in such a way that visual effects artists, designers, engineers, and roboticists can work together easily. For instance, this is a virtual car that is almost the same as the real car it represents, and it is assembled from the smallest individual parts. Then, artists can see what the car would look like in reality, create new variants of it, create a gorgeous backdrop to showcase the car, try different environments, and it is even possible to take it for a ride. 
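To make this a bit more tangible: a USD scene is authored in layers of "prims" that different departments can override and extend, which is what lets artists, engineers, and roboticists work on the same asset. Below is a hedged sketch of what a tiny USD text file (.usda) can look like; the prim names ("Car", "Body", "Wheels") are invented for illustration and are not taken from NVIDIA's demo:

```usda
#usda 1.0
(
    defaultPrim = "Car"
)

def Xform "Car" (
    kind = "assembly"
)
{
    def Mesh "Body"
    {
    }

    def Xform "Wheels"
    {
        def Mesh "FrontLeft"
        {
        }
    }
}
```

In a real pipeline, the geometry, materials, and physics for each prim would live in separate layers that compose together into the final scene.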
And what really blows my mind is not only that all this is possible, but that it is possible today, in real time. Everything runs as we are looking at it. Wow. Number two, now, if this system can create really detailed geometry, that is great. However, not everyone is a CAD designer, not everyone can model, or understand materials as a physicist does, but everyone can speak English. So, they have this new interface where all we need to know is English, and thus, we can now get a 3D floor plan, generate lighting through the free and open-source Blender, and through Adobe Firefly integration, we can get floor textures by just typing. I feel like we are heading toward a future where English is going to be the interface for almost everything. You won’t need to be an expert at everything, you will have an expert AI assistant at your fingertips with a ton of knowledge and infinite patience, ready to work for you. How amazing is that? And you can simulate not only how things look, but how things work too. Incredible. Number three, they also showed us something they call Workbench. This enables fine-tuning already existing AI models by giving them additional knowledge. For instance, with a text-to-image model, if we ask for a Toy Jensen in outer space, we get this. Wait a minute… that’s not the Jensen we are looking for, is it? Now, of course, this is because the model has never heard of Jensen before, so we can teach it about him using these 8 images, then ask again, and… bam! There we go! Into outer space we go! Of course, they also showcased shiny new hardware. Well, mostly shiny. And finally, my favorite. If I hadn’t seen anything else, just this one thing you are seeing here would have been enough for me to make a video about this keynote. 
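The core idea behind teaching a model a new concept from just 8 images is that the big pretrained model stays frozen, and only a small new parameter (for example, a concept embedding) is optimized against the examples. Here is a deliberately toy, hypothetical Python sketch of that idea; the "features" are fake scalars, not the output of any real model:

```python
# Toy illustration of few-shot fine-tuning: the pretrained model is frozen,
# and we only optimize one small new "concept" parameter to fit the examples.
# All numbers here are made up for illustration.

# Pretend these are scalar features a frozen model extracted from 8 photos.
example_features = [0.9, 1.1, 1.0, 0.95, 1.05, 1.02, 0.98, 1.0]

embedding = 0.0   # the single new trainable parameter
lr = 0.1
for step in range(200):
    # gradient of the loss 0.5 * mean (embedding - f)^2 over the examples
    grad = sum(embedding - f for f in example_features) / len(example_features)
    embedding -= lr * grad

# Gradient descent drives the embedding to the mean of the example features.
mean_feature = sum(example_features) / len(example_features)
print(abs(embedding - mean_feature) < 1e-6)  # → True
```

In a real system the "feature extractor" is a large frozen network and the embedding lives in its text-conditioning space, but the frozen-model-plus-small-update structure is the same.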
This is a video honoring some of the incredible works in computer graphics from the olden times: the legendary animated human hand from 1972 by Ed Catmull, co-founder of Pixar, and the first recursive ray tracing program by Turner Whitted from 1979. Imagine simulating refractions on a computer more than 40 years ago. Wow. And up to what we are capable of today. I am so happy to be able to share all this with you, and just imagine what we will
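The refraction step at the heart of Whitted-style ray tracing is just Snell's law applied to direction vectors. This is an illustrative modern sketch, not Whitted's original code; vectors are unit-length 3-tuples:

```python
import math

def dot(a, b):
    # Dot product of two 3-vectors represented as tuples.
    return sum(x * y for x, y in zip(a, b))

def refract(incident, normal, n1, n2):
    """Bend a unit incident direction at a surface with unit normal,
    going from a medium with index n1 into one with index n2.
    Returns the refracted direction, or None on total internal reflection."""
    eta = n1 / n2
    cos_i = -dot(incident, normal)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = math.sqrt(k)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))

# At normal incidence the ray passes straight through, whatever the media:
print(refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 1.0, 1.5))  # → (0.0, 0.0, -1.0)
```

Whitted's insight was to apply this (and the mirror-reflection analogue) recursively, spawning new rays at every glass or mirror surface, which is why his 1979 images could show refractions at all.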
Segment 2 (05:00 - 06:00)
be capable of just two more papers down the line. My goodness. What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!