# Differentiable Rendering is Amazing!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=tGJ4tEwhgo8
- **Date:** 14.12.2019
- **Duration:** 4:56
- **Views:** 125,486

## Description

❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers 

The blog post they mention is available here: https://www.wandb.com/articles/p-picking-a-machine-learning-model

📝 The paper "Reparameterizing discontinuous integrands for differentiable rendering" is available here:
https://rgl.epfl.ch/publications/Loubet2019Reparameterizing

📝 Our "Gaussian Material Synthesis" paper is available here:
https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/

The free Rendering course on light transport is available here:
https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
https://www.patreon.com/TwoMinutePapers

Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

#neuralrendering

## Contents

### [0:00](https://www.youtube.com/watch?v=tGJ4tEwhgo8) Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This beautiful scene is from our paper by the name Gaussian Material Synthesis, and it contains more than 100 different materials, each of which has been learned and synthesized by an AI. None of these daisies and dandelions are alike; each of them has a different material model.

Normally, to obtain results like this, an artist has to engage in a direct interaction with an interface like the one you see here. It contains a ton of parameters, and to be able to use it properly, the artist needs years of experience in photorealistic rendering and material modeling. But, unfortunately, the problem gets even worse. Since a proper light simulation program needs to create an image with the new material parameters, this initially results in a noisy image that typically takes 40 to 60 seconds to clear up. We have to wait out these 40 to 60 seconds for every single parameter change that we make, which would take several hours in practical use cases.

The goal of this project was to speed up workflows like this by teaching an AI the concept of material models, such as metals, minerals, and translucent materials. With our technique, we first show the user a gallery of random materials; the user assigns a score to each of them ("I liked this one, I didn't like that one"), and an AI learns these preferences and recommends new materials. We also created a neural renderer that replaces the light simulation program and creates a near-perfect image of the output in about 4 milliseconds. That's not just real time, that's ten times faster than real time! It is very fast and accurate. However, our neural renderer is limited to the scene you see here. So the question is: is it possible to create something a little more general? Well, let's have a look at this new work, which performs something similar that the authors call differentiable rendering.
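The score-the-gallery, learn-the-preferences loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: Gaussian Material Synthesis learns preferences with Gaussian Process Regression over a learned latent space, while here a plain ridge regression over made-up raw material-parameter vectors stands in, and the user's "taste" is simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each material is a parameter vector
# (e.g. albedo, roughness, translucency, ...).
n_params = 8
gallery = rng.uniform(0.0, 1.0, size=(30, n_params))  # random gallery shown to the user

# Simulated user scores for the gallery (a made-up linear "taste" plus noise).
true_weights = rng.normal(size=n_params)
scores = gallery @ true_weights + 0.05 * rng.normal(size=30)

# Fit a ridge regressor to the scores (a stand-in for the Gaussian
# Process Regression used in the actual paper).
lam = 1e-2
w = np.linalg.solve(gallery.T @ gallery + lam * np.eye(n_params),
                    gallery.T @ scores)

# Score a fresh batch of candidate materials and recommend the top five.
candidates = rng.uniform(0.0, 1.0, size=(200, n_params))
predicted = candidates @ w
recommended = candidates[np.argsort(predicted)[::-1][:5]]
print(recommended.shape)  # five recommended material parameter vectors
```

The recommended materials would then be rendered for the user — in the paper, by the fast neural renderer rather than a full light simulation.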
The problem formulation is the following: we specify a target image that is either rendered by a computer program or, even better, a photo. The input is a pitiful approximation of it, and now, hold on to your papers, because the technique progressively changes the input materials, textures, and even the geometry to match this photo. My goodness, even the geometry! This thing is doing three people's jobs when given a target photo.

And you haven't seen the best part yet, because there is an animation that shows how the input evolves over time as we run this technique. As we start out, it almost immediately matches the material properties and the base shape, and after that, it refines the geometry to make sure that the more intricate details are also matched properly.

As always, some limitations apply: for instance, area light sources are fine, but point light sources are not supported, and the method may show problems in the presence of discontinuities and mirror-like materials. I cannot wait to see where this ends up a couple of papers down the line, and I really hope this thing takes off - in my opinion, this is one of the most refreshing and exciting ideas in photorealistic rendering research as of late. More differentiable rendering papers, please!

I would like to stress that there are also other works on differentiable rendering; this is not the first one. However, if you have a closer look at the paper in the description, you will see that it does this better than previous techniques. In this series, I try to make you feel how I feel when I read these papers, and I hope I have managed this time, but you be the judge - please let me know in the comments! And if this got you excited to learn more about light transport, I am holding a Master-level course on it at the Technical University of Vienna.
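The optimization loop described above can be sketched in a heavily simplified form. Assume a toy differentiable "renderer" that turns two parameters (the brightness and position of a Gaussian blob) into a tiny 1-D "image"; gradient descent on the pixel-wise loss then recovers the parameters behind a target image. Everything here (the blob model, the parameters, the learning rate) is invented for illustration; the actual paper's contribution — reparameterizing the discontinuous visibility integrands so that such gradients stay valid at silhouettes and under moving geometry — is far beyond this sketch.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 64)  # pixel coordinates of a tiny 1-D "image"
WIDTH = 0.1                      # blob variance, kept fixed for simplicity

def render(brightness, center):
    """Toy differentiable renderer: one Gaussian blob on a black background."""
    return brightness * np.exp(-((xs - center) ** 2) / WIDTH)

# The target "photo", rendered with parameters the optimizer never sees.
target = render(0.8, 0.3)

# Start from a poor approximation and descend the pixel-wise squared loss.
b, c = 0.2, -0.2
lr = 0.005
for _ in range(5000):
    blob = np.exp(-((xs - c) ** 2) / WIDTH)
    resid = b * blob - target                                 # per-pixel error
    grad_b = 2.0 * np.sum(resid * blob)                       # d(loss)/d(brightness)
    grad_c = 2.0 * np.sum(resid * b * blob * 2.0 * (xs - c) / WIDTH)  # d(loss)/d(center)
    b -= lr * grad_b
    c -= lr * grad_c

print(round(b, 2), round(c, 2))  # approaches the target parameters 0.8 and 0.3
```

Matching materials, textures, and full 3-D geometry works on the same principle, just with millions of parameters and a physically based renderer whose output is differentiated with respect to all of them.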
This course used to take place behind closed doors, but I feel that the teachings shouldn't only be available to the 20-30 people who can afford a university education; they should be available to everyone. So we recorded the entirety of the course, and it is now available to everyone, free of charge. If you are interested, have a look at the video description to watch the lectures.

This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. In this tutorial, they show you how to visualize your machine learning models and how to choose the best one with the help of their tools. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/14208*