# Neural Material Synthesis, This Time On Steroids

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=UkWnExEFADI
- **Date:** 03.10.2018
- **Duration:** 2:22
- **Views:** 23,376
- **Source:** https://ekstraktznaniy.ru/video/14408

## Description

The paper "Single-Image SVBRDF Capture with a Rendering-Aware Deep Network" is available here:
https://team.inria.fr/graphdeco/fr/projects/deep-materials/

Recommended for you - Neural Material Synthesis: https://www.youtube.com/watch?v=XpwW3glj2T8

Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga.

## Transcript

### Segment 1 (00:00 - 02:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. With this technique, we can take a photograph of a desired material and use a neural network to create a digital material model that matches it, which we can then use in computer games and animated movies. We can import real-world materials into our virtual worlds, if you will.

Typically, to do this, an earlier work required two photographs, one with flash and one without, to get enough information about the reflectance properties of the material. Then, a follow-up AI paper was able to do this from only one image. It doesn't even need to move the camera around the material to see how it handles reflections, but can learn all of these material properties from a single image. Isn't that miraculous? We talked about this work in more detail in Two Minute Papers episode 88, about two years ago; I put a link to it in the video description.

Let's look at some results with this new technique! Here you see the photos of the input materials, and on the right, the reconstructed material. Please note that this reconstruction means that the neural network predicts the physical properties of the material, which are then passed to a light simulation program. So on the left, you see reality, and on the right, the prediction plus simulation results under a moving point light. It works like magic. Love it. As you see in the comparisons here, it produces results that are closer to the ground truth than previous techniques.

So what is the difference? This method is designed in a way that enables us to create a larger training set for more accurate results. As you know, with learning algorithms, we are always looking for more and more training data. Also, it uses two neural networks instead of one, where one of them looks at local, nearby features in the input, and the other runs in parallel and ensures that the material that is created is also globally correct.
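The "prediction plus simulation" step mentioned above can be illustrated with a toy renderer: given predicted per-pixel material maps, a light simulator evaluates a reflectance model under a point light. The sketch below is a deliberately simplified stand-in, assuming generic map names (diffuse, normals, roughness) and a basic Lambertian plus Blinn-Phong shading model, not the paper's actual physically based SVBRDF model or its renderer.

```python
import numpy as np

def shade_point_light(diffuse, normals, roughness, light_pos, view_dir=(0.0, 0.0, 1.0)):
    """Shade a flat patch of per-pixel material maps (H x W) under a single
    point light: Lambertian diffuse plus a Blinn-Phong specular lobe.
    Simplified illustration only; the paper uses a full SVBRDF model."""
    h, w, _ = diffuse.shape
    # Pixel positions on a unit patch lying in the z = 0 plane.
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([xs / w - 0.5, ys / h - 0.5, np.zeros((h, w))], axis=-1)

    # Direction and inverse-square falloff from each pixel to the light.
    to_light = np.asarray(light_pos) - pos
    dist2 = np.sum(to_light**2, axis=-1, keepdims=True)
    l = to_light / np.sqrt(dist2)

    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    v = np.asarray(view_dir) / np.linalg.norm(view_dir)

    # Lambertian diffuse term.
    ndotl = np.clip(np.sum(n * l, axis=-1, keepdims=True), 0.0, None)
    diff = diffuse * ndotl / dist2

    # Blinn-Phong specular: rougher pixels get a wider, dimmer highlight.
    half = l + v
    half = half / np.linalg.norm(half, axis=-1, keepdims=True)
    ndoth = np.clip(np.sum(n * half, axis=-1, keepdims=True), 0.0, None)
    shininess = 2.0 / np.clip(roughness, 1e-3, 1.0) ** 2
    spec = ndoth**shininess * ndotl / dist2

    return np.clip(diff + spec, 0.0, 1.0)

# A flat gray patch: uniform albedo, straight-up normals, medium roughness.
H = W = 8
diffuse = np.full((H, W, 3), 0.5)
normals = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))
roughness = np.full((H, W, 1), 0.5)

# Rendering the same maps with a moving light_pos would reproduce the
# moving-point-light comparison shown in the video.
img = shade_point_light(diffuse, normals, roughness, light_pos=(0.0, 0.0, 1.0))
```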
Note that there are some highly scattering materials that this method doesn't support, for example fabrics or human skin. But since producing these materials in a digital world takes quite a bit of time and expertise, this will be a godsend for the video games and animated movies of the future. Thanks for watching and for your generous support, and I'll see you next time!
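The local/global two-network idea described in the transcript can be sketched in a few lines: per-pixel features from a local branch are fused with an image-wide summary, so that each pixel's material prediction can also depend on global statistics of the photo. Everything here is an illustrative assumption (shapes, names, mean pooling), not the paper's actual architecture.

```python
import numpy as np

def fuse_local_global(features):
    """Toy stand-in for a two-branch design: concatenate each pixel's local
    features with a globally pooled summary of the whole feature map, so
    downstream layers can keep the output globally consistent.
    Shapes and pooling choice are illustrative only."""
    h, w, c = features.shape
    global_vec = features.mean(axis=(0, 1))               # (C,) image-wide summary
    global_map = np.broadcast_to(global_vec, (h, w, c))   # replicate per pixel
    return np.concatenate([features, global_map], axis=-1)  # (H, W, 2C)

# Hypothetical local feature map, e.g. the output of a convolutional branch.
rng = np.random.default_rng(0)
x = rng.random((4, 4, 8))
fused = fuse_local_global(x)
```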
