# Gaussian Material Synthesis (SIGGRAPH 2018)

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=6FzVhIV_t3s
- **Date:** 14.04.2018
- **Duration:** 5:32
- **Views:** 62,377
- **Source:** https://ekstraktznaniy.ru/video/14483

## Description

In this work, we teach an AI the concept of metallic, translucent materials, and more. The paper "Gaussian Material Synthesis" and its source code are available here:
https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 

Acknowledgments:
We would like to thank Robin Marin for the material test scene and Vlad Miller for his help with geometry modeling, Felícia Zsolnai-Fehér for improving the design of many figures, Hiroyuki Sakai, Christian Freude, Johannes Unterguggenberger, Pranav Shyam and Minh Dang for their useful comments, and Silvana Podaras for her help with a previous version of this work. We also thank NVIDIA for providing the GPU used to train our neural networks. This work was partially funded by Austrian Science Fund (FWF), project number P27974. Scene and geometry credits: Gold Bars – JohnsonMartin, Christmas Ornaments – oenvoyage, Banana – sgamusse, Bowl – metalix, Grapes – PickleJones, Glass Fruits – BobReed64, Ice cream – b2przemo, Vases – Technausea, B

## Transcript

### Segment 1 (00:00 - 05:00)

Creating high-quality photorealistic materials for light transport simulations typically involves direct hands-on interaction with a principled shader. This means that the user has to tweak a large number of material properties by hand and wait for a new image to be rendered after each interaction. This requires a fair bit of expertise, and the best setups are often obtained through a lengthy trial-and-error process. To enhance this workflow, we present a learning-based system for rapid mass-scale material synthesis.

First, the user is presented with a gallery of materials, and the assigned scores are shown in the upper left. Here we learn the concept of glassy and transparent materials from only a few tens of high-scoring samples. Our system is able to recommend many new materials from the learned distributions. The learning step typically takes a few seconds, while the recommendations take negligible time and can be done on a mass scale. These recommendations can then be used to populate a scene with materials.

Typically, each recommendation takes 40 to 60 seconds to render with global illumination, which is clearly unacceptable for real-world workflows, even for mid-sized galleries. In the next step, we propose a convolutional neural network that is able to predict images of these materials that are close to the ones generated via global illumination, taking less than three milliseconds per image.

Sometimes a recommended material is close to the one envisioned by the user but requires a bit of fine-tuning. To this end, we embed our high-dimensional shader descriptors into an intuitive 2D latent space where exploration and adjustments can take place without any domain expertise. However, this isn't very useful without additional information, because the user does not know which regions offer useful material models that are in line with their scores. One of our key observations is that this latent-space technique can be combined with Gaussian process regression to provide an intuitive color coding of the expected preferences, to help highlight the regions that may be of interest.

Furthermore, our convolutional neural network can also provide real-time predictions of these images. These predictions are close to indistinguishable from the real rendered images and are generated in real time. Beyond the preference map, this neural network also opens up the possibility of visualizing the expected similarity of these new materials to the one we seek to fine-tune. By combining the preference and similarity maps, we obtain a color coding that guides the user in this latent space towards materials that are both similar and have a high expected score.

To accentuate the utility of our real-time variant generation technique, we show a practical case where one of the grape materials is almost done but requires a slight reduction in vividness. This adjustment doesn't require any domain expertise or direct interaction with the material modeling system and can be done in real time.

In this example, we learned the concept of translucent materials from only a handful of high-scoring samples and generated a large number of recommendations from the learned distribution. These recommendations can then be used to populate a scene with relevant materials. Here we show the preference and similarity maps of the learned translucent material space and explore possible variants of an input material. These recommendations can be used for mass-scale material synthesis, and the amount of variation can be tweaked to suit the user's artistic vision. After assigning the appropriate materials, displacements and other advanced effects can easily be added. We have also experimented with an extended, more expressive version of our shader that also includes procedural textured albedos and displacements. The following scenes were populated using the material learning, recommendation, and latent-space embedding steps.

We have proposed a system for mass-scale material synthesis that is able to rapidly recommend a broad range of new material models after learning the user preferences from a modest number of samples. Beyond this pipeline, we also explored powerful
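The score-then-recommend step described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: it assumes a hypothetical two-parameter shader descriptor and a hand-rolled Gaussian process regressor with an RBF kernel, whereas the real system learns over a much higher-dimensional principled shader with its own kernel and settings.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.25):
    """Squared-exponential kernel between the row vectors of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

rng = np.random.default_rng(0)

# A few user-scored gallery samples. The descriptor here is a toy
# (roughness, transparency) pair; the "true" preference is a synthetic bump.
X_scored = rng.uniform(0.0, 1.0, size=(20, 2))
y_scores = 10.0 * np.exp(-8.0 * ((X_scored - 0.7) ** 2).sum(1))

# GP regression posterior mean: mu(x) = k(x, X) (K + sigma^2 I)^{-1} y.
K = rbf_kernel(X_scored, X_scored) + 1e-2 * np.eye(len(X_scored))
alpha = np.linalg.solve(K, y_scores)

def predicted_score(X_new):
    """Expected user preference at new shader descriptors."""
    return rbf_kernel(X_new, X_scored) @ alpha

# Recommendation step: draw many candidate materials cheaply and keep
# the ones whose expected score lands in the top percentile.
candidates = rng.uniform(0.0, 1.0, size=(5000, 2))
scores = predicted_score(candidates)
threshold = np.quantile(scores, 0.99)
recommended = candidates[scores >= threshold]
print(len(recommended), "materials recommended")
```

Learning is a single linear solve over the scored samples, and scoring new candidates is a matrix product, which is why the transcript can claim that recommendations "take negligible time and can be done on a mass scale."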

### Segment 2 (05:00 - 05:32)

combinations of the three learning algorithms used, thereby opening up the possibility of real-time photorealistic material visualization, exploration, and fine-tuning in a 2D latent space. We believe this feature set offers a useful solution for rapid mass-scale material synthesis for novice and expert users alike, and we hope to see more exploratory works combining the advantages of multiple state-of-the-art learning algorithms in the future.
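The combination of preference and similarity maps mentioned in the transcript can be illustrated on a toy 2D latent space. Both maps below are synthetic stand-ins: in the paper, the preference map comes from Gaussian process regression and the similarity map from neural-network-predicted images, and the exact combination rule is not specified here, so a simple product is assumed.

```python
import numpy as np

# Hypothetical 2D latent space sampled on a 64x64 grid over [0, 1]^2.
u, v = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))

# Preference map: expected user score at each latent coordinate
# (a synthetic bump centered at (0.6, 0.4) for illustration).
preference = np.exp(-10.0 * ((u - 0.6) ** 2 + (v - 0.4) ** 2))

# Similarity map: closeness to the material being fine-tuned,
# assumed to sit at latent coordinate (0.5, 0.5).
similarity = np.exp(-25.0 * ((u - 0.5) ** 2 + (v - 0.5) ** 2))

# Combined guidance: bright where a variant is both similar to the
# current material and expected to score highly.
guidance = preference * similarity

# The brightest point suggests where to nudge the material in latent space.
iy, ix = np.unravel_index(np.argmax(guidance), guidance.shape)
best = (u[iy, ix], v[iy, ix])
print("suggested latent coordinate:", best)
```

With a multiplicative combination, the guided optimum lands between the current material and the preference peak, which matches the transcript's description of steering the user towards variants that are "both similar and have a high expected score."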
