# We Taught an AI To Synthesize Materials 🔮

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=cnquEovq1I4
- **Date:** 23.05.2018
- **Duration:** 4:52
- **Views:** 105,222

## Description

The paper "Gaussian Material Synthesis" and its source code are available here:
https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/

Our Patreon page: https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.
https://www.patreon.com/TwoMinutePapers

One-time payment links are available below. Thank you very much for your generous support!
PayPal: https://www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

Credits:
We would like to thank Robin Marin for the material test scene and
Vlad Miller for his help with geometry modeling. Scene and geometry credits: Gold Bars – JohnsonMartin, Christmas Ornaments – oenvoyage, Banana – sgamusse, Bowl – metalix, Grapes – PickleJones, Glass Fruits – BobReed64, Ice cream – b2przemo, Vases – Technausea, Break Time – Jay–Artist, Wrecking Ball – floydkids, Italian Still Life – aXel, Microplanet – marekv, Microplanet vegetation – macio.

Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

#neuralrendering

## Contents

### [0:00](https://www.youtube.com/watch?v=cnquEovq1I4) Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Due to popular request, here's a more intuitive explanation of our latest work. Believe it or not, when I started working on this, Two Minute Papers didn't even exist. In several research areas, there are cases where we can't talk about our work until it is published. I knew that the paper would not see the light of day for quite a while, if ever, so I started Two Minute Papers to keep my sanity and deliver a hopefully nice piece of work on a regular basis. In the end, this took more than 3000 work hours to complete, but it is finally here, and I am so happy to finally be able to present it to you. This work is at the intersection of computer graphics and AI, which, as you know, is among my favorites. So what do we see here? This beautiful scene contains more than 100 different materials, each of which has been learned and synthesized by an AI. None of these daisies and dandelions are alike; each of them has a different material model. The goal is to teach an AI the concept of material models such as metals, minerals, and translucent materials. Traditionally, when we are looking to create a new material model with a light simulation program, we have to fiddle with quite a few parameters, and whenever we change something, we have to wait 40 to 60 seconds until a noise-free result appears. In our solution, we don't need to play with these parameters. Instead, our goal is to grab a gallery of random materials, assign a score to each of them, saying "I liked this one, I didn't like that one," and get an AI to learn our preferences and recommend new materials for us. This is quite useful when we're looking to synthesize not just one but many materials. So this is learning algorithm number one, and it works really well for a variety of materials.
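The score-and-recommend loop above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: it assumes each material is a small vector of shader parameters, fits a simple Gaussian Process regressor (written out in plain numpy) to the user's scores, and recommends the unscored materials with the highest predicted score. The "preference" used for the demo scores is a made-up stand-in.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    # Posterior mean of a zero-mean GP regressor at the test points.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
X_scored = rng.uniform(size=(30, 5))   # 30 scored materials, 5 parameters each
y_scored = X_scored[:, 0]              # stand-in preference: user likes high param 0
X_pool = rng.uniform(size=(500, 5))    # unscored candidate materials
scores = gp_predict(X_scored, y_scored, X_pool)
top = X_pool[np.argsort(scores)[::-1][:10]]  # recommend the 10 best candidates
```

In the real system the scored gallery comes from the user clicking through rendered previews; here everything is synthetic so the sketch runs on its own.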
However, these recommendations still have to be rendered with a light simulation program, which takes several hours for a gallery like the one you see here. Here comes learning algorithm number two to the rescue: a neural network that replaces this light simulation program and creates photorealistic visualizations. It is so fast that it not only does this in real time, but is more than 10 times faster than real time. We call this a neural renderer. So we have plenty of material recommendations, all photorealistic, and we can visualize them in real time. However, there is always the possibility that we get a recommendation that is almost exactly what we had in mind, but needs a few adjustments. That's an issue, because to do that, we would have to go back to the parameter fiddling, which we really wanted to avoid in the first place. No worries, because the third learning algorithm comes to the rescue. What this can do is take our favorite material models from the gallery and map them onto a nice 2D plane where we can explore similar materials. If we combine this with the neural renderer, we can explore these photorealistic visualizations, and everything appears not in a few hours, but in real time. However, without a little further guidance, we get a bit lost, because we still don't know which regions in this 2D space are going to give us materials that are similar to the one we wish to fine-tune. We can further improve this by exploring different combinations of the three learning algorithms. In the end, we can assign colors to the background that describe either how much the AI expects us to like the output, or how similar the output will be. A nice use-case of this is where we have this glassy still life scene, but the color of the grapes is a bit too vivid for us. Now, we can go to this 2D latent space and adjust it to our liking in real time. Much better. No material modeling expertise is required.
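The 2D-plane idea can be illustrated with a toy embedding. The paper's latent space is learned (not a simple linear projection), so treat this as a hedged sketch only: it uses PCA via numpy's SVD as a stand-in embedding, maps a gallery of hypothetical 5-parameter materials onto a 2D plane, and shows how nudging a point in that plane maps back to a full set of shader parameters for fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(1)
materials = rng.uniform(size=(100, 5))   # hypothetical gallery, 5 shader parameters each

# Embed the gallery into 2D with PCA (a linear stand-in for the learned latent space).
mean = materials.mean(0)
centered = materials - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
plane = centered @ Vt[:2].T              # 2D coordinates for every material

def to_params(xy):
    # Map a 2D point back to the full shader parameter vector.
    return mean + xy @ Vt[:2]

# Fine-tune: nudge a favourite material slightly in the 2D plane,
# then recover the adjusted shader parameters.
favourite = plane[0]
tweaked = to_params(favourite + np.array([0.1, 0.0]))
```

In the actual system, each point in the plane would be fed to the neural renderer for an instant photorealistic preview, and the background coloring would come from the preference model's predictions over the plane.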
So I hope you have found this explanation intuitive. We tried really hard to create something that is both scientifically novel and also useful for the computer game and motion picture industries. We had to throw away hundreds of other ideas until this final system materialized. Make sure to have a look at the paper in the description, where every single element and learning algorithm is tested and evaluated one by one. If you are a journalist and you would like to write about this work, I would be most grateful, and I am also more than happy to answer questions in an interview format. Please reach out if you're interested. We also try to give back to the community, so for the fellow tinkerers out there, the entirety of the paper is under the permissive Creative Commons license, and the full source code and pre-trained neural networks are also available under the even more permissive MIT license. Everyone is welcome to reuse it or build something cool on top of it. Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/14466*