# Surprise Video With Our New Paper On Material Editing! 🔮

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=oYtwCZx5rsU
- **Date:** 17.06.2020
- **Duration:** 8:49
- **Views:** 72,567
- **Source:** https://ekstraktznaniy.ru/video/14113

## Description

📝 Our "Photorealistic Material Editing Through Direct Image Manipulation" paper and its source code are available here:
https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/

The previous paper with the microplanet scene is available here:
https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, To

## Transcript

### Introduction [0:00]

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Due to popular demand, this is a surprise video with the talk of our new paper that we just published. This was the third and last paper in my PhD thesis, and hence this is going to be a one-off video that is longer and a tiny bit more technical. I am keenly aware of it, but I hope you'll enjoy it; let me know in the comments when you have finished the video. And worry not: all the upcoming videos are going to be in the usual Two Minute Papers format. The paper and the source code are all available in the video description, and now, let's dive in.

In a previous paper, our goal was to populate this scene with over a hundred materials with a learning-based technique and create a beautiful planet with rich vegetation. The results looked like this. One of the key elements to accomplish this was to use a neural renderer, or in other words, the decoder network that you see here, which took a material shader description as an input and predicted its appearance, thereby replacing the renderer we used in the project. It had its own limitations: for instance, it was limited to this scene with a fixed lighting setup, and only the material properties were subject to change. But in return, it mimicked the global illumination renderer rapidly and faithfully.

In this new work, our goal was to take a different vantage point and help artists with general image processing knowledge to perform material synthesis. This sounds a little nebulous, so let me explain. One of the key ideas is to achieve this with a system that is meant to take images from its own renderer, like the ones you see here. Of course, we produced these ourselves, so obviously we know how to make them, and this is not very useful yet. However, the twist is that we only start out with an image of this source material, load it into a raster image editing program like Photoshop, edit it to our liking, and just pretend that the result is achievable with our renderer. Many of these target images in the middle are results of poorly executed edits: for instance, the stitched specular highlight in the first example isn't very well done, and neither is the background of the gold target image in the middle. However, in the next step, our method proceeds to find a photorealistic material description that, when rendered, resembles this target image, and it works well even in the presence of these poorly executed edits. The whole process executes in 20 seconds. To produce a mathematical formulation for this
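To make the role of the neural renderer concrete, here is a minimal sketch of the idea: a decoder network that maps a shader parameter vector to an image, standing in for the slow global illumination renderer. The architecture, sizes, and (random) weights below are illustrative assumptions, not the network from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

PARAM_DIM = 5      # e.g. albedo RGB, roughness, index of refraction
HIDDEN = 32        # hypothetical hidden layer width
IMG_SIDE = 8       # tiny 8x8 grayscale "render" for this sketch

# Untrained stand-in weights; a real neural renderer would be trained on
# (shader parameters, rendered image) pairs produced by the renderer.
W1 = rng.normal(scale=0.5, size=(PARAM_DIM, HIDDEN))
W2 = rng.normal(scale=0.5, size=(HIDDEN, IMG_SIDE * IMG_SIDE))

def neural_render(x: np.ndarray) -> np.ndarray:
    """Decoder: shader parameter vector -> predicted image in [0, 1]."""
    h = np.tanh(x @ W1)                     # hidden features
    img = 1.0 / (1.0 + np.exp(-(h @ W2)))   # sigmoid keeps pixels in [0, 1]
    return img.reshape(IMG_SIDE, IMG_SIDE)

params = np.array([0.8, 0.2, 0.1, 0.4, 1.5])  # one material description
image = neural_render(params)
print(image.shape)  # (8, 8)
```

The point of this structure is speed: one forward pass through a small decoder replaces a full light-transport simulation, which is what makes the optimization described next tractable.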

### Optimization approach [2:42]

problem, we started with this. We have an input image T and edit it to our liking to get the target image T̃ (T with a tilde). Now we are looking for a shader parameter set x that, when rendered with the Φ operator, approximates the edited image. The constraint below stipulates that we remain within the physical boundaries for each parameter: for instance, albedo between zero and one, proper indices of refraction, and so on. So how do we deal with Φ? We use the previously mentioned neural renderer to implement it; otherwise, this optimization process would take twenty-five hours. Later, we made an equivalent unconstrained reformulation of this problem to be able to accommodate a greater set of optimizers. This all sounds great on paper and works reasonably well for materials that can be exactly matched with this shader, like this one; the optimizer-based solution can achieve it reasonably well. Unfortunately, for more challenging cases, as you see with the target image on the lower right, the optimizer's output leaves much to be desired. Note again that the result on the upper right is achievable with the shader, while the lower right is a challenging imaginary material that we are trying to achieve. The fact that this is quite difficult is not a surprise, because we have a nonlinear, non-convex optimization problem that is also high-dimensional. This optimization solution is also quite slow, but it can start inching towards the target image. As an alternative solution, we also
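The problem stated above, finding x that minimizes ||Φ(x) − T̃||² subject to per-parameter box constraints, can be sketched with a toy linear stand-in for Φ and projected gradient descent. The paper uses a neural renderer and other optimizers; this only illustrates the shape of the constrained least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "renderer" Phi: a fixed linear map from 4 shader parameters to
# 16 "pixels". The real Phi is the neural renderer.
A = rng.normal(size=(16, 4))

def phi(x: np.ndarray) -> np.ndarray:
    return A @ x

lower = np.zeros(4)                   # e.g. albedo must stay in [0, 1]
upper = np.ones(4)

x_true = rng.uniform(0.2, 0.8, size=4)
target = phi(x_true)                  # T-tilde: here, an achievable target

x = np.full(4, 0.5)                   # start from the middle of the box
lr = 0.01
for _ in range(2000):
    grad = 2 * A.T @ (phi(x) - target)        # gradient of ||phi(x) - target||^2
    x = np.clip(x - lr * grad, lower, upper)  # project back into the bounds

print(np.max(np.abs(x - x_true)))     # should be very small
```

For the linear toy problem this converges quickly; the real objective is nonlinear and non-convex in x, which is exactly why the plain optimizer can stall on hard targets, as the talk discusses next.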

### Inversion networks [4:21]

developed something that we call an inversion network. This addresses the adjoint problem of neural rendering: in other words, we show it the edited input image, and out comes the shader that would produce this image. We have trained nine different neural network architectures for this problem, which sounds great, so how well did it work? Well, we found out that none of them are really satisfactory for more difficult edits, because all of the target images are far outside of the training domain; we just cannot prepare the networks to handle the rich variety of edits that come from the artist. However, some of them are, one could say, almost usable: for instance, numbers one and five are not complete failures, and note that these solutions are provided instantly. So we have two techniques, neither of which is perfect for our task: a fast and approximate solution with the inversion networks, and a slower optimizer that can slowly inch towards the target image. Our key insight here is that we can produce a hybrid method that fuses the two solutions together. The workflow goes as follows: we take an image of the initial source material and edit it to our liking to get the target image. Then we create a coarse prediction with a selection of inversion networks and initialize the optimizer with the prediction of one of these neural networks, preferably a good one, so the optimizer can start out from a reasonable initial guess. So how well does this hybrid method work? I'll show you in a moment. Here we start out with an achievable
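The hybrid workflow described above can be sketched end to end: a fast but coarse "inversion network" supplies the initial guess, and the slower optimizer refines it. Everything here is an illustrative stand-in; in particular, the "inversion network" below is just a deliberately imperfect linear inverse of the toy renderer, playing the role of a trained network.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(16, 4))                  # toy renderer: x -> A @ x

def render(x: np.ndarray) -> np.ndarray:
    return A @ x

# Stand-in "inversion network": an approximate inverse of the renderer.
# The added noise mimics a network that is fast but only coarsely accurate.
A_approx_inv = np.linalg.pinv(A) + rng.normal(scale=0.05, size=(4, 16))

x_true = rng.uniform(0.2, 0.8, size=4)
target = render(x_true) + rng.normal(scale=0.05, size=16)  # a "poor edit"

# Step 1: instant coarse prediction from the inversion network.
x0 = np.clip(A_approx_inv @ target, 0.0, 1.0)

# Step 2: the optimizer refines, starting from that reasonable guess.
x = x0.copy()
for _ in range(500):
    grad = 2 * A.T @ (render(x) - target)
    x = np.clip(x - 0.01 * grad, 0.0, 1.0)

err0 = np.linalg.norm(render(x0) - target)    # error of the coarse guess
err = np.linalg.norm(render(x) - target)      # error after refinement
print(err <= err0)
```

The design point is the division of labor: the network contributes speed and a good basin of attraction, and the optimizer contributes accuracy, which is why the fusion outperforms either piece alone in the comparisons that follow.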

### Comparisons: qualitative [5:57]

target image and then try two challenging image editing operations. This image can be reproduced perfectly as long as the inversion process works reliably; unfortunately, as you see here, this is not the case. In the first row, using the optimizer and the inversion networks separately, we get results that fail to capture the specular highlight properly. In the second row, we have deleted the specular highlight on the target image on the right and replaced it with a completely different one. I like to call this the Franken-BRDF, and it would be amazing if we could do this, but unfortunately, both the optimizer and the inversion networks flounder. Another thing that would be really nice to do is deleting the specular highlight and filling in the image via image inpainting. This kind of works with the optimizer, but you'll see in a moment that it's not nearly as good as it could be. And now, if you look carefully, you see that our hybrid method outperforms both of these techniques in each of the three cases. In the paper, we report results on a dozen more cases as well. There is a claim in the paper where we say that these results are close to the global optimum. You see the results of this hybrid method if you look at the intersection of Nelder-Mead and NN; they are highlighted with the red ellipses. The records in the table show the RMS errors and are subject to minimization. With this, you see that it goes neck and neck with the global optimizer, which is highlighted with green.

In summary, our technique runs in approximately 20 seconds; works for specular highlight editing, image blending, stitching, inpainting, and more; is robust, working even in the presence of poorly edited images; can be easily deployed in already-existing rendering systems; and allows for rapid material prototyping for artists working in the industry. It is also independent of the underlying principled shader, so you can also add your own and expect it to work well, as long as the new renderer works reliably.

A key limitation of the work is that it only takes images of this canonical scene with a carved-sphere material sample, but we conjecture that it can be extended to be more general, and we propose a way to do it in the paper; make sure to have a closer look if you are interested. The teaser image of this paper is showcased on the 2020 Computer Graphics Forum cover page. The whole thing is also quite simple to implement, and we provide the source code and pre-trained networks on our website, all under a permissive license. Thank you so much for watching this, and a big thanks to Peter Wonka and Michael Wimmer for advising this work.
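The tables mentioned above compare methods by RMS error between the rendered result and the target image, lower being better. For completeness, a minimal version of that metric:

```python
import numpy as np

def rmse(rendered: np.ndarray, target: np.ndarray) -> float:
    """Root-mean-square error over all pixels (lower is better)."""
    return float(np.sqrt(np.mean((rendered - target) ** 2)))

# Tiny illustrative check: two constant 4x4 "images" that differ by 0.5
# everywhere have an RMSE of exactly 0.5.
a = np.zeros((4, 4))
b = np.full((4, 4), 0.5)
print(rmse(a, b))  # 0.5
```

Note that RMSE is a per-pixel metric; the visual comparisons in the talk complement it, since two images with similar RMSE can still differ noticeably in where the error sits (for example, on a specular highlight).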
