# AI-Based Large-Scale Texture Synthesis | Two Minute Papers #252

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=KL6U6iasUxs
- **Date:** 26.05.2018
- **Duration:** 3:23
- **Views:** 29,558

## Description

The paper "Non-stationary Texture Synthesis by Adversarial Expansion" and its source code are available here:
http://vcc.szu.edu.cn/research/2018/TexSyn
https://github.com/jessemelpolio/non-stationary_texture_syn

Errata: please note that the image at the start of the video is of a wrong paper. Apologies! 

Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers

One-time payment links and crypto addresses are available below. Thank you very much for your generous support!
PayPal: https://www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.
https://www.patreon.com/TwoMinutePapers

Thumbnail background image credit: https://pixabay.com/photo-3013486/
Texture tiling image credit: https://commons.wikimedia.org/wiki/File:In-game-view-doom.png
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=KL6U6iasUxs) Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When an artist is creating digital media, such as populating a virtual world for an animated movie or a video game, or even working in graphic design, they often require a large number of textures. Concrete walls, leaves, and fabrics are materials we know well from the real world, and sometimes obtaining textures is as simple as paying for a texture package and using it. But quite often we wish to fill an entire road with a concrete texture while only having a small patch at our disposal. In this case, the easiest and worst solution is to copy-paste this texture over and over, creating really unpleasant results that are repetitive and suffer from seams.

So what about an AI-based technique that looks at a small patch and automatically continues it in a way that looks natural and seamless? This is an area within computer graphics and AI that we call texture synthesis. Periodic texture synthesis is simple, but textures with structure are super difficult. The selling point of this particular work is that it is highly effective at taking into account the content and symmetries of the image. For instance, it knows that it has to account for the concentric nature of the wood rings when synthesizing this texture, and it can also adapt to the regularities of this water texture and create a beautiful, high-resolution result.

This is a neural-network-based technique, so first, the question is: what should the training data be? Let's take a database of high-resolution images, cut out a small part, pretend that we don't have access to the bigger image, and ask a neural network to try to expand this small cutout. This sounds a little silly, so what is this trickery good for?
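The crop-and-expand training setup described above can be sketched as follows. This is an illustrative sketch only, not the authors' exact pipeline: the patch size of 64 and the 2× expansion factor are assumptions made for the example.

```python
import numpy as np

def make_training_pair(image, k=2, patch=64, rng=None):
    """Cut a (k*patch)-sized target crop from `image`, then take its
    central `patch`-sized region as the small source crop.  A network
    would be trained to expand the source back into the target, which
    serves as the ground-truth reference."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    big = k * patch
    # Random top-left corner of the large target crop.
    y = int(rng.integers(0, h - big + 1))
    x = int(rng.integers(0, w - big + 1))
    target = image[y:y + big, x:x + big]
    # Central small crop: this is all the network gets to see.
    off = (big - patch) // 2
    source = target[off:off + patch, off:off + patch]
    return source, target

# Example: a dummy 256x256 RGB "texture photograph".
img = np.arange(256 * 256 * 3, dtype=np.float32).reshape(256, 256, 3)
src, tgt = make_training_pair(img, rng=np.random.default_rng(0))
```

Because the target is cut from a real photograph, every training example comes with its own reference, so no manual labeling is needed.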
Well, this is super useful because after the neural network has expanded the image, we have a reference result in our hands that we can compare against, and this way teach the network to do better. Note that this architecture is a generative adversarial network, where two neural networks battle each other. The generator network is the creator that expands the small texture snippets, and the discriminator network takes a look and tries to tell them apart from the real deal. Over time, the generator learns to be better at texture synthesis, and the discriminator becomes better at telling synthesized results from real ones. This rivalry leads to results of extremely high quality. And as you can see in this comparison, this new technique smokes the competition.

The paper contains a ton more results and comparisons, and one of the most exhaustive evaluation sections I have seen in texture synthesis so far. I highly recommend reading it.

If you would like to see more episodes like this, make sure to pick up one of the cool perks we offer through Patreon, such as deciding the order of future episodes, or getting your name in the video description of every episode as a key supporter. We also support cryptocurrencies like Bitcoin, Ethereum, and Litecoin. We had a few really generous pledges in the last few weeks. I am quite stunned, to be honest, and I regret that I cannot get in contact with these Fellow Scholars. If you can contact me, that would be great; if not, thank you so much everyone for your unwavering support. This is just incredible. Thanks for watching and for your generous support, and I'll see you next time!
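The generator-versus-discriminator rivalry described above can be made concrete with the standard (non-saturating) GAN objective. This is a minimal illustrative sketch of the adversarial term only; the actual paper combines the adversarial loss with additional loss terms, which are omitted here.

```python
import numpy as np

def sigmoid(x):
    """Map raw discriminator scores (logits) to probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_real_logits, d_fake_logits, eps=1e-7):
    """Standard GAN objective on discriminator outputs.

    The discriminator is trained to push D(real) toward 1 and
    D(fake) toward 0; the generator is trained (non-saturating form)
    to push D(fake) toward 1.  `eps` guards the log against 0."""
    d_real = sigmoid(np.asarray(d_real_logits, dtype=float))
    d_fake = sigmoid(np.asarray(d_fake_logits, dtype=float))
    d_loss = (-np.mean(np.log(d_real + eps))
              - np.mean(np.log(1.0 - d_fake + eps)))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# A confident, correct discriminator: real scores high, fake scores low.
# Its own loss is near zero, while the generator's loss is large,
# which is exactly the pressure that drives the generator to improve.
d_loss, g_loss = gan_losses([10.0, 9.0], [-10.0, -9.0])
```

As training progresses the two losses pull against each other: whenever the generator's expansions fool the discriminator, `g_loss` drops and `d_loss` rises, and vice versa.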

---
*Source: https://ekstraktznaniy.ru/video/14464*