# Evolving Generative Adversarial Networks | Two Minute Papers #242

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=ni6P5KU3SDU
- **Date:** 12.04.2018
- **Duration:** 3:31
- **Views:** 36,967
- **Source:** https://ekstraktznaniy.ru/video/14485

## Description

The paper "Evolutionary Generative Adversarial Networks" is available here:
https://arxiv.org/abs/1803.00657

Our Patreon page: https://www.patreon.com/TwoMinutePapers

Recommended for you:
Video game to reality conversion: https://www.youtube.com/watch?v=dqxqbvyOnMY

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.
https://www.patreon.com/TwoMinutePapers

One-time payment links are available below. Thank you very much for your generous support!
PayPal: https://www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGc

## Transcript

### Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. With the recent ascendancy of neural network-based techniques, we have witnessed amazing algorithms that can take an image from a video game and translate it into reality, and the other way around. They can also translate daytime images into their nighttime versions, or change summer to winter and back. Some AI-based algorithms can even create near-photorealistic images from our sketches. So the first question is: how is this wizardry even possible?

These techniques are built on Generative Adversarial Networks, or GANs for short. This is an architecture in which two neural networks battle each other: the generator network is the artist, who tries to create convincing, real-looking images, and the discriminator network is the critic, who tries to tell a fake image from a real one. The artist learns from the critic's feedback and improves itself to produce better-quality images, while the critic develops a sharper eye for fake images. These two adversaries push each other until they both become adept at their tasks.

However, training these GANs is fraught with difficulties. For instance, the process is not guaranteed to converge, so it matters a great deal when we stop training the networks. This makes some works very challenging to reproduce and is generally an undesirable property of GANs. It is also possible that the generator starts focusing on a select set of outputs and refuses to generate anything else, a phenomenon we refer to as mode collapse.

So how could we possibly defeat these issues? This work presents a technique that mimics the steps of evolution in nature: evaluation, selection, and variation. This means that not one but many generator networks are trained, and only the ones that provide sufficient quality and diversity in their images are preserved.
We start with an initial population of generator networks and evaluate the fitness of each of them. The better and more diverse the images they produce, the more fit they are, and the more likely they are to survive the selection step, where we eliminate the most unfit candidates. So we now see how a subset of these networks becomes the victim of evolution; this is how networks get eaten, if you will.

But how do we produce new ones? This is where we arrive at the variation step, where new generator networks are created by introducing variations to the networks that are still alive in this environment. This simulates the creation of offspring and provides the candidates for the next selection step, and we hope that if we play this game for a long time, we get more and more resilient offspring.

The resulting algorithm can be trained in a more stable way, and it can create new bedroom images after being shown a database of bedrooms. When compared to the state of the art, we see that this evolutionary approach offers higher-quality images and more diversity in the outputs. It can also generate new human faces that are quite decent. They are clearly not perfect, but a technique that can pull this off consistently will be an excellent baseline for newer and better research works in the near future. We are also getting very close to an era where we can generate thousands of convincing digital characters from scratch, to name just one application. What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!
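The evaluation, selection, and variation cycle described above can be sketched as a simple evolutionary loop. This is a deliberately toy version under made-up assumptions: each "generator network" is collapsed to a single parameter (a shift), fitness is just closeness to a target value rather than the paper's quality-and-diversity score, and the population sizes and mutation scale are arbitrary choices for illustration.

```python
import random

random.seed(1)

TARGET = 3.0       # toy stand-in for "matching the real data"
POP_SIZE = 8       # number of generator candidates alive at once
SURVIVORS = 4      # how many survive each selection step
GENERATIONS = 30

def fitness(shift):
    """Evaluation: higher when the candidate is closer to the target.
    (The paper scores image quality and diversity instead.)"""
    return -abs(shift - TARGET)

def mutate(shift):
    """Variation: create an offspring by perturbing a surviving parent."""
    return shift + random.gauss(0.0, 0.3)

# Initial population of generator "networks", each reduced to one parameter.
population = [random.uniform(-5.0, 5.0) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Evaluation: score and rank every candidate.
    ranked = sorted(population, key=fitness, reverse=True)
    # Selection: the most unfit candidates are eliminated.
    survivors = ranked[:SURVIVORS]
    # Variation: refill the population with mutated offspring of survivors.
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - SURVIVORS)]
    population = survivors + offspring

best = max(population, key=fitness)
print(best)  # drifts toward TARGET over the generations
```

Because the fittest candidates are carried over unchanged each generation, the best solution can only improve or stay put, which mirrors why the evolutionary variant trains more stably than a single generator whose quality can oscillate.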
