# Growing Neural Cellular Automata

## Metadata

- **Channel:** Yannic Kilcher
- **YouTube:** https://www.youtube.com/watch?v=9Kec_7WFyp0
- **Date:** 12.02.2020
- **Duration:** 15:48
- **Views:** 23,699
- **Source:** https://ekstraktznaniy.ru/video/13869

## Description

The Game of Life on steroids! This model learns to grow complex patterns in an entirely local way. Each cell is trained to listen to its neighbors and update itself in a way such that, collectively, an overall goal is reached. Fascinating and interactive!

https://distill.pub/2020/growing-ca/
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life

Abstract:
Most multicellular organisms begin their life as a single egg cell - a single cell whose progeny reliably self-assemble into highly complex anatomies with many organs and tissues in precisely the same arrangement each time. The ability to build their own bodies is probably the most fundamental skill every living creature possesses. Morphogenesis (the process of an organism’s shape development) is one of the most striking examples of a phenomenon called self-organisation. Cells, the tiny building blocks of bodies, communicate with their neighbors to decide the shape of organs and body plans, where to grow each organ, how to interconne

## Transcript

### Introduction []

Hi there. Today we're looking at "Growing Neural Cellular Automata", which is an article on distill.pub that I found pretty neat. This is an interactive article; if you don't know distill.pub, check it out. It's a cool new concept as an alternative to classical journals or the conference system: it lets you write articles that are more interactive and engaging. There are no PDFs, no pages; there are animations and so on. So we'll be looking at this article today, which is about growing neural cellular automata.

If you don't know what cellular automata are, it's a very old concept. The most famous one is called the Game of Life, where you have these cells — every pixel is a cell — and they follow some update rule. Usually the update rule is something like: if enough of my neighbors are alive, I'm going to be alive in the next time step as well, and if too few neighbors are alive, I'm going to die. This gives rise to these kinds of patterns. Here the same is done with color, and the update rules are a bit more complicated. Oh nice — in the Game of Life, the most prestigious things to get are these kinds of travelers (gliders); this is the first time I've managed to produce one in this demo.

So what does it do? Each pixel here is an autonomous thing that is only allowed to look at its neighbors in order to decide whether or not it is going to be alive in the next time step. So each cell looks at its neighbors and then decides what its next state will be. And here it's not only alive or dead — dead would be white and alive would be anything else — each cell also decides on what color it should have. And this is a live thing, so it kind of reproduces: you can see if I restart it (if you double click here), it grows again from somewhere else, and this is completely local. These cells really only look at their neighbors; that's the special part. They don't look at the global structure — it's not like a GAN that can look at the entire picture and decide what's still missing. What this can also do: if you destroy part of it, it can kind of grow back, again just out of local update rules at the level of the individual cells and their neighbors. Out of that, they build these big structures.

So let's look at how they do it. Here's how they model a cell. Each cell, as I said, is made up of 16 channels — here it's drawn as 3 by 3, but I think each cell is really one pixel — and each cell is allowed to look at its 8 neighbors across the 16 different channels. The 16 channels are: the first three are RGB, the actual color that is displayed. Then there is an alive-or-dead channel, what they call an alpha channel: if this channel is high, the cell is considered alive; otherwise it is considered dead and not part of the pattern. So a cell can come alive or die depending on its neighbors. The remaining 12 channels are what they call hidden channels; the cell is allowed to encode some hidden state there. So each cell is represented by a 16-dimensional vector, which is not much.

Each cell is then allowed to look at a few things. From the bottom here: it's allowed to look at its own state, its own 16-dimensional vector, and it is allowed to look at its neighbors. It does this via a convolution with a Sobel filter. The Sobel filter is simply a fixed filter that you do a 3x3 convolution with; as you can see here, it's basically a gradient filter. It measures the difference between what's to the left of the cell and what's to the right, in the x-direction, and the same in the y-direction. So each cell is basically allowed to look at gradients in the states of its neighbors. This is modeled after real cells looking at chemical gradients in their neighborhood. And that is all the cell gets to decide what it's supposed to do next. What we want is that each individual cell, only looking at its neighbors, will — collectively — produce this very complex pattern.

So the update rule is the following: you convolve with the Sobel filters, you take the cell's own state, you put all of this into a vector, and you put it through a very small neural network — one dense layer, one ReLU, then another dense layer — to get the next 16-dimensional vector. That defines your update rule. Actually, it doesn't define the next state directly; it defines the delta to the next state, kind of like a residual neural network: which cells need to come alive in the next time step, which cells need to die, and how they are to change their colors. Adding that delta gives you the state of the next step.
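The perception-and-update step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the article's implementation: the weights here are random or zero-initialized stand-ins for trained parameters, and the padding/layer sizes are assumptions for the sketch.

```python
import numpy as np

CHANNELS = 16  # 3 RGB + 1 alpha + 12 hidden, as in the article

# Fixed 3x3 perception filters: identity plus Sobel gradients in x and y
IDENTITY = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=np.float32)
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32) / 8.0
SOBEL_Y = SOBEL_X.T

def conv2d_same(grid, kernel):
    """Depthwise 3x3 convolution with zero padding, applied per channel."""
    h, w, _ = grid.shape
    padded = np.pad(grid, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(grid)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w, :]
    return out

def perceive(grid):
    """Each cell sees its own state plus Sobel gradients of its
    neighborhood -> a 48-dimensional perception vector per cell."""
    return np.concatenate(
        [conv2d_same(grid, k) for k in (IDENTITY, SOBEL_X, SOBEL_Y)], axis=-1)

def update(grid, w1, b1, w2):
    """One step of the learned rule: dense -> ReLU -> dense, producing a
    *residual* delta that is added to the current state."""
    p = perceive(grid)                     # (H, W, 48)
    hidden = np.maximum(p @ w1 + b1, 0.0)  # ReLU
    delta = hidden @ w2                    # (H, W, 16)
    return grid + delta
```

With the last layer zero-initialized (as the article recommends), the very first update is an identity step, which stabilizes early training.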

### Update Rule [6:27]

Right, so that's basically the entire thing. All that is learned here is the update rule, i.e. the neural network: it looks at a cell and its neighbors and decides what the information in the cell should be in the next step, and you do this for multiple time steps. The initial state is simply one cell that is alive here in the middle; everything else is dead. This one cell is alive and black. You run this for many steps, and at some point you get an output. You compare the output to your desired output and compute a loss. Because your update rule is differentiable and your loss is differentiable, you can backpropagate through time to the original pattern, and you can basically learn this update rule by backpropagating through time. This is a bit like an LSTM, and if you look at the architecture here, I think the residual connection is really the key to making this work over time, because usually I would not expect something like this to easily emerge: you have the problem of vanishing and exploding gradients, and no way of mitigating it in this simple neural network. In any case, they backprop through time here. Each of these update steps — and again, this isn't one neural network with many layers, this is the same neural network applied over and over and over again — and then a loss is computed. The gradients accumulate over these steps and basically tell the network what it needs to adjust to go from this one single black pixel to the final desired state. If you do this over and over again, you learn an update rule that will, hopefully, give rise to that pattern.

Here is an illustration of the alive-and-dead mechanic. Cells whose alpha channel — one of the 16 channels, channel four — is above 0.1 are considered mature, i.e. alive, and part of the loss. The neighbors of these cells that are themselves below 0.1, but that neighbor a mature cell, are called growing; they're also part of the loss. So simply by being next to a cell that is alive, you are considered alive as well — but your own neighbors aren't; only the direct neighbors of the truly alive cells count. So there's really alive (mature), kind of alive (growing), and then there is dead. The meaning of dead here — the gray ones — is that they won't become part of the pattern or part of the loss. Alright, so what does this give us?
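The alive/growing/dead distinction above comes down to a max-pool over the alpha channel. A small NumPy sketch of that masking rule — the channel index and zero-padding at the border are assumptions consistent with the description above, not copied from the article's code:

```python
import numpy as np

ALPHA_THRESHOLD = 0.1  # alpha cutoff described in the article

def max_pool_3x3(alpha):
    """Max over each cell's 3x3 neighborhood (zero padding at the border)."""
    h, w = alpha.shape
    padded = np.pad(alpha, 1)
    out = np.zeros_like(alpha)
    for dy in range(3):
        for dx in range(3):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def alive_mask(grid, alpha_channel=3):
    """A cell participates in the update (mature or growing) iff some cell
    in its 3x3 neighborhood, itself included, has alpha > 0.1."""
    alpha = grid[..., alpha_channel]
    return max_pool_3x3(alpha) > ALPHA_THRESHOLD

def apply_mask(grid, alpha_channel=3):
    """Zero out dead cells so they carry no state into the next step."""
    mask = alive_mask(grid, alpha_channel)
    return grid * mask[..., None]
```

Applying this mask after every update is what lets dead cells stay truly inert: they can only be revived by a mature neighbor, never on their own.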

### Animation [9:45]

Initially, here is an animation: if they train this just like that — backprop through time towards a target pattern — and then let it run, you see these patterns actually emerge, which is pretty cool. But if you let them run for longer than they've been trained, you basically have no guarantees on what's going to happen. These update rules were simply trained to achieve the pattern within a certain number of steps; if you apply them for longer than that, they will simply continue, as you can see here, and produce some weird stuff.

So they try to fix this. What they do is basically train for longer, but in a different way. At each step of training — and by a step I mean a batch optimized over this number of time steps — they sample a batch. Initially it's all just black seed pixels, as we saw, and they optimize over the number of time steps. But they don't always start from the black seed pixel: sometimes they also start from a previously seen state. Basically, they take the end state of a previous training run and continue from that instead of starting from the initial point. And you see, after some training they get better and better: initially the thing on the left here is a starting state, and it progressively gets better. By starting from the end states of other runs, you learn — if that end state isn't very good — to go from it towards the pattern you want. But of course, over time, more and more of these end states you train from are already pretty close to the target, and then what that means is you learn to reproduce the pattern: if you are already at a good point, you learn to stay at that good point. That enables you to learn update rules that, if you're not at the target pattern, move towards it, but that also, if you run for longer and are already at the pattern, stay there. That's what we saw in the very initial demonstration: this thing up here is live, it's running, the update rules are continuously applied, and it basically stays at the pattern it's at. And that is learned precisely because of this protocol of training from end states as well as from beginning states.

The next thing: what I'm doing here is destroying part of the pattern, and it will kind of regrow. So far we've only learned to go from a single black pixel to the pattern, but now we also want to learn to regrow when destroyed, because this is modeled after living tissue. Here you can see parts are cut away and then these cells try to regrow. I think when you just train them as before, they exhibit some of that property, but not in a very satisfying way in some cases. So what they do is train not only from end states, as we saw before, but some of their training samples are simply the pattern with part of it destroyed. As you can see in some of these samples, like these here, they cut out part of the sample and train the update rules to regrow that part. That now gives you the ability, if you damage the pattern, to pretty consistently regrow it, as you can see here. They also train for rotation, which is non-trivial with these kinds of pixel-based models, but I'll skip over that because I want to keep it kind of short here.

The entire goal of this is to model the behavior of natural cells, because natural cells don't have an overarching view; they only have the view of their neighbors, and yet they are able to grow into very complex structures. I invite you to give this a try. The distill.pub journal is very cool, it's very interactive, you can play around with it, and you can reproduce things in a Colab. Shout out to the authors here: Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, and Michael Levin. Yeah, that was it for me. Thanks for watching, and bye.
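As a closing sketch, the pool-based sampling schedule described above (start most runs from previous end states, re-seed the worst sample, damage a few others) might look roughly like this in NumPy. The function names, the stand-in loss, and the square-patch damage are assumptions for illustration, not the article's exact code:

```python
import numpy as np

def make_seed(size=16, channels=16):
    """All-dead grid with a single live seed cell in the center."""
    grid = np.zeros((size, size, channels), dtype=np.float32)
    grid[size // 2, size // 2, 3:] = 1.0  # alpha + hidden channels on
    return grid

def damage(grid, rng):
    """Cut out a random square patch, as in the regeneration experiments."""
    size = grid.shape[0]
    half = size // 4
    y, x = rng.integers(half, size - half, size=2)
    out = grid.copy()
    out[y - half:y + half, x - half:x + half] = 0.0
    return out

def sample_batch(pool, target_rgba, batch_size, rng, n_damage=2):
    """Draw previous end states from the pool; replace the worst one
    (highest loss vs. the target) with a fresh seed, damage the best few."""
    idx = rng.choice(len(pool), size=batch_size, replace=False)
    batch = [pool[i].copy() for i in idx]
    losses = [np.mean((g[..., :4] - target_rgba) ** 2) for g in batch]
    order = np.argsort(losses)
    batch[order[-1]] = make_seed(batch[0].shape[0], batch[0].shape[-1])
    for j in order[:n_damage]:  # damage the lowest-loss samples
        batch[j] = damage(batch[j], rng)
    return idx, batch
```

After each training step, the caller would write the batch's end states back into the pool at `idx`, which is what makes the model see (and learn to persist from) its own long-horizon states.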
