# Backpropagation For Neural Networks Explained | Deep Learning Tutorial

## Metadata

- **Channel:** AssemblyAI
- **YouTube:** https://www.youtube.com/watch?v=LHLrXavT1gQ
- **Date:** 29.12.2021
- **Duration:** 7:56
- **Views:** 25,016
- **Source:** https://ekstraktznaniy.ru/video/13278

## Description

In this Deep Learning tutorial, we learn about the Backpropagation algorithm for neural networks.

Get your Free Token for AssemblyAI Speech-To-Text API 👇
https://www.assemblyai.com/?utm_source=youtube&utm_medium=referral&utm_campaign=yt_pat_5

Backpropagation is probably the most important concept in deep learning and is essential for the training process of a neural network. 
Today we have a look at what Backpropagation is and how it works. We then walk you through an example with concrete numbers to better understand the theory behind the algorithm.

Slides: https://github.com/AssemblyAI/youtube-tutorials

Timestamps:
00:00 Introduction
00:39 Definition
01:44 Computational Graph
02:24 Chain Rule
03:03 Backpropagation algorithm
04:18 Example calculation
07:29 Outro

## Transcript

### Introduction [0:00]

Hi everyone! In this video we learn about the backpropagation algorithm. Backpropagation is probably the most important concept in deep learning and is essential for the training process of a neural network. So today we have a look at what backpropagation is and how it works, and then I also walk you through a concrete example with some numbers, because I think this will help you to better understand the theory behind the algorithm. This video is part of the Deep Learning Explained series by AssemblyAI, which is a company that creates a state-of-the-art Speech-to-Text API. If you want to try AssemblyAI for free, you can grab your free API token using the link in the description. And now let's get started.

### Definition [0:39]

Backpropagation computes the gradient of a loss function with respect to the weights in a neural network. This gradient is then used to update the weights in the training step, for example with an optimization algorithm like gradient descent. A quick side note: I'm going to use the term gradient all the time in this video, and with gradient I also mean derivative. So here we have a neural network with an input layer, a hidden layer, and an output layer. At each neuron we have different weights: we multiply the weights with the input x and maybe add a bias. Now the way it works is that we first do a forward pass, where we apply all those neurons and then calculate the loss at the very end. Then we apply the backpropagation algorithm, which means we do a backward pass and can calculate the gradients with a special method. With this gradient we can update the weights, which means our neural network learns and gets better. We will have a closer look at the backward pass, but before we do this we have to understand two more concepts. The first concept is the computational graph.
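The loop described above (forward pass, backward pass, weight update) can be sketched in a few lines of Python. This is a minimal illustration for a single-weight "network"; the function names and the learning rate are my choices, not code from the video:

```python
# One training step: forward pass, gradient via backpropagation,
# then a gradient descent update: w_new = w - lr * gradient.

def forward(w, x):
    # toy network: a single weight, no bias
    return w * x

def loss_fn(y_hat, y):
    # squared error loss
    return (y_hat - y) ** 2

def gradient(w, x, y):
    # dL/dw for the squared error loss, derived via the chain rule:
    # dL/dw = 2 * (y_hat - y) * x
    return 2 * (forward(w, x) - y) * x

def train_step(w, x, y, lr=0.1):
    g = gradient(w, x, y)   # backpropagation computes this gradient
    return w - lr * g       # gradient descent update

w = train_step(1.0, x=1.0, y=2.0)
```

With x = 1, y = 2, and w initialized to 1 (the same numbers used later in the video), the gradient is -2, so one step with learning rate 0.1 moves the weight from 1.0 to 1.2.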

### Computational Graph [1:44]

When we create our network with all the neurons, each computation in it is represented by a node. For example, here we have a multiplication node that simply multiplies the two inputs x and w with each other. Of course we also have many more computations in this graph, and at the very end we calculate the loss. Like I said, we then want to calculate the gradient of the loss with respect to the weights. The concept of the computational graph is the first thing we should keep in mind; it is also what deep learning frameworks like PyTorch and TensorFlow use internally to track all the computations in the network. The second concept we should know is the chain rule.
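A multiplication node like the one on the slide can be sketched as a small class that remembers its inputs during the forward pass, since they are exactly what the local gradients need later. This is similar in spirit to what frameworks track internally, but the class and method names here are illustrative, not any framework's API:

```python
# A single node of a computational graph: multiply two inputs.

class MulNode:
    def forward(self, x, w):
        # cache the inputs: they are needed for the local gradients
        self.x, self.w = x, w
        return x * w

    def backward(self, grad_out):
        # local gradients: d(x*w)/dx = w and d(x*w)/dw = x,
        # each multiplied by the gradient flowing in from the next node
        return grad_out * self.w, grad_out * self.x

node = MulNode()
y = node.forward(2.0, 3.0)    # forward pass: 6.0
dx, dw = node.backward(1.0)   # backward pass: (3.0, 2.0)
```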

### Chain Rule [2:24]

The chain rule is a mathematical formula that is needed to calculate the gradients. Here we have a simple computational graph with an input a that gets transformed by the first node into the output b, and this in turn gets transformed by the second node into the output c. Now the chain rule says that the gradient of c with respect to a can be computed as the gradient of c with respect to b times the gradient of b with respect to a: dc/da = dc/db · db/da. We should remember this formula, and don't worry, it's not that difficult when we look at a concrete example in a moment.
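The rule dc/da = dc/db · db/da can be checked numerically. The two chained functions below (b = a² and c = sin(b)) are my own toy choices, not from the slides:

```python
import math

# Two chained functions: a -> b -> c
f = lambda a: a ** 2          # b = f(a)
g = lambda b: math.sin(b)     # c = g(b)

a = 1.5
b = f(a)

# Chain rule: dc/da = dc/db * db/da
dc_db = math.cos(b)           # derivative of sin(b) w.r.t. b
db_da = 2 * a                 # derivative of a^2 w.r.t. a
dc_da = dc_db * db_da

# Sanity check against a central finite difference
h = 1e-6
numeric = (g(f(a + h)) - g(f(a - h))) / (2 * h)
```

The analytic value `dc_da` and the finite-difference estimate `numeric` agree to many decimal places, which is exactly what the chain rule promises.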

### Backpropagation algorithm [3:03]

Going back to our computational graph, we can now calculate the gradient of the loss with respect to the weights: it is the gradient of the loss with respect to y times the gradient of y with respect to w. Both of those inner gradients are also called local gradients, and they can be calculated pretty easily. For example, if we have a look at this node here, we know it is a multiplication node, so we know the calculation: y is obtained by applying the function w times x, and the derivative of w times x with respect to w is simply x. We can do this for all the nodes in our network, which are just simple computational nodes, and then we can easily calculate the local gradients. We have to start at the very last node and then step by step go backwards to the first node. And this is the whole concept of the backpropagation algorithm: first we do a forward pass, doing all the calculations and computing the loss; then we compute all the local gradients; and then we do a backward pass and apply the chain rule. With this we calculate the gradient of the loss with respect to the weights, and then of course we can update the weights with this information. And that's it! So now let's take a look at a concrete example to better understand all the steps.
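The "start at the last node and go backwards" step can be sketched as a loop over the local gradients of a chain of nodes: start with a gradient of 1.0 at the loss and multiply by each node's local gradient in turn. The list of local gradients below uses the numbers from the upcoming example (x = 1, s = -1); the function name and ordering are mine:

```python
# Local gradients of the chain w -> y_hat -> s -> loss, ordered from
# the first node (w*x) to the last node (square), for x = 1, s = -1:
local_grads = [1.0, 1.0, -2.0]   # [d y_hat/dw = x, d s/d y_hat = 1, d loss/ds = 2s]

def backward(local_grads):
    # Walk the chain from the last node back to the first, multiplying
    # the running gradient by each local gradient (the chain rule).
    grad = 1.0
    grads = []
    for lg in reversed(local_grads):
        grad *= lg
        grads.append(grad)
    # gradients of the loss w.r.t. each node's input, first node first
    return list(reversed(grads))

grads = backward(local_grads)
dloss_dw = grads[0]
```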

### Example calculation [4:18]

In this example we look at a simple linear regression algorithm. We can also represent this with a neural network and a computational graph. First we have a multiplication node that multiplies the weight and the input, and we get an approximated y that we call y hat. Then we also have the actual y, so we use a second node, a subtraction node, and calculate y hat minus y. Then we calculate the loss function, which usually is the mean squared error; to keep it simpler, we only use the squared error here, so we have one more node with a square operation and then obtain the loss.

Now the task is to minimize the loss, for example with the gradient descent method. For this we have to calculate the gradient of the loss with respect to the weights, and we just learned we have to apply three steps. First we do the forward pass and calculate the loss. Then at each node we calculate the local gradients, starting at the end: d loss with respect to s, then d s with respect to y hat, and d y hat with respect to w. Then we do the backward pass and can calculate d loss with respect to y hat and finally d loss with respect to w. This is what we need, and we get it by applying the chain rule.

So let's use some actual numbers. We know the input x = 1 and the corresponding y = 2 from the training data, and we simply initialize the weight with w = 1. Then y hat is the multiplication 1 times 1, which is 1. Then we do the subtraction 1 minus 2, which is -1, and then we do the square operation, so the loss is (-1) squared, which is 1.

Now let's calculate the local gradients. For d loss with respect to s, we know the function: this is s squared, and the gradient of s squared with respect to s is 2s. Next we calculate the gradient of s with respect to y hat; again we use the actual calculation, so this is the gradient of y hat minus y with respect to y hat, which is simply 1. Then we calculate the gradient of y hat with respect to w: y hat can be written as w times x, and the gradient of this with respect to w is x.

So now let's do the backward pass. We calculate the gradient of the loss with respect to y hat by applying the chain rule: it is the product of the two local gradients we just computed, so this is 2 times s times 1, and we know that s is -1 from our forward pass, so this is -2. The very last step is to calculate the gradient of the loss with respect to w. Again we apply the chain rule: here we have the gradient from the previous step, d loss with respect to y hat, times the local gradient d y hat with respect to w. Then we insert the actual numbers, -2 times x, so this is -2. With this we reach the end and can update our weights.
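The whole worked example can be reproduced in a few lines; the variable names mirror the symbols on the slides:

```python
# Worked example from the video: x = 1, y = 2, w initialized to 1.

x, y, w = 1.0, 2.0, 1.0

# Forward pass
y_hat = w * x            # 1.0
s = y_hat - y            # -1.0
loss = s ** 2            # 1.0

# Local gradients
dloss_ds = 2 * s         # d(s^2)/ds = 2s = -2.0
ds_dyhat = 1.0           # d(y_hat - y)/d y_hat = 1
dyhat_dw = x             # d(w*x)/dw = x = 1.0

# Backward pass (chain rule)
dloss_dyhat = dloss_ds * ds_dyhat   # -2.0
dloss_dw = dloss_dyhat * dyhat_dw   # -2.0
```

A gradient descent step would now move w in the direction opposite to `dloss_dw`, i.e. increase it, which indeed brings y hat closer to the target y = 2.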

### Outro [7:29]

And that's basically it! If you couldn't follow every step right now, that's fine: you can get the slides using the link in the description and then go through it again at your own pace. I hope I could explain the concept of backpropagation in a fairly simple way. If you still have any questions, let me know in the comments. Also, if you enjoyed the video, please hit the like button and consider subscribing to our channel for more content like this. I hope to see you in the next video, bye!
