# Pruning Makes Faster and Smaller Neural Networks | Two Minute Papers #229

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=3yOZxmlBG3Y
- **Date:** 19.02.2018
- **Duration:** 3:28
- **Views:** 28,289
- **Source:** https://ekstraktznaniy.ru/video/14511

## Description

The paper "Learning to Prune Filters in Convolutional Neural Networks" is available here:
https://arxiv.org/pdf/1801.07365.pdf

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.
https://www.patreon.com/TwoMinutePapers

One-time payment links are available below. Thank you very much for your generous support!
PayPal: https://www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)

## Transcript

### Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When we are talking about deep learning, we mean neural networks that have tens, sometimes hundreds of layers, and hundreds of neurons within these layers. This is an enormous number of parameters to train, and clearly there should be some redundancy, some duplication, in the information within. This paper tries to throw out many of these neurons from the network without affecting its accuracy too much. This process we shall call pruning, and it helps create neural networks that are faster and smaller.

The accuracy term I used typically means a score on a classification task; in other words, how good this learning algorithm is at telling what an image or video depicts. This particular technique is specialized for pruning convolutional neural networks, where the neurons are endowed with a small receptive field and are better suited for images. These neurons are also commonly referred to as filters.

So here we have to provide a good mathematical definition of a proper pruning. The authors propose a definition where we can specify a maximum accuracy drop that we deem acceptable, which will be denoted with the letter b in a moment, and the goal is to prune as many filters as we can without going over this specified accuracy loss budget. The pruning process is controlled by an accuracy term and an efficiency term, and the goal is to strike some sort of balance between the two.

To get a more visual understanding of what is happening here: the filters you see outlined with a red border are kept by the algorithm, and the rest are discarded. As you can see, the algorithm is not as trivial as many previous approaches that just prune away filters with weaker responses.

Here you see the table with the b numbers. Initial tests reveal that around a quarter of the filters can be pruned with an accuracy loss of 0.3%, and with a higher b, we can prune more than 75% of the filters with a loss of around 3%. This is incredible.

Image segmentation tasks are about finding the regions that different objects inhabit. Interestingly, when trying the pruning for this task, it not only introduces a minimal loss of accuracy; in some cases, the pruned version of the neural network performs even better. How cool is that?

And of course, the best part is that we can choose a tradeoff that is appropriate for our application. For instance, if we are looking for a light cleanup, we can use the first option at a minimal penalty, or if we wish to have a tiny neural network that can run on a mobile device, we can go for the more heavy-handed approach by sacrificing just a tiny bit more accuracy, and we have everything in between. There is plenty more validation for the method in the paper, so make sure to have a look. It is really great to see that new research works make neural networks not only more powerful over time, but that there are also efforts to make them smaller and more efficient at the same time. Great news indeed! Thanks for watching and for your generous support, and I'll see you next time.
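The budget-constrained pruning idea described above can be sketched in a few lines of code. This is a toy greedy illustration only, not the paper's actual learned "try-and-learn" pruning agent: the filter importance values and the accuracy model below are made-up stand-ins, and a real setup would re-evaluate the pruned network on a validation set.

```python
# Toy sketch of budget-constrained filter pruning (NOT the paper's
# learned method): greedily remove the filter whose removal hurts
# accuracy the least, stopping before the total drop exceeds budget b.

def prune_filters(filters, accuracy_fn, b):
    """filters: list of filter ids; accuracy_fn(kept) -> accuracy of the
    network using only `kept` filters; b: maximum acceptable accuracy drop."""
    kept = list(filters)
    baseline = accuracy_fn(kept)
    while len(kept) > 1:
        # Find the single filter whose removal costs the least accuracy.
        best, best_acc = None, -1.0
        for f in kept:
            trial = [g for g in kept if g != f]
            acc = accuracy_fn(trial)
            if acc > best_acc:
                best, best_acc = f, acc
        if baseline - best_acc > b:  # next removal would exceed the budget
            break
        kept.remove(best)
    return kept

# Hypothetical stand-in: accuracy is a base score plus the summed
# "importance" of the kept filters. Filters 2 and 4 carry nearly all
# of the accuracy; the other three are almost redundant.
importance = {0: 0.001, 1: 0.002, 2: 0.30, 3: 0.001, 4: 0.40}
acc = lambda kept: 0.29 + sum(importance[f] for f in kept)

kept = prune_filters(list(importance), acc, b=0.01)
# With this budget, the three near-redundant filters are pruned away.
```

Note the contrast the video draws: ranking purely by a fixed response magnitude is the "trivial" baseline, whereas the paper learns which filters to drop by directly checking the effect on the task, which is why its pruned networks can occasionally even improve.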
