# These Natural Images Fool Neural Networks (And Maybe You Too)

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=cpxtd-FKY1Y
- **Date:** 31.12.2019
- **Duration:** 4:55
- **Views:** 123,028

## Description

❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers

Their blog post on training a neural network is available here: https://www.wandb.com/articles/mnist 

📝 The paper "Natural Adversarial Examples" and its dataset are available here:
https://arxiv.org/abs/1907.07174
https://github.com/hendrycks/natural-adv-examples 

Andrej Karpathy's image classifier: https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html

You can also join us here to get early access to these videos: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
https://www.patreon.com/TwoMinutePapers 

Thumbnail background image credit: https://pixabay.com/images/id-4344997/
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=cpxtd-FKY1Y) Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In the last few years, neural network-based learning algorithms have become so good at image recognition tasks that they can often rival, and sometimes even outperform, humans. Beyond making these neural networks even more accurate, interestingly, there is plenty of research on how to attack and mislead them. I think this area of research is extremely exciting, and I'll now try to show you why.

One of the first examples of an adversarial attack can be performed as follows. We present a classifier with an image of a bus, and it will successfully tell us that yes, this is indeed a bus. Nothing too crazy here. Now we show it not an image of a bus, but a bus plus some carefully crafted, barely perceptible noise that forces the neural network to misclassify it as an ostrich. I will stress that this is not just any kind of noise, but noise that exploits biases in the neural network, and it is by no means easy or trivial to craft. However, if we succeed, this kind of adversarial attack can be pulled off on many different kinds of images: everything you see here on the right will be classified as an ostrich by the neural network. Note that these noise patterns were crafted specifically for this neural network.

In a later work, researchers on the Google Brain team found that we can not only coerce the neural network into making some mistake, but even force it to make exactly the kind of mistake we want. This example reprograms an image classifier to count the number of squares in our images.

Interestingly, however, some adversarial attacks do not need carefully crafted noise, or any tricks for that matter. Did you know that many of them occur naturally? This new work contains a brutally hard dataset of such images that throw off even the best neural image recognition systems. Let's have a look at an example. If I were the neural network, I would look at this squirrel and claim, with high confidence, that this is a sea lion. And you, human, may think this is a dragonfly, but you would be wrong: I am pretty sure this is a manhole cover. Well, except that it's not. The paper shows many of these examples, some of which don't really register that way in my brain. For instance, I don't see this mushroom as a pretzel at all, but there was something about that dragonfly that, upon a cursory look, may get registered as a manhole cover. If you look quickly, you see a squirrel here. Just kidding, it's a bullfrog. I feel that if I look at some of these with a fresh eye, I sometimes get a similar impression as the neural network. I put up a bunch more examples for you here; let me know in the comments which ones got you. A very cool project, I love it.

What's even better, this dataset, by the name ImageNet-A, is now available to everyone, free of charge. And if you remember, at the start of the video I said that it is brutally hard for neural networks to identify what is going on in these images. So what kind of success rate can we expect? 70%? Maybe 50%? Nope: 2%. Wow. In a world where some of these learning-based image classifiers are better than us on some datasets, they are vastly outclassed by us humans on these natural adversarial examples. If you have a look at the paper, you will see that the currently known techniques for improving the robustness of training show little to no improvement here. I cannot wait to see some follow-up papers on how to crack this nut.
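To make that 2% figure concrete, here is a rough sketch, not the paper's own evaluation script, of how one might score a pretrained classifier on ImageNet-A with PyTorch. The folder path, the choice of ResNet-50, and the `indices_in_1k` placeholder (which must be copied from the evaluation code in the repository linked above) are all assumptions for illustration:

```python
# Rough sketch (not the official evaluation script): score a pretrained
# ImageNet classifier on ImageNet-A, extracted into ./imagenet-a with one
# subfolder per class, as shipped by github.com/hendrycks/natural-adv-examples.
import torch
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("imagenet-a", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=64)

model = models.resnet50(pretrained=True).eval()

# The 200 ImageNet-A classes are a subset of the 1000 ImageNet classes.
# This list gives their positions among the 1000, in the same sorted-wnid
# order that ImageFolder assigns labels; copy it from the repository.
indices_in_1k = [...]  # placeholder -- fill in from the repository's eval code

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        logits = model(images)[:, indices_in_1k]  # keep only the 200 classes
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.numel()

print(f"top-1 accuracy on ImageNet-A: {100.0 * correct / total:.1f}%")
```

For standard ImageNet-trained models, the paper reports accuracies around the 2% mark on this dataset, which is what a run of this kind is meant to reproduce.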
We can learn so much from this paper, and we will likely learn even more from the follow-up works. Make sure to subscribe, and also hit the bell icon to never miss future episodes. What a time to be alive!

This episode has been supported by Weights & Biases, which provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money, and it is being used by OpenAI, Toyota Research, Stanford, and Berkeley. In this post, they show you how to train a state-of-the-art machine learning model with over 99% accuracy on classifying squiggly handwritten numbers, and how to use their tools to get a crystal-clear understanding of what exactly your model does and what parts of the digits it is looking at. Make sure to visit them at wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
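For reference, here is a minimal, hypothetical sketch of that kind of tracked MNIST training run, assuming the `wandb`, `torch`, and `torchvision` packages and a Weights & Biases account; the model, project name, and hyperparameters are illustrative and are not taken from their post:

```python
# Minimal sketch: train a small MNIST classifier and track it with
# Weights & Biases. (Illustrative only; the wandb.com/articles/mnist post
# uses its own model and configuration.)
import torch
import torch.nn as nn
import wandb
from torchvision import datasets, transforms

wandb.init(project="mnist-demo",  # hypothetical project name
           config={"lr": 1e-3, "epochs": 3, "batch_size": 128})
cfg = wandb.config

train_data = datasets.MNIST(root="data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_data, batch_size=cfg.batch_size,
                                     shuffle=True)

# Tiny convnet; roughly the scale at which MNIST reaches ~99% with some tuning.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 7 * 7, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=cfg.lr)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(cfg.epochs):
    for images, labels in loader:
        opt.zero_grad()
        logits = model(images)
        loss = loss_fn(logits, labels)
        loss.backward()
        opt.step()
        acc = (logits.argmax(dim=1) == labels).float().mean()
        wandb.log({"loss": loss.item(), "accuracy": acc.item()})  # live dashboard

wandb.finish()
```

The `wandb.init`/`wandb.log` calls stream the metrics to a live dashboard, which is the experiment-tracking workflow the sponsor segment refers to.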

---
*Source: https://ekstraktznaniy.ru/video/14203*