Why Should We Trust An AI? | Two Minute Papers #233

Two Minute Papers · 03.03.2018 · 18,470 views · 949 likes


Video description
The paper "Why Should I Trust You? - Explaining the Predictions of Any Classifier" and its implementation are available here:
http://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf
https://github.com/marcotcr/lime

Our Patreon page: https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.

One-time payment links are available below. Thank you very much for your generous support!
PayPal: https://www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)
Artist: http://audionautix.com/

Thumbnail background image credit: https://pixabay.com/photo-563428/
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

Table of contents (1 segment)

Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Through over 200 episodes of this series, we have talked about many learning-based algorithms that are able to solve problems that previously seemed completely impossible. They can look at an image and describe what it depicts in a sentence, or even turn video game graphics into reality and back, and amazing new results keep appearing every single week. However, an important thing that we need to solve is that if we deploy these neural networks in a production environment, we would want to know whether we are relying on a good or a bad AI's decision. The narrative is very simple: if we don't trust the classifier, we won't use it. And perhaps the best way of earning the trust of a human would be if the AI could explain how it came to a given decision. Strictly speaking, a neural network can explain itself to us, but it would show us hundreds of thousands of neuron activations that are completely unusable for any sort of intuitive reasoning. So the even harder part of the problem is making this explanation happen in a way that we can interpret. An earlier approach used decision trees that described what the learner looks at and how it uses this information to arrive at a conclusion. This new work is quite different. For instance, imagine that a neural network would look at all the information we know about a patient and tell us that this patient likely has the flu. In the meantime, it could tell us that the fact that the patient has a headache and sneezes a lot contributed to the conclusion that he has the flu, but the lack of fatigue is notable evidence against it. A doctor could take this information and, instead of blindly relying on the output, make a more informed decision. A fine example of a case where AI does not replace but augments human labor. An elegant tool for a more civilized age.

Here we see an example image where the classifier explains which region contributes to the decision that this image depicts a cat, and which region seems to be counter-evidence. We can use this not only for tabulated patient data and images, but for text as well. In this other example, we try to find out whether a piece of written text is about Christianity or atheism. Note that the decision itself is not as simple as looking for a few keywords; even a mid-tier classifier is much more sophisticated than that. But it can tell us about the main contributing factors. A big additional selling point is that this technique is model-agnostic, which means that it can be applied to any learning algorithm that is able to perform classification. It is also a possibility that an AI is only right by chance, and if this is the case, we should definitely know about it. Here, in this example, with the additional explanation it is rather easy to find that we have a bad model: it looks at the background of the image and thinks that it is the fur of a wolf. The tests indicate that humans make significantly better decisions when they lean on explanations that are extracted by this technique. The source code of this project is also available. Thanks for watching and for your generous support, and I'll see you next time!
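[Editor's note] The mechanism the episode describes — asking a black-box classifier about many perturbed versions of one input and fitting a locally weighted linear model whose coefficients become the per-feature contributions — can be sketched in a few lines. This is a minimal illustration of the LIME idea, not the authors' implementation (that lives at https://github.com/marcotcr/lime); the `predict_proba` black box and all names here are hypothetical.

```python
import numpy as np

def lime_text_explain(words, predict_proba, n_samples=2000, kernel_width=0.75, seed=0):
    """Toy LIME sketch for text: perturb the input by dropping words,
    query the black box, fit a locally weighted linear surrogate, and
    return each word's contribution to the predicted class probability."""
    rng = np.random.default_rng(seed)
    d = len(words)
    # Binary masks over the words: 1 = word kept. Keep the original text too.
    masks = rng.integers(0, 2, size=(n_samples, d))
    masks[0] = 1
    texts = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    # Black-box probability of the class of interest for each perturbed text.
    probs = predict_proba(texts)
    # Proximity kernel: samples that drop fewer words count for more.
    dist = 1.0 - masks.sum(axis=1) / d
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares: scale rows by sqrt(weight), then solve.
    sw = np.sqrt(weights)
    X = np.hstack([np.ones((n_samples, 1)), masks])
    coef, *_ = np.linalg.lstsq(X * sw[:, None], probs * sw, rcond=None)
    return dict(zip(words, coef[1:]))  # skip the intercept

# Hypothetical usage: contributions for the flu example from the episode,
# where a large positive value marks a word as evidence for the diagnosis.
# contrib = lime_text_explain(["patient", "headache", "sneezes", "often"], model.predict_proba)
```

A doctor-facing tool would then sort this dictionary and surface the strongest positive and negative contributions — exactly the "headache and sneezing for, lack of fatigue against" style of explanation the video describes.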
