# AI Discovers Sentiment By Writing Amazon Reviews

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=hYWr67i8z5o
- **Date:** 20.07.2019
- **Duration:** 4:06
- **Views:** 75,894

## Description

❤️ Support the show on Patreon: https://www.patreon.com/TwoMinutePapers

₿ Crypto and PayPal links are available below. Thank you very much for your generous support!
› PayPal: https://www.paypal.me/TwoMinutePapers
› Bitcoin: 1a5ttKiVQiDcr9j8JT2DoHGzLG7XTJccX
› Ethereum: 0xbBD767C0e14be1886c6610bf3F592A91D866d380
› LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

📝 The paper "Learning to Generate Reviews and Discovering Sentiment" is available here:
https://openai.com/blog/unsupervised-sentiment-neuron/
https://arxiv.org/abs/1704.01444

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga.
https://www.patreon.com/TwoMinutePapers

Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=hYWr67i8z5o) Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In 2017, scientists at OpenAI embarked on an AI project where they wanted to show a neural network a bunch of Amazon product reviews and teach it to generate new ones, or to continue a review when given one. Now, so far, this sounds like a nice hobby project, definitely not something that would justify an entire video on this channel; however, during this experiment, something really unexpected happened. It is clear that when the neural network reads these reviews, it knows that it has to generate new ones, and therefore it builds up a deep understanding of language. However, beyond that, it used surprisingly few neurons to continue these reviews, and the scientists wondered: why is that? Usually, the more neurons, the more powerful the AI can get, so why use so few? The reason is that it had learned something really interesting. I'll tell you what in a moment.

This neural network was trained in an unsupervised manner: it was told what the task was, but was given no further supervision. No labeled datasets, no additional help, nothing. Upon closer inspection, they noticed that the neural network had built up a knowledge of not only language, but had also built a sentiment detector. This means that the AI recognized that, in order to continue a review, it needs to be able to efficiently detect whether the review seems positive or not. And thus, it dedicated a neuron to this task, which we will refer to as the sentiment neuron. However, it was no ordinary sentiment neuron; it was a proper, state-of-the-art sentiment detector. In this diagram, you see this neuron at work. As it reads through the review, it starts out detecting a positive outlook, which you can see in green, and then, uh-oh, it detects that the review has taken a turn and is not happy with the movie at all. And all this was learned on a relatively small dataset.
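The idea of "watching" the sentiment neuron can be sketched in a few lines. This is only an illustration with random stand-in weights, not the paper's model (the paper trains a 4096-unit multiplicative LSTM on Amazon reviews byte by byte; the hidden size and unit index below are hypothetical): a character-level recurrent net consumes a review one byte at a time, and we read out one chosen hidden unit's activation after each step.

```python
# Sketch: tracing one hidden unit's activation as a char-level RNN reads
# a review. Weights are random stand-ins, so the trace carries no real
# sentiment signal -- it only shows the read-out mechanism.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 64          # hypothetical hidden size (the paper uses 4096)
SENTIMENT_UNIT = 17  # hypothetical index of the "sentiment neuron"

# Random stand-in weights for a vanilla tanh RNN over raw bytes.
W_xh = rng.standard_normal((256, HIDDEN)) * 0.1  # one row per byte value
W_hh = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1

def trace_sentiment(text):
    """Return the chosen unit's activation after each character."""
    h = np.zeros(HIDDEN)
    trace = []
    for byte in text.encode("utf-8"):
        h = np.tanh(W_xh[byte] + h @ W_hh)   # one recurrent step per byte
        trace.append(float(h[SENTIMENT_UNIT]))
    return trace

trace = trace_sentiment("Great pants, but the seams fell apart.")
```

In the real trained model, plotting this trace over a review produces exactly the green-to-red diagram shown in the video: the unit's value swings as the review's tone changes.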
Now, if we have this sentiment neuron, we don't just have to sit around and be happy for it. Let's play with it! For instance, by overwriting this sentiment neuron in the neural network, we can force it to create positive or negative reviews. Here is a positive example: “Just what I was looking for. Nice fitted pants, exactly matched seam to color contrast with other pants I own. Highly recommended and also very happy!” And, if we overwrite the sentiment neuron to negative, we get the following: “The package received was blank and has no barcode. A waste of time and money.” There are some more examples here on the screen for your pleasure. Absolutely amazing. This paper teaches us that we should endeavor not just to accept these AI-based solutions, but to look under the hood, where sometimes a goldmine of knowledge can be found.

If you have enjoyed this episode and would like to help us make better videos for you in the future, please consider supporting us on Patreon.com/TwoMinutePapers, or just click the link in the video description. In return, we can offer you early access to these episodes, or even add your name to our key supporters so you can appear in the description of every video, and more. We also support cryptocurrencies like Bitcoin, Ethereum and Litecoin. The majority of these funds is used to improve the show, and we use a smaller part to give back to the community and empower science conferences like the Central European Conference on Computer Graphics. This is a conference that teaches young scientists to present their work at bigger venues later, and with your support, it's now the second year we've been able to sponsor them, which warms my heart. This is why every episode ends with, you know the drill… Thanks for watching and for your generous support, and I'll see you next time!
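The "overwriting" trick described above can also be sketched: during generation, the chosen hidden unit is clamped to a fixed value after every step, biasing everything the model produces next. Again, the weights and unit index are random stand-ins for illustration, not the paper's trained model, so the output here is meaningless bytes rather than coherent reviews.

```python
# Sketch: steering generation by clamping one hidden unit each step.
# With the paper's trained model, clamping the sentiment neuron to a
# positive or negative value yields positive or negative reviews.
import numpy as np

rng = np.random.default_rng(1)
HIDDEN, VOCAB = 32, 256
SENTIMENT_UNIT = 5  # hypothetical index
W_xh = rng.standard_normal((VOCAB, HIDDEN)) * 0.1
W_hh = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_hy = rng.standard_normal((HIDDEN, VOCAB)) * 0.1  # hidden -> next-byte logits

def generate(n_chars, sentiment=None, seed_byte=65):
    """Greedy byte-level generation; if `sentiment` is set, the unit is
    clamped to that value before choosing each next byte."""
    h = np.zeros(HIDDEN)
    byte, out = seed_byte, []
    for _ in range(n_chars):
        h = np.tanh(W_xh[byte] + h @ W_hh)
        if sentiment is not None:
            h[SENTIMENT_UNIT] = sentiment    # the overwrite: force +1 or -1
        byte = int(np.argmax(h @ W_hy))      # greedy next-byte choice
        out.append(byte)
    return bytes(out)

positive = generate(20, sentiment=+1.0)
negative = generate(20, sentiment=-1.0)
```

Because the clamp is applied at every step, the intervention compounds through the recurrence: the forced unit influences the whole hidden state, not just one output.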

---
*Source: https://ekstraktznaniy.ru/video/14281*