# DeepMind's AI Learns Superhuman Relational Reasoning | Two Minute Papers #168

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=vzg5Qe0pTKk
- **Date:** 06.07.2017
- **Duration:** 3:53
- **Views:** 100,348
- **Source:** https://ekstraktznaniy.ru/video/14631

## Description

The paper "A simple neural network module for relational reasoning" is available here:
https://arxiv.org/abs/1706.01427

Details on our Patreon page:
https://www.patreon.com/TwoMinutePapers

More on Long Short-Term Memory:
Recurrent Neural Network Writes Music and Shakespeare Novels - https://www.youtube.com/watch?v=Jkkjy7dVdaY
Recurrent Neural Network Writes Sentences About Images - https://www.youtube.com/watch?v=e-WB4lfg30M

Two Minute Papers Merch:
US: http://twominutepapers.com/
EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/

WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Sunil Kim, VR Wizard.
https://www.patreon.com/TwoMinutePapers

Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)
Artist: http://audionautix.c

## Transcript

### Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is from the Google DeepMind guys and is about teaching neural networks to be capable of relational reasoning. This means that we can present the algorithm with an image and ask it relatively complex relational questions. For instance, if we show it this image and ask what the color of the object closest to the blue object is, it would answer red.

This is a particularly difficult problem because all the algorithm has access to is a bunch of pixels. In computer code, it is near impossible to mathematically express that, in an image, something is below or next to something else, especially in three-dimensional scenes. Beyond a list of colors, this requires a cognitive understanding of the entirety of the image. This is something that we humans are amazingly good at, but computer algorithms are dreadful at this type of work. This work almost feels like teaching common sense to a learning algorithm.

This is accomplished by augmenting an already existing neural network with a relational network module. It is implemented on top of a recurrent neural network that we call long short-term memory, or LSTM, which is able to process sequences of information, for instance an input sentence. The more seasoned Fellow Scholars know that we have talked about LSTMs in earlier episodes, and of course, as always, the video description contains these episodes for your enjoyment. Make sure to have a look. You'll love it.

As you can see in this result, this relational reasoning works for three-dimensional scenes as well. The aggregated results in the paper show that this method is not only leaps and bounds beyond the capabilities of already existing algorithms but, and now hold on to your papers, in many cases it also shows superhuman performance. I love seeing these charts in machine learning papers where several learning algorithms and humans are benchmarked on the same tasks.
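The relational network module mentioned above, as defined in the linked paper, scores every pair of object representations with one small network g_θ, sums those pairwise relations, and feeds the sum to a second network f_φ that produces the answer. The NumPy sketch below is only an illustration of that idea: the tiny `mlp` helper, the weight shapes, and the toy dimensions are my assumptions, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    # Tiny stand-in MLP: ReLU hidden layers defined by a list of weight matrices.
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)
    return x @ weights[-1]

def relation_network(objects, g_weights, f_weights):
    """RN(O) = f_phi( sum over i,j of g_theta(o_i, o_j) ).

    `objects` is an (n, d) array of object representations (e.g. cells of a
    CNN feature map); g_theta scores every ordered pair, and f_phi maps the
    summed relations to answer logits.
    """
    n, _ = objects.shape
    # Build all ordered pairs (o_i, o_j) as concatenated vectors: (n*n, 2d).
    pairs = np.concatenate(
        [np.repeat(objects, n, axis=0), np.tile(objects, (n, 1))], axis=1
    )
    relations = mlp(pairs, g_weights)   # g_theta applied to every pair
    summed = relations.sum(axis=0)      # order-invariant aggregation
    return mlp(summed, f_weights)       # f_phi -> answer logits

# Toy shapes: 4 objects of size 8, hidden size 16, 10 answer classes.
n, d, h, c = 4, 8, 16, 10
g_w = [rng.standard_normal((2 * d, h)) * 0.1, rng.standard_normal((h, h)) * 0.1]
f_w = [rng.standard_normal((h, h)) * 0.1, rng.standard_normal((h, c)) * 0.1]
objs = rng.standard_normal((n, d))
logits = relation_network(objs, g_w, f_w)
print(logits.shape)  # (10,)
```

Because the sum runs over all ordered pairs, the output is invariant to the order in which objects are listed, which is what lets the module treat "is next to" or "is closest to" as relations between objects rather than properties of pixel positions.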
This paper had barely been published, and there is already a first unofficial public implementation, and two research papers have already referenced it. This is such a great testament to the incredible pace of machine learning research these days. To say that it is competitive would be a huge understatement.

Achieving high-quality results in relational reasoning is an important cornerstone for achieving general intelligence. And even though there is still much more to do, today is one of those days when we can feel that we are a part of the future. The failure cases are also reported in the paper and are definitely worthy of your time and attention.

When I asked for permission to cover this paper in the series, all three scientists from DeepMind happily answered yes within 30 minutes. That's unbelievable. Thanks, guys. Also, some of these questions sound like ones that we would get in the easier part of an IQ test. I wouldn't be very surprised to see a learning algorithm complete a full IQ test with flying colors in the near future.

If you enjoyed this episode and you feel that eight of these videos a month is worth a dollar, please consider supporting us on Patreon. This way, we can make better videos for your enjoyment. We have recently reached a new milestone, which means that part of these funds will be used to empower research projects. Details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
