NeuroSAT: An AI That Learned Solving Logic Problems
4:59

Two Minute Papers · 30.03.2019 · 74,531 views · 2,953 likes

Video description
❤️ This video has been kindly supported by my friends at Arm Research. Check them out here! - http://bit.ly/2TqOWAu

📝 The paper "Learning a SAT Solver from Single-Bit Supervision" is available here: https://arxiv.org/abs/1802.03685

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers

Image and article sources:
SAT: https://www.geeksforgeeks.org/2-satisfiability-2-sat-problem/ - Source: GeeksforGeeks
NP-Completeness: https://en.wikipedia.org/wiki/List_of_NP-complete_problems
Neural network image source: https://en.wikipedia.org/wiki/File:Neural_network_example.svg

Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

Table of contents (1 segment)

Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Please meet NeuroSAT. The name of this technique tells us what it is about: the neuro part means that it is a neural network-based learning method, and the SAT part means that it is able to solve satisfiability problems. This is a family of problems where we are given a logic formula, and we have to decide whether its variables can be chosen in a way such that the expression comes out true. That, of course, sounds quite nebulous, so let's have a look at a simple example. This formula says that F is true if A is true and, at the same time, B is false. So if we choose A to be true and B to be false, the expression comes out true, or in other words, the problem is satisfied.

Getting a good solution for SAT is already great for solving many problems involving logic. However, the more interesting part is that it can help us solve an enormous set of other problems, for instance, ones that involve graphs describing people in social networks, and many others that you see here. This can be done by performing something that mathematicians like to call a polynomial-time reduction, or Karp reduction, which means that many other problems that seem completely different can be converted into a SAT problem. In short, if you can solve SAT well, you can solve all of these problems well. This is one of the amazing revelations I learned about during my mathematics studies. The only problem is that when trying to solve big and complex SAT problems, we often cannot do much better than random guessing, which, for some of the nastiest ones, takes so long that it will practically never finish.

And get this: interestingly, this work presents us with a neural network that is able to solve problems of this form, and not just tiny baby problems like this one, but much bigger ones. And this really shouldn't be possible. Here's why. To train a neural network, we require training data.
The input is a problem definition and the output is whether this problem is satisfiable. And we can stop right here, because here lies our problem. This doesn't really make any sense, because we just said that it is difficult to solve big SAT problems. And here comes the catch! This neural network learns from SAT problems that are small enough to be solved by traditional handcrafted methods. We can create arbitrarily many training examples with these solvers, albeit all small ones. And that's not all: there are three key factors here that make this technique really work.

One, it learns from only single-bit supervision. This means that the output we talked about is only yes or no; the network is never shown the solution itself. That's all the algorithm learns from.

Two, when we request a solution from the neural network, it not only tells us the same binary yes-no answer that was used to teach it, but it can go beyond that, and when the problem is satisfiable, it will almost always provide us with the exact solution. So it is not only able to tell us whether the problem can be solved; it almost always provides a possible solution as well. That is indeed remarkable. This image may be familiar from the thumbnail, and here you can see how the neural network's inner representation of these variables changes over time as it processes a satisfiable and an unsatisfiable problem, and how it comes to its own conclusions.

And three, when we ask the neural network for a solution, it is able to defeat problems that are larger and more difficult than the ones it was trained on. So this means that we train it on simple problems that we can solve ourselves, and using these as training data, it will be able to solve much harder problems that we can't solve ourselves. This is crucial, because otherwise, this neural network would only be as good as the handcrafted algorithm used to train it, which, in other words, would not be useful at all. Isn't this amazing?
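The training recipe described above — small formulas labeled only "satisfiable" or "unsatisfiable" by a conventional solver — can be illustrated with a minimal sketch. The `brute_force_sat` checker, the `label` helper, and the example formulas below are hypothetical illustrations of the setup, not code from the paper (NeuroSAT itself is a message-passing neural network operating on the formula's graph structure):

```python
from itertools import product

def brute_force_sat(formula, variables):
    """Try every True/False assignment; return one that satisfies
    the formula, or None if the formula is unsatisfiable.
    Feasible only for small formulas -- exactly the kind used for training."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# The toy formula from the video: F = A AND (NOT B)
f = lambda v: v["A"] and not v["B"]
print(brute_force_sat(f, ["A", "B"]))   # {'A': True, 'B': False}

# An unsatisfiable formula: A AND (NOT A)
g = lambda v: v["A"] and not v["A"]
print(brute_force_sat(g, ["A"]))        # None

# Single-bit supervision: the training label is only the yes/no answer;
# the satisfying assignment itself is never shown to the network.
def label(formula, variables):
    return brute_force_sat(formula, variables) is not None

training_example = (f, label(f, ["A", "B"]))  # (problem, True)
```

Note that the solver's full answer (the assignment) is deliberately discarded when building the training pair: that single bit is all the supervision NeuroSAT gets, which is what makes its ability to recover full solutions at test time so surprising.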
I will note that there are handcrafted algorithms that are able to match, and often outperform, NeuroSAT. However, those took decades of research work to invent, whereas this is a learning-based technique that looks at nothing more than the problem definition and whether it is satisfiable, and it is able to come up with a damn good algorithm by itself. What a time to be alive. This video has been kindly supported by my friends at Arm Research, make sure to check them out through the link in the video description. Thanks for watching and for your generous support, and I'll see you next time!
