# DeepMind's New AI: As Smart As An Engineer... Kind Of! 🤯

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=x_cxDgR1x-c
- **Date:** 17.03.2022
- **Duration:** 6:58
- **Views:** 295,019

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

📝 The paper "Competition-Level Code Generation with AlphaCode" is available here:
https://alphacode.deepmind.com/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

#deepmind #alphacode

## Contents

### [0:00](https://www.youtube.com/watch?v=x_cxDgR1x-c) Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. You will not believe the things you will see in this episode, I promise. Earlier we saw that OpenAI's GPT-3 and Codex AI techniques can be used to solve grade-school-level math brain teasers, and they were then improved to be able to solve university-level math questions. Note that this technique required additional help to do that. And a follow-up work could even take a crack at roughly a third of mathematical-olympiad-level math problems.

And now, let's see what the excellent scientists at DeepMind have been up to in the meantime. Check this out - this is AlphaCode. What is that? Well, in college, computer scientists learn how to program computers. Now, DeepMind decided to instead teach computers how to program themselves. Wow.

Now, here you see an absolute miracle in the making. Look. Here is the description of the problem, and here is the solution. Well, I hear you saying, Károly, there is no solution here. And you are indeed right, just give it a second. Yes, that's right! Now, hold on to your papers, and marvel at how this AI is coding up the solution right in front of our eyes!

But that's nothing, we can also ask what the neural network is looking at. Check this out! It is peeping at different, important parts of the problem statement, and proceeds to write the solution taking these into consideration. You see that it also looks at different parts of the previous code that it had written to make sure that the new additions are consistent with those. Wow. That is absolutely amazing. It almost feels like watching a human solve this problem. Well, it solved this problem correctly. Unbelievable.

So, how good is it? Well, it can solve about 34% of the problems in this dataset. What does that mean? Is that good or bad?
Now, if you have been holding on to your papers so far, squeeze that paper, because it means that it roughly matches the expertise level of the average human competitor. Let's stop there for a moment and think about that. An AI that understands an English description mixed in with mathematical notation, and codes up a solution as well as the average human competitor, at least on the tasks given in this dataset.

So, what is this wizardry, and what is the key? Well, let's pop the hood and have a look together! Oh yes! One of the keys here is that it generates a ton of candidate programs and is able to filter them down to just a few promising solutions. And it can do this quickly and accurately. This is huge. Why? Because this means that the AI is able to have a quick look at a computer program and tell with pretty high accuracy whether it will solve the given task or not. It has an intuition of sorts, if you will.

Now, interestingly, it also uses 41 billion parameters. 41 billion is tiny compared to OpenAI's GPT-3, which has 175 billion. This means that currently, AlphaCode uses a more compact neural network, and it is possible that the number of parameters can be increased here to further improve the results. If we look at DeepMind's track record of improving on these ideas, I have no doubt that however amazing these results seem now, we have really seen nothing yet.

And wait, there is more - this is where I completely lost my papers: in the case that you see here, it even learned to invent algorithms. A simple one, mind you - this is DFS, a search algorithm that is taught in first-year undergrad computer science courses - but that does not matter. What matters is that this is an AI that can finally invent new things. Wow. A Google engineer, who is also a world-class competitive programmer, was also asked to have a look at these solutions, and he was quite happy with the results.
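The generate-and-filter idea described above can be sketched in a few lines of Python: run each candidate program against the problem's example input/output pairs and keep only the ones that pass. This is just an illustrative sketch of the general technique, not AlphaCode's actual pipeline - the function names and the use of `subprocess` here are my own assumptions.

```python
import subprocess
import sys


def run_candidate(source: str, stdin: str) -> str:
    """Execute a candidate Python program with the given stdin; return its stdout."""
    result = subprocess.run(
        [sys.executable, "-c", source],
        input=stdin, capture_output=True, text=True, timeout=2,
    )
    return result.stdout.strip()


def filter_candidates(candidates, examples):
    """Keep only the candidates that reproduce every example output."""
    survivors = []
    for src in candidates:
        try:
            if all(run_candidate(src, inp) == out for inp, out in examples):
                survivors.append(src)
        except Exception:
            continue  # a crash or timeout disqualifies the candidate
    return survivors
```

With, say, two candidates for a "double the number" task and the examples `[("3", "6"), ("5", "10")]`, only the candidate that actually doubles its input survives the filter.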
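For reference, depth-first search - the first-year algorithm mentioned above - fits in a handful of lines. This is the standard textbook iterative version over an adjacency-list graph, not the code AlphaCode itself generated:

```python
def dfs(graph, start):
    """Return the set of vertices reachable from `start` via depth-first search."""
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        # Push neighbors; vertices absent from the dict have no outgoing edges.
        stack.extend(graph.get(node, []))
    return visited
```

For example, with `graph = {1: [2, 3], 2: [4], 5: [6]}`, `dfs(graph, 1)` returns `{1, 2, 3, 4}`.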

### [5:00](https://www.youtube.com/watch?v=x_cxDgR1x-c&t=300s) Segment 2 (05:00 - 06:00)

He said of one of the solutions that "it looks like it could easily be written by a human, very impressive!" Now, clearly, it is not perfect, and thus, some criticisms were also voiced. For instance, sometimes it forgets about variables that remain unused. Even that is very humanlike, I must say. But do not think of this paper as the end of something. This is but a stepping stone towards something much greater. And I will be honest - I can't even imagine what we will have just a couple more papers down the line. What a time to be alive! So, what would you use this for? What do you think will happen a couple of papers down the line? I'd love to know - let me know in the comments below. And if you are also excited to hear about potential follow-up papers to this, make sure to subscribe and hit the bell icon. You definitely do not want to miss it when it comes! Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/13627*