# OpenAI’s New AI: Video Game Addict No More! 🤖

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=263vx1g52eM
- **Date:** 30.11.2022
- **Duration:** 6:21
- **Views:** 285,380

## Description

❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 

📝 The paper "Efficient Training of Language Models to Fill in the Middle" is available here:
https://arxiv.org/abs/2207.14255

Code for benchmarking the model is available here (note: this is not the full source code):
https://github.com/openai/human-eval-infilling

I think you can try it with GPT-3’s text-davinci-003 model: https://beta.openai.com/docs/models/gpt-3

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background image credit: https://pixabay.com/images/id-1106252/
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=263vx1g52eM) Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to use OpenAI’s new language model AI to get it to write our homework essays for us, see how well it does that, and get this, we will even ask it to write some computer code for us. And it gets even better, it does all this for free! But not in the way you think. You will find out in a moment why.

Now, let’s see. First, we write, “When I was young, I only liked to p”. And then, let’s say something comes here between, and then, “and that’s how first I got interested in AI research.” Now we ask the AI to fill in the middle part. Let’s see what happens together.

This indeed filled in the middle. It says, when I was young, I only liked to play video games. I would play sometimes more than 13 hours per day. The rush, novelty, and variety were beyond anything real life could offer. (Whoa, hold it right there, little AI!) It continues: I loved the challenge and I excelled at it. I would often skip classes and go to and that’s how first I got interested in AI research.

But there is a problem. Did you notice the problem? Yes, the video game addiction is kind of a problem. But that’s not what I meant. It gets worse. Grammatically, it kind of connects to the suffix, the text that comes after, though it is not nearly perfect. But perhaps the worst thing is that there is no logical connection between them. It does not explain how we ended up becoming an AI researcher. Although Demis Hassabis, the CEO of the DeepMind AI lab, often says in interviews that he played a ton of video games back in his day, so this AI may be up to something. But this new method can do way better. Look.

“When I was young, I only liked to play video games. Over time, I started thinking if it’d be possible to make bots to play better than any human can ever play these games.
I eventually decided I liked working on the latter more than playing the games themselves and that’s how first I got interested in AI research.” Now that’s a scholarly completion! Excellent. Loving it.

And it has more absolutely magical capabilities. For instance, here, we can start diving into our narrative immediately by talking about Wikipedia, and, look. If we give it some space before it, the AI will recognize that telling the audience what Wikipedia is is necessary, and it does it. Really cool!

And if we feel even lazier, we can write the start of the story and the end of the story, and now, off you go, little AI, you do the hard work. Let’s read it. The commercial diver finally thought he’d snagged a big catch when he saw something white. But then he quickly realized it wasn’t a fish; he was wrangling an alligator. That is a great story. Bravo. I made sure to look it up: white alligators apparently exist.

And it does not stop at writing essays. Now hold on to your papers, because it can also write computer code for us. Let’s see.

It can finish our code that sorts a list, computes the average of numbers, or even performs prime factorization. But how does it do all this? Is this some crazy mind-reading AI? Well, not quite. It is very good, but it does not read our thoughts. It reads our comments. These are not part of the computer code; these comments are a crutch to explain what the next block of code is about to do. And with that, the AI knows how it can help us out. This feels like magic. Just imagine how much easier this makes a bunch of coding work for us. And what an incredible speedup in productivity we can expect from techniques like this. Mind-blowing. What a time to be alive!

But wait, of course, not even this technique is perfect. Sometimes it just does not know when to stop. And sometimes, it just keeps repeating itself. Alright, we get it, little AI. Thank you!
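To illustrate the comment-driven completions mentioned above, here is what finished code for the three examples (sorting a list, averaging numbers, prime factorization) might look like in Python. These bodies are ordinary hand-written implementations sketched for illustration, not the model’s actual output; the comment above each function is the kind of hint the AI reads to decide what to generate.

```python
# Sort a list of numbers in ascending order.
def sort_list(numbers):
    return sorted(numbers)

# Compute the average of a list of numbers.
def average(numbers):
    return sum(numbers) / len(numbers)

# Return the prime factorization of n as a list of prime factors.
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```

In each case, the comment alone is enough context for a code model to propose a plausible body; the code itself never references the comment at runtime.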
And there is one more feature that I loved when reading this paper. You see, normally, you would need to train a separate neural network for this fill-in-the-middle capability. However, the coolest thing here is that it can generate continuous text from left to right, just like the good old GPT-3 AI and other variants. And, at the same time,

### [5:00](https://www.youtube.com/watch?v=263vx1g52eM&t=300s) Segment 2 (05:00 - 06:00)

this can also fill in the middle without being explicitly trained to do that. Oh yes! That’s right: we get this amazing capability essentially for free.

So, what do you think? Does this get your mind going? Let me know in the comments below, and let the experiments begin!

Thanks for watching and for your generous support, and I'll see you next time!
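The “for free” part comes from how the paper rearranges training data: the middle span of a document is moved to the end and marked with sentinel tokens, so an ordinary left-to-right model learns infilling as a by-product of normal next-token training. A minimal sketch of that rearrangement follows; the sentinel strings here are illustrative placeholders, not the model’s actual special tokens.

```python
# Illustrative sentinel markers; the real model uses dedicated special tokens.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def to_fim_format(prefix: str, middle: str, suffix: str) -> str:
    """Rearrange a (prefix, middle, suffix) split into the
    prefix-suffix-middle training order: the model sees the prefix
    and suffix first, then learns to emit the middle after <MID>."""
    return f"{PRE}{prefix}{SUF}{suffix}{MID}{middle}"

def build_infill_prompt(prefix: str, suffix: str) -> str:
    """At inference time the prompt stops after <MID>: whatever the
    model generates next is taken as the infilled middle."""
    return f"{PRE}{prefix}{SUF}{suffix}{MID}"

prompt = build_infill_prompt(
    "When I was young, I only liked to p",
    "and that's how first I got interested in AI research.",
)
```

Because the rearranged sequence is still just one long string, the same left-to-right model handles both ordinary continuation and infilling, which is why no separate network is needed.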

---
*Source: https://ekstraktznaniy.ru/video/13374*