OpenAI GPT-4 - The Future Is Here!
Duration: 8:05


Two Minute Papers · 24.03.2023 · 213,929 views · 10,098 likes


Video description
❤️ Check out Anyscale and try it for free here: https://www.anyscale.com/papers

📝 The paper "GPT-4 Technical Report" is available here: https://cdn.openai.com/papers/gpt-4.pdf
More here: https://openai.com/product/gpt-4
Try it out (note: the free version has the older GPT-3 as of now): https://chat.openai.com/chat

My latest paper on simulations that look almost like reality is available for free here: https://rdcu.be/cWPfD
Or this is the orig. Nature Physics link with clickable citations: https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Edward Unthank, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

#openai #gpt4

Table of contents (2 segments)

Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Finally, OpenAI's legendary language model AI, GPT-4, which now also powers their ChatGPT assistant, is here. And my goodness, they promise a lot of goodies: advanced reasoning, complex instructions, more creativity. This is going to be an incredible journey, so hold on to your papers right now, Fellow Scholars. And, get this, it can also accept image inputs. Let's put this one to work immediately. Dear GPT-4, imagine that we have these ingredients. What can we make from them? Note that the input is no longer text, but a photo. That is excellent, and don't forget, if one of these caught your eye, you can also ask follow-up questions here on how exactly to make them.

Now, earlier we looked at some preliminary results of an IQ test, where it scored 147. However, I noted immediately that we should be good Fellow Scholars, take this with a little bit of skepticism, and wait for a more formal examination. And, yes, here it is! It finally happened. GPT-4 gives us human-level performance on a variety of academic benchmarks. And they report something absolutely incredible. Look. This is what its previous incarnation in ChatGPT was capable of doing: 10th percentile on the bar exam. And now, maybe 30th? 50th? Nope, 90th percentile. This means that the new one is within the top 10 percent of test takers. That kind of improvement in just one paper is outstanding.

And, I know you are looking, and the answer is, of course, we are going to talk about this too in a moment. And this is where being able to process visual information means an incredible breakthrough. It doesn't just mean that it can help us with the ingredients, or can explain memes. Although that is quite helpful. But it gets better. For instance, look at this one. Yummy. On the AMC mathematical exam, GPT-4, marked with light green, performs so much better than the previous version. However, here is where the magic happens.
Look, this exam contains some light visual information, and when we add the capability to process this to GPT-4, it does so much better. It is rapidly improving to match human-level results. We are one step closer to human-level intelligence. This work is history in the making.

And I was also very excited about this: the USA Biology Olympiad exam. This needs next-level capabilities, because it requires visually inspecting and evaluating electrocardiograms, and in general, taking in a ton of visual information and giving correct answers. So how correct was it? What? Are you kidding? 99th percentile. Better than almost all humans. Wow. However, wait a minute. I see light green, not dark green. What does that mean? That means it is the non-vision version of GPT-4. But this exam has plenty of images. So, what happened? Well, the AI got a little help: the images had been transcribed by a human. I would love to see those transcriptions; I bet it was not an easy job. So, it seemingly cannot perform this level of visual inspection yet, but I am certain that we will see something like this one or two more papers down the line. Perhaps with GPT-5. If that sounds interesting, consider subscribing and hitting the bell icon to not miss it when the next paper drops.

Here is also an example that I absolutely loved. Little AI, explain the plot of Cinderella, where each word has to begin with the next letter in the alphabet from A to Z. Wow. This one is tough. I have to admit, I am not sure if most people would be able to solve this in a satisfactory manner. And… look at that. Flying colors. My goodness. And this is one more really cool example from their official demo. What?
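The A-to-Z Cinderella constraint is easy to verify mechanically. Here is a small Python sketch (the function name and demo input are my own, not from the video) that checks whether an answer has exactly 26 words whose first letters run through the alphabet in order:

```python
import string

def follows_alphabet(answer: str) -> bool:
    """True if the answer has 26 words whose first letters run A..Z in order."""
    words = answer.split()
    if len(words) != 26:
        return False
    return all(word.lower().startswith(letter)
               for word, letter in zip(words, string.ascii_lowercase))

# A trivially valid 26-word sequence, just to exercise the checker:
demo = " ".join(string.ascii_uppercase)  # "A B C ... Z"
print(follows_alphabet(demo))        # True
print(follows_alphabet("A banana"))  # False: only two words
```

A real grader would also want to check that the words form a coherent plot summary, which is exactly the part that is hard for a program and impressive for GPT-4.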

Segment 2 (05:00 - 08:00)

Are you saying that this crude drawing of a website comes in, with some jokes and buttons, and it not only creates this website with proper JavaScript, but also understands that it needs to fill in the jokes, and does that too? Loving it.

So, when can we try it? Well, OpenAI deployed GPT-4 to their ChatGPT Plus subscribers, and they also made a waitlist for API access. Now, I was lucky enough to have access to it, thank you so much, and it already helped me remember my discussion with a geology professor from a few years ago. There were some details I did not remember, and now I do. Thank you!

So, who is using it? The answer is that everyone is using it. It can help us organize a huge knowledge base, or it can help us learn a new language, and so much more. And this is tech transfer, in other words, from paper to product, not in decades, but in a matter of days, and in some cases, even a matter of hours. What a time to be alive!

And if you have been watching Two Minute Papers, you have seen the whole history of these works, from GPT-2, to GPT-3, and now, GPT-4. You saw history unfold right before your eyes. Thank you so much for being on this incredible journey with me. Now, I will also note that the paper is really detailed in terms of the evaluation of the results, but a little light on the inner workings of the algorithm itself. I hope we get to know more soon. Thanks for watching and for your generous support, and I'll see you next time!
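As a rough sketch of what the API access mentioned above looks like: the snippet below assumes the official `openai` Python package and an `OPENAI_API_KEY`. The helper only assembles a chat-completion payload for the `gpt-4` model, so it runs without credentials; the commented-out call shows where the actual network request would go.

```python
def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble the messages payload for a chat completion call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# With the `openai` package installed and an API key set, the request
# would be sent roughly like this (untested sketch):
# import openai
# response = openai.ChatCompletion.create(
#     **build_chat_request("Explain the plot of Cinderella."))
# print(response["choices"][0]["message"]["content"])
```

Separating payload construction from the network call keeps the example testable offline and mirrors how one would unit-test such a client.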
