# OpenAI’s Sora Is Here, But...

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=1BDqfPEhsCA
- **Date:** 10.12.2024
- **Duration:** 6:31
- **Views:** 91,521

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

Sora is available here (for select countries, not in EU currently):
https://openai.com/sora/
https://sora.com/

Earlier video looping paper: https://sites.cc.gatech.edu/cpl/projects/videotexture/SIGGRAPH2000/index.htm

📝 My paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

My research: https://cg.tuwien.ac.at/~zsolnai/
X/Twitter: https://twitter.com/twominutepapers
Thumbnail design: Felícia Zsolnai-Fehér - http://felicia.hu

## Contents

### [0:00](https://www.youtube.com/watch?v=1BDqfPEhsCA) Segment 1 (00:00 - 05:00)

OpenAI’s Sora is finally here. This is an amazing AI that takes a small piece of text from you, and yes, creates an incredible video. The first results were showcased earlier this year, but I am a little worried. I am worried because a lot of time has passed since then, many competitors have released their models in the meantime, and I am not sure whether Sora will still be good by the time it is released.

You see, Luma Labs has such an AI: cheap, but okay quality, not super detailed. Runway is a bit more expensive, with much higher quality. Then there is Kling, and so on. The competition is strong, and you can use them right now. So, by the time Sora is released, can it really compete? Well, these results are already an indication, but let’s have a look together.

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér.

Well, my first favorite is, of course, the best of the good boys, Super Pup. In the world of runaway inflation, the world needs you, Super Pup! And from now on, every single video that you will see here will be AI generated.

Sora can also create crazy things like a dragon made of shampoo bubbles. And as expected, it is extraordinarily good at creating and animating humans. I had the honor of trying Sora a bit more than half a year ago, and I was stunned by the results I saw, and I am so happy that I can finally share this beautiful moment with you, Fellow Scholars.

You can now make a family of extraordinarily civilized bears eating sushi. And interestingly, look at this carefully. Have you noticed? We talked a great deal about how good it is at object permanence: things that disappear and appear again are typically the same. However, here, some golden retrievers fail some of the object permanence tests. I will slow this down. Have you found any appearing or disappearing dogs? Let me know in the comments below, but I’ll give you a really good one. Look at this good boy.
And then, it becomes the rear end of a different good boy, or maybe the same, I don’t know. Crazy.

But it does not end there; in fact, this is just the start. Now comes the best part, where we look at four amazing features, and one not-so-amazing feature that it also has.

One, now hold on to your papers, Fellow Scholars, for remix. If you are happy with some aspects of your result and you want to keep it, but only replace the doors, and they have to open to the same scene? Not a problem. Or if you wish to remix the interior, that is also possible.

Two, re-cut. You don’t have to accept the whole video as a result: just choose the parts that you like, put them on a timeline, and fill in what happens before the good part, and maybe after it too.

Three, blend. Throw two videos together, and you’ll see that snowflakes and flower petals go really well together.

Four, style presets. Here is the original video of a nature documentary, also AI-made, but if you would like to re-create it in a different style: I love the papercraft version, the archival version has a nice analog warmth to it, and so on. If something caught your eye, you can save the style to reuse later, thereby enabling you to create a movie that is not just a bunch of footage cobbled together, but something that follows a nice, overarching design. Fantastic.

And plus one, is this amazing or not so amazing? Looping videos. I would say this is not so amazing, as they have these sudden jumps within the video, and if there is such a thing, you might as well copy-paste the video many times and get a similarly jumpy result. This is a bit better, but as a research scientist, I can’t resist mentioning a research work that was done almost 25 years ago and could make non-looping videos loop. Stunning.

Overall, I love the fact that the Sora team did superb work on the research side, while at the same time not neglecting the engineering aspect of the product.
Very few companies can do the two at the same time. That is excellent.

So, where can you use it, and can it compete? Well, it is now part of the ChatGPT Plus and Pro subscriptions. It is pricey, there are heavy limits on usage, and I think they even had to disable signups for a while because so many of you Fellow Scholars were interested. If you ever listen to an entrepreneur talk about product-market fit,

### [5:00](https://www.youtube.com/watch?v=1BDqfPEhsCA&t=300s) Segment 2 (05:00 - 06:00)

this is it. They literally can’t get enough capacity to keep up with the demand.

Yes, Super Pup can compete, and yes, it’s pricey, but I think that less than a year from now you are going to get this kind of quality for free, on your own machine, with local models. Possibly even better, and possibly even earlier. What a time to be alive!

Not so long ago, we were only able to create images using AI, and not even very good ones. And now, we got video, which is so much harder. Every frame has to relate to the previous one; just one small mistake, and the illusion is gone. And this is just Sora version 1. This is the worst version, if you will. Imagine what the third version will look like. We will all be able to become film directors. My goodness.

So, what do you think? What would you Fellow Scholars use this for? Let me know in the comments below.
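As an aside for the curious: the ~25-year-old looping research mentioned above (the Video Textures paper linked in the description) works, at its core, by finding two frames in a clip that look nearly identical, so the video can jump from the later frame back to the earlier one without a visible seam. The following is only a toy sketch of that core idea, not the paper's full probabilistic method; the synthetic `frames` data and the `find_loop_cut` helper are illustrative inventions for this example.

```python
import numpy as np

def find_loop_cut(frames, min_len=2):
    """Toy sketch of the Video Textures idea (Schödl et al., SIGGRAPH 2000):
    find the pair of most-similar frames (i, j), at least min_len apart, so
    that playing frames i..j-1 and then jumping from j back to i yields a
    near-seamless loop."""
    n = len(frames)
    best, best_dist = None, np.inf
    for i in range(n):
        for j in range(i + min_len, n):
            # L2 distance between the two frames; a small distance means the
            # cut from frame j back to frame i is visually unobtrusive
            d = np.linalg.norm(frames[i].astype(float) - frames[j].astype(float))
            if d < best_dist:
                best, best_dist = (i, j), d
    return best, best_dist

# Synthetic example: a tiny "video" whose pixel values drift away and come
# back, so the first and last frames are identical and loop perfectly
frames = [np.full((4, 4), abs(4 - t), dtype=np.uint8) for t in range(9)]
cut, dist = find_loop_cut(frames)
print(cut, dist)  # best cut is frame 8 -> frame 0, with zero visual distance
```

The full paper additionally blends frames around the cut and chooses transitions probabilistically, which is what avoids the "sudden jumps" criticized in the video.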

---
*Source: https://ekstraktznaniy.ru/video/17220*