users are making real masterpieces. Yes, we have some interesting participants here who are practically making whole films already, all kinds of social-impact clips, and it's wildly cool. So, Runway with its full feature set. And by the way, I want to point out that yesterday they released the ability to generate portrait videos, so, well, for Instagram, let's say, although everyone was waiting...the story is, essentially, I stay as I am, just as you see me now, only I become glass. The only problem with Runway's video-to-video model is that it changes faces. That is, if, for example, I wanted to, say, give myself makeup, my face would come out different. That's a problem
Transcript search
know what world models are, which is essentially how these AI systems manage to think about the world, you might want to take a look at this video by Runway, because they did a very good job in this video of talking about how language models are essentially going to need a world model in their head to truly understand...does this all mean? It means that pretty soon, general world models will allow us to simulate worlds that more closely reflect our own. That video I included was a small section of the Runway general world models video, and I think it goes to show exactly where we are headed, and this video was released a couple of months
frames feature that morphs one image into another, creating some wild transitions. Super innovative and fun to play with. Generating videos isn't the only game in town; you can edit them too. My top choice for that is Runway ML. This tool is amazing at syncing lips to audio...removing backgrounds, adding slow motion, color grading, you name it. You can remove objects, blur faces, add depth-of-field effects. It's basically a giant playground for video editing. Runway won't generate a full video for you from scratch, though, and nid is your best bet in this case. It'll take one prompt and make an entire
going to skip. Both because lift physics is something people really like to argue about, and this isn't a physics video. Yeah, I'm recording a runway video right now. Hope you don't mind. But the summary is planes take off and land into the wind because it lets them use less runway and makes for safer...little wind sock and direct the pilot to the runway that paralleled the sock and thus the wind. But as aviation grew, so did airplanes and thus their runways, making the triangle trick impractical for big urban internationals. But the simple sock still solved it. See, if you record when and which way the wind sock blows and with what
with Gemini, their entire thesis was, uh, it's purely multimodal, right? And it's using Transformers, but it's purely multimodal; it can do everything: text, audio, video. Runway came out with a new video, I don't know if you saw it recently, they came up with a trailer. Yeah, the world model, General World Model, right, which
quick look at multiple stories rather than actually testing them or discussing them further. We have a very colorful docket here. Starting with Runway's game worlds. Runway, the video generation company, yet again looking for another way to use their models for something productive. In this case, it's the image generator. And basically, it allows you to create
this suite, note once again - text to video is coming later, you can sign up for free on Runway’s website. The link is in the video description. I will note that Runway is a previous sponsor of Two Minute Papers, and had nothing to do with this video apart from letting me try it and answering my questions. This
well too. All the Adobe suites are something we use, and I started with image because I know you speak to a lot of designers. For videos, we use Runway; we like Runway a lot. It's still early; we think it could go somewhere else. And we use some of the avatar-generation ones like
just send it to three different nodes: one of them for Luma Dream Machine image-to-video, one of them for Kling image-to-video, and one of them for Runway Gen-3 Alpha image-to-video. Now, we don't have MiniMax here yet, because that just released; it's just a question of time until they release
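The fan-out workflow this snippet describes, one input dispatched to several image-to-video backends at once, can be sketched roughly as below. This is a minimal illustration only: the backend functions are hypothetical stand-ins, not real SDK calls for Luma, Kling, or Runway, and a real node graph would replace each stub with the provider's own API client.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the real provider APIs (Luma Dream Machine,
# Kling, Runway Gen-3 Alpha). Each would normally submit a render job
# and return a video URL or job handle.
def luma_generate(image, prompt):
    return f"luma_video({image}, {prompt})"

def kling_generate(image, prompt):
    return f"kling_video({image}, {prompt})"

def runway_generate(image, prompt):
    return f"runway_video({image}, {prompt})"

BACKENDS = {
    "luma": luma_generate,
    "kling": kling_generate,
    "runway": runway_generate,
}

def fan_out(image, prompt):
    """Send the same image + prompt to every backend concurrently."""
    with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
        futures = {name: pool.submit(fn, image, prompt)
                   for name, fn in BACKENDS.items()}
        return {name: f.result() for name, f in futures.items()}

results = fan_out("frame.png", "slow dolly-in")
```

Running the backends concurrently matters in practice because video generation jobs can each take minutes; dispatching them in parallel means the slowest model, not the sum of all three, sets the wait time.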
know that because, uh, you know, this article, it was basically saying he's fitter than LeBron James. Just... Anyway, if we go to the text-to-video leaderboard, there was Runway Gen-4.5, a new video model, you know, that was surprisingly better than Veo 3. Genuinely surprising. I'm surprised. I said that three times...with colorful stalls and lively characters bartering with vendors. The vibrant colors and energetic scene capture the cultural richness and excitement as the market bustles with life. So, Runway Gen-4.5, at the top left, we can see, looks pretty good. And Veo 3, no audio also. I mean, all of these genuinely do look pretty good. I think
tell the difference between a real video versus an AI-generated video with Runway 4.5. Okay, I am up for the challenge, folks. All right, I'm going to say right off the bat, the right one is AI. What? I already got this wrong. Are you serious? Okay, hang on. That's much slower. That...pretty incredible. What's really nice is that, confirmed by Runway's CEO themselves, this model is going to be getting audio support in the future, which is going to put it right up top there with the likes of the other leading cutting-edge closed-source AI video generators. All right, we're going to stick with the theme
model is capable of doing that thing. Uh, now we are passing text and we are able to generate the video. Yes, uh, that thing is also possible. So we have several models, like OpenAI Sora, right, Runway, uh, and uh, even a couple of models in the Gemini API you will find for text-to-video...thing, uh, as I've written over here. Now, text-to-video, video-to-text, and video-to-video are all possible. So we're talking about text-to-video, guys. So again: Runway, uh, Pika Labs, and OpenAI Sora. Then for video-to-text, again, GPT-4o and the Gemini vision models can perform this task. Now, on the other hand
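The modality pairings this speaker walks through can be summarized in a small lookup table. A minimal sketch follows; the model names are the ones mentioned in the transcript, and the table itself is illustrative, since availability shifts between providers and API versions.

```python
# Modality conversions discussed above, mapped to example models.
# Pairings follow the transcript; check each provider for current support.
VIDEO_MODALITIES = {
    ("text", "video"): ["Runway", "Pika Labs", "OpenAI Sora"],
    ("video", "text"): ["GPT-4o", "Gemini vision models"],
    ("video", "video"): ["Runway (video-to-video)"],
}

def models_for(src, dst):
    """Return example models that convert the src modality to dst."""
    return VIDEO_MODALITIES.get((src, dst), [])
```

For instance, `models_for("text", "video")` lists the text-to-video generators named in the snippet, while an unsupported pairing simply returns an empty list.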
workflow, and that is OpenArt. Instead of juggling multiple subscriptions and learning different interfaces, OpenArt gives you access to all the major AI video models in one place: Veo 3, Kling, Hailuo, Runway's models, they're all there. But it's not just about convenience. Here's what makes it powerful: instant model comparison. You can run the same...expensive, but the results are genuinely cinematic. If you need speed and volume, Hailuo AI for quick content, or Kling for character-focused videos. If you're editing existing footage, Runway Aleph is currently the only game in town for AI-powered video editing. If you want to experiment and learn, Wan 2.2 if you have the technical skills
that we had was, you know, Pika Labs, essentially something that is very on par with Runway, which is a video editor slash AI video generator. Essentially, what they released in their newest update was camera angles. So where you weren't able to use camera angles before, now essentially what you can do is many different...research papers were announced, and it does seem they are already on par with Runway, which has been, you know, in the game for quite some time. So I would be excited to see if other companies are going to be working on text-to-video, because I do believe that is the hardest thing, and if you've been
every content creator makes videos. Let me show you exactly what I mean. So, I've been testing AI video tools for months now. I've tried everything: Runway, Pika Labs, Stable Video, you name it. I've probably burned through hundreds of dollars testing it. But when I heard about this new Veo 3 Fast model, I had to see what
what's next. All right, so let's have a look at this week's quick hits. First of all is another AI video tool, Runway Gen-4.5. It looks good, but honestly, at this point, it's kind of hard to tell these models from each other. I mean, some of the demos are super impressive. Runway particularly
search up "Runway AI tutorial" and you start to go through it and watch, you see what people are doing with it, and you search "best Runway AI use cases" or "best AI video generation use cases," and you find them either on, uh, YouTube is a great place to do it, but some of these...play around with it, you've gotten familiar with it, and you're like, "Okay, I know how this works. I know that Runway has this feature that allows me to go from image to video. Okay, great." Now I've sort of pinned that, and you're closing your knowledge gap just by watching videos and having
whole space forward. All right, so one of those would be Runway's Gen-2. So let's start our little journey through the AI video landscape right here inside of Runway ML's web app. And if you follow AI at all, you will have heard of this: they were the first ones making major waves with AI video that...this has massive limitations. But so did AI images a year ago, am I right? And as you'll see later in the video, we're really getting there. But it all started with Runway. Early this year, they released their first foundational model, Gen-1. And when I say foundational model, you can think of these as GPT-4
have video. I do think that video is definitely one of the hardest things to do at the moment, because even companies that are completely focused on video, like Runway, and other companies like Pika Labs, haven't completely solved the video challenge yet. Although there were some recent breakthroughs that we will discuss, I don't think video