that it's slightly better at generating text than the FLUX model hosted by Grok. I then fed those images into Runway Gen-3 Alpha, along with a prompt of course, and generated a 10-second video. It's a really fun workflow, and yes, not quite photorealistic yet, but I'll get to that again later
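A workflow like this (still image in, text prompt in, short clip out) can also be scripted. The sketch below is a minimal illustration that only assembles the request payload; the endpoint shape, field names, and the `gen3a` model identifier are assumptions for illustration, not Runway's documented API.

```python
# Hedged sketch: build an image-to-video request payload.
# Field names and the model id are illustrative assumptions,
# not a real vendor API; no network call is made.

def build_image_to_video_request(image_url: str, prompt: str,
                                 duration_s: int = 10) -> dict:
    """Assemble a request body for a hypothetical image-to-video endpoint."""
    if not (1 <= duration_s <= 10):
        raise ValueError("clip length assumed to be 1-10 seconds")
    return {
        "model": "gen3a",            # assumed model identifier
        "prompt_image": image_url,   # source still image
        "prompt_text": prompt,       # text guidance for the motion
        "duration": duration_s,      # seconds of generated video
    }

if __name__ == "__main__":
    payload = build_image_to_video_request(
        "https://example.com/still.png",
        "a happy puppy eating a cake",
    )
    print(payload["duration"])  # 10
```

Keeping payload construction separate from the actual HTTP call makes the workflow easy to test offline before pointing it at a real service.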
offering from Runway ML. They have recently updated their algorithms to the second generation, and the text-to-video feature now works amazingly well. It works just like any prompt-based AI: just input a prompt and hit one button. My prompt will be simple: happy puppy eating a cake. It takes a few minutes to generate a video, and here
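The "input a prompt, hit one button, wait a few minutes" flow described here is a classic submit-and-poll pattern. The sketch below mocks the backend locally so the loop runs offline; the `FakeVideoService` class, the job id, and the status strings are invented for illustration and are not any vendor's real SDK.

```python
# Submit-and-poll sketch for a prompt-based video generator.
# FakeVideoService stands in for a real API so the loop runs offline;
# names and statuses are illustrative assumptions.
import itertools

class FakeVideoService:
    """Pretend backend: a job reports 'succeeded' after a few polls."""
    def __init__(self):
        self._polls = itertools.count()

    def submit(self, prompt: str) -> str:
        self.prompt = prompt
        return "job-001"                      # fake job id

    def status(self, job_id: str) -> str:
        return "succeeded" if next(self._polls) >= 2 else "running"

def generate_video(service, prompt: str, max_polls: int = 10) -> str:
    """Submit a prompt, then poll until the job finishes or we give up."""
    job_id = service.submit(prompt)
    for _ in range(max_polls):                # a real client would sleep here
        if service.status(job_id) == "succeeded":
            return job_id
    raise TimeoutError("generation did not finish in time")

if __name__ == "__main__":
    job = generate_video(FakeVideoService(), "happy puppy eating a cake")
    print(job)  # job-001
```

Against a real service you would add a delay between polls and a download step once the job succeeds, but the control flow stays the same.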
actually owns what. Now, apparently Stability had wanted to hold back version 1.5 until they were ready to release it, whereas Runway ML, which is a company that makes AI-based creative tools, image editors and video editors, went ahead and wanted to release it. So they have released it, and after they
grabs attention in the first 10 seconds, and a content structure for a 15-minute video. And watch this: Gemini is processing my voice input, understanding the context, and generating a full video plan, titles, hook, structure, everything. The response is instant and it's comprehensive. This is perfect for brainstorming on the go when you don't have time to type...feedback on your notes. So, that's where we are right now. Gemini Live Mode is ahead in realistic conversational AI, while Runway 4.5, Veo 3.1, and Sora 2 are catching up fast in AI video consistency. And the race is nowhere near over. Honestly, the pace is insane. Six months ago, none of these tools were anywhere
given me the viral format: China versus OpenAI. So, right from the start, it's given me the China angle, Runway Gen-4.5, Perplexity supermodels, and as you can see, everything is from the video itself. No outside information, right? "Write a nice personalized POV human-like LinkedIn post." Now, let's see if it gives me a LinkedIn post
Runway Gen-4.5, Sora 2 Pro, Veo 3.1. Your feed is flooded with insane AI video demos that look like Netflix-quality productions. Everyone's hyped. Everyone's watching. But here's the truth: most people still haven't made one complete AI video from scratch. Not one. And it's not because the tools
only text-to-video or image-to-video, but what they announced here is editability: you can alter the subject or style selectively; you don't have to redo the entire shot, which is essential. And while this is great, over time I expect something like Runway's Gen-2 capabilities, which are currently the most advanced in the entire industry, when...these tools, including Dream Machine and Gen-3 once we get our hands on it. If you're not familiar, Runway has been adding tool after tool, and there are just so many ways to alter the video that you generate: they have video-to-video, background removal, and brushes where you can selectively change or animate only
list is Viggle AI. This lets you animate characters and objects into existing video clips. And it's super simple to use. You could technically do this with things like Runway Act-Two, but this makes it so much easier. Let me show you how it works. So, under the Mix tab, they have a Motion tab, and they have
roundup, but the main thing that is being talked about right now is Seedance 2.0. This is a new video model from ByteDance, and it's nuts. I've noticed that some AI models like Runway 4.5 focus on trying to make things cinematic or feel realistic, but Seedance 2 here is more on the Sora...that stands on its own but also theoretically could be used for longer workflows. You can do things like upload a comic and then have it transplanted into a video immediately, just from one image. It's pretty insane. And a lot of people are saying this is better than Sora 2, better than Veo 3. It is far less censored
think it's going to be interesting, because outpainting is something that, you know, no current text-to-video model has. So I think that once these papers are released, tools like Runway and other companies that are working on this stuff are going to get updated as well, which is going
Udio, and Lyria by Google was introduced recently, only as a research preview, but they only generate audio. Then we have video generation models like Veo and all the Chinese models like Kling, and then we have Runway, which always used to lead the pack, and then many players there. Then we have the image generation models, right...prompt, and there are many improved things about the model, like improved physics and realism and things like that, but mainly just check out this little video of a cat with a hat typing. This came straight out of Veo 3. That's just great. That's just perfect in terms of sound design. The sound effects are spot-on. There
frame". Close-up: the cyber-glasses interface. It scans a slice of pizza, green digits, data analysis. I've got the image, but video is better than a still image. I go into Runway or Kling. These are neural networks that animate photos. I upload our guy with the pizza and hit Generate Video. I select the pizza and the mouth so it looks like he's taking a bite. I upload the interface and set the settings. Camera: zoom in. Okay, now the sound. I need
then you raise the paw, it raises the paw. It is insane what can be done now. And with Runway you can now take prompts and images and animate them. So for, like, trailer kinds of videos, pretty good. Pretty good. You just highlight areas and you say "move this" and it just figures it out. And you know
various applications. I mean, things like changing image styles are not novel. Others are more interesting, like removing objects in videos. It's just making things that were possible before more user-friendly. And I think for filmmakers, these Runway apps are something you should definitely explore. On a similar note, they also released this Workflows view. We've seen this
tracking water and smoke, CoDeF ensures seamless changes across the entire footage. So essentially, what this is, if you remember Runway's stylization feature, where you could simply put one image into an already pre-recorded video and then have that video change based on what that input image is. So essentially you set a style image and then...this is going to be added into future software, and this is going to make for really interesting videos. So you can see right here, uh, no wait, where is it... stylization, yeah, this is what I was talking about from Runway, where you have stylization, and I do think that their stylization has been updated, and I think what
future that can be played in real time with incredible graphics thanks to artificial intelligence. That's what today's video is going to dive into, with a thought-provoking precedent for the future. So essentially, with Runway
show. I mean, heck, OpenAI started rolling out the Advanced Voice Mode, there's a new Midjourney version, Runway is shipping brand-new features, Meta released this incredible tool that can cut anything out of a video, and so much more. I've got to say, this week was really packed with releases, and me and the entire team
have Runway showing off Act-Two. If you're not familiar, Act-One was their software where they took a video and then used that video to inform AI video generation, so you could turn yourself into different characters. Act-Two is building upon that, making it better. There's a really fun one-minute launch video if you want
right now. Of course, what we've seen a lot over the last couple of years is, like, Sora, Runway, Pika, all these companies doing amazing work, but I think most people think of, like, text-to-video, right? And so when I look at a space like ours, I look at them and I think, you know
only other company that I will come back to in terms of comparisons is, of course, Runway. This has been the number one company in terms of what people have looked at for text-to-video, in terms of what is the gold standard, and they have, you know, four different versions: text-to-video, text-and-image-to-video