with video and AI now with the new Sora model it's not public yet but are there any services that we can use for video I'm not sure if Runway has API access I haven't delved in this direction much one cool example I saw somebody using GPT-4 Vision was just
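On the API question raised above: several video-generation providers do expose HTTP APIs. The sketch below is a minimal, hypothetical illustration of how submitting a text-to-video job to such a service might look; the endpoint URL, field names, and defaults are assumptions for illustration, not any provider's actual API, so check the vendor's documentation before using anything like this.

```python
# Hypothetical sketch of submitting a text-to-video generation job.
# The endpoint URL and payload field names are ASSUMPTIONS for
# illustration only -- consult the provider's real API docs.
import json
import os
import urllib.request

API_URL = "https://api.example-video-provider.com/v1/generations"  # assumed endpoint


def build_payload(prompt: str, duration_s: int = 4, resolution: str = "720p") -> dict:
    """Assemble a generation request body; field names are illustrative."""
    return {
        "prompt": prompt,
        "duration": duration_s,
        "resolution": resolution,
    }


def build_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Build (but do not send) the authenticated HTTP POST for the job."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    req = build_request(
        build_payload("a container ship loading at the docks"),
        os.environ.get("VIDEO_API_KEY", "demo-key"),
    )
    print(req.full_url)  # where the job would be submitted
```

The request is built but not sent here, so the sketch stays runnable offline; in practice you would pass it to `urllib.request.urlopen` (or use the vendor's official SDK) and then poll a status endpoint until the rendered clip is ready.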
model. In other words, new and better flagship models out of the three biggest players. And then there's so much more interesting stuff like Runway shipping a feature that is essentially Photoshop for video. What the heck? Didn't expect that. And a music generator that sounds incredible. You haven't heard anything like it before. All of that
easily allow you to download video in 720p with remarkable accuracy now something that Runway didn't actually mention that I find really cool is the custom presets tab so if we hover over here on the custom presets and we click this area right here you can see that we have a custom presets area basically what this
container ship on docks loading containers not bad at all if you're just looking for a bit of cuteness in a quick short video at say 720p there are competitors though like Runway and Pika but yes overall Sora is the best and especially at higher resolutions I wouldn't say it's that close just don't bank
best open source video generator by Playground so they shared results where they pitted this video model against other video models and this even surpassed things like Luma Labs Runway and Kling AI and when they were actually talking about how they managed to train this model they spoke about how they focused on ensuring
create a brand new video right in here but what we ended up doing here is actually taking the voiceover and creating a video of our own with a combination of Midjourney Runway Gen-3 and the dubbed voiceover that was created here and without further ado here it is in a whimsical twist of fate a trend arose...companionship reminding us that simple joys can unite us unexpectedly all right so there it is that's one example of how you could be using dubdub to create videos for social media or other purposes so go ahead and try dubdub for free with a link in the video's description and create your first videos within minutes with
this isn't some overhyped demo that only works in perfect conditions. This thing just ranked number one on both major video generation leaderboards. It beat Google's Veo 3. It crushed Runway's Gen-4. It made OpenAI's Sora look like it was from 2020. But here's what really gets me excited. This isn't just about making
Hollywood just between directors and producers there is so much feedback going on in the post-production of any advertisement movie heck even if it's an event video I had clients that went back and forth 10 times and gave feedback over and over again and I had to adjust things so one points out here that yeah there...very closely over the last months there's one tool and one research that needs to be pointed out here okay first things first Runway ML the previous so-to-say leader in AI video a few weeks ago introduced a feature called Multi Motion Brush which allowed you to use multiple brushes on the video to just animate
next highlight is a quick one on copyright and we all probably know that OpenAI transcribed millions of hours of YouTube videos to power its Whisper model did you know that Runway ML and Nvidia also mass-scraped YouTube but I thought some of you might be interested in the fact that people are trying to create business models
here that it's pretty surprising that Luma and P no not Luma and Pika that Luma managed to take out you know Pika and Runway as the you know premier state-of-the-art video model that we can access for free so what this should show you is that AI you know is ramping up because...Pika Labs or Runway were working on they have to make sure that they now beat Luma Labs because if they don't then nobody's going to care and that's just the hard truth of this race but it seems like OpenAI for now is still in the lead in terms of their video generation model
vital mission we are here to get stuck up we also had someone showcase exactly what's possible when you provide Gen-1 by Runway with a perfect driving image and a very good video reference I do want to say shout out to this person that created this because it was definitely very creative and just goes to show
Nvidia just showed us a glimpse of the future at CES 2026. New chips that slash AI costs, an open-source brain for self-driving cars, and AI video that's already running on their next-gen hardware. Let's talk about each one in detail. The cost of running and training AI is insanely high. Nvidia has just solved that with...means faster tools, fewer limits, and AI apps become affordable to use. And it's already being put to work. Nvidia just partnered with Runway: Gen-4.5 is now the first video AI running on Rubin. Now, that's the infrastructure. But Nvidia didn't stop there. For the longest time, self-driving cars have
able to generate scenes with these prompts, kind of like an artist painting a landscape that they have never seen before. It did not have access to similar videos in its training data, and yet, here it is. I love it. And, back to quality and coherence: in my opinion, we have higher-quality results, and coherence is really good...Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Now let’s compare it to previous techniques, this is against Runway’s GEN-1. We showcased this in an earlier video too. Hmm, do you see what I see here? Yes, it is not your eyes, the output seems a little slower, GEN-1 indeed generates
hassle of separate signups, extensive forms or payment on multiple sites and all that for a 15% fee. As someone who has been uploading and selling stock photos and videos for seven years let me tell you that's a good deal especially when generating some of these images takes seconds and then distributing them amongst the platforms will take...they would be great but right now they're kind of in this weird in-between spot. One more honorable mention should go to Runway. They're doing some fascinating stuff in text-to-video but their text-to-image is just average at best. And I'll give them this, they actually generated the worst ballerina
need to like I said in the video previously they were under no pressure to drop this it's not like they were Runway or Pika Labs they didn't have any competition in the video space so them dropping this I'm not saying it didn't make sense but it wasn't something that I felt OpenAI even
this but it can also generate a video normally I would say this is going to be huge but it already is you see Runway is already making use of Nvidia's tools for amazing video editing magic it provides a holistic solution where we can find the boundaries of a person in a video and move them but only
through a field on a wooden path with fire on all sides and you can see that from the input video the generated video is honestly really good and I'm not taking shots at Runway here I think they've done something absolutely insane but this does look like it is a mere step ahead of Runway
shot list now we need to actually animate those shots we need to bring them to life and turn them into video clips and we're going to use a tool called Runway for that it's the leading tool right now let's go to Runway and I'm logged into my Runway account here
OpenAI just released a brand new model called Sora it's a text-to-video platform their first and I think this is the biggest leap in AI since the release of the original ChatGPT these are some of the examples that they posted on their blog post and I'm going...show you some other ones they're posting on Twitter and it is absolutely a mind-boggling improvement over what already exists Runway right now is the leading platform that takes text and makes videos and those videos are 4 seconds what OpenAI is doing here is they're creating 60-second videos so ChatGPT is a large language model