# Luma Labs Stunning "DREAM MACHINE" Is Bigger Than You Think!

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=aYp_PD_-iLc
- **Date:** 17.06.2024
- **Duration:** 14:25
- **Views:** 10,575
- **Source:** https://ekstraktznaniy.ru/video/14244

## Description

Learn AI with me - https://www.skool.com/postagiprepardness
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Check out my website - https://theaigrid.com/


Links from today's video:
LUMA - https://x.com/CuriousRefuge/status/1801000223063306443 
https://x.com/LumaLabsAI/status/1800921382194413846 
https://x.com/amebagpt/status/1801234328225018021 


Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #LargeLanguageModel #ChatGPT
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Transcript

### Segment 1 (00:00 - 05:00)

So this past week something fascinating was released: finally, a Sora-style competitor that we can actually use. This video will give you an in-depth guide to the software and a variety of comparisons, because you'll be quite surprised at what the competition is up to.

So essentially, what do we have here, ladies and gentlemen? Introducing Dream Machine. Dream Machine is an AI model that makes high-quality, realistic videos fast from text and images. It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is Luma's first step towards building a universal imagination engine, and luckily for us it's available to everyone right now. I know this video is a little late, but I still wanted to give my opinion, because I believe this announcement is very worthwhile. This is by far one of the most impressive text-to-video AI tools that we can actually get our hands on. Compared with a lot of the other AI systems we've seen in recent weeks - around three to four different demos, most of them from China - the only problem was that we could never use them. Luma Labs' new Dream Machine, on the other hand, is very effective, with a multitude of genuinely impressive qualities; one of the first is how consistent it is in certain areas. In this video I'll talk about its strengths and, of course, its limitations, because some of the experiments people have done over the weekend and this week have been truly fascinating. They show that we're about to enter a new era where a lot more people will be able to express their creativity through these tools, and whilst there is of course some risk, I think this is a very useful tool for those who are creatively inclined.

One of the first things I want to show you is the impressive consistency. The temporal consistency in Luma Labs' text-to-video output shows a very good understanding of movement dynamics. This clip is one of the best examples: someone walks into an environment, and not only does the camera angle stay relatively stable as it bobs along, but - and this is something most people didn't really pay attention to - the lighting changes as you go inside. It gets a little darker, but along the edge you can see the outline of a torch, and those subtle things the model picks up on are worth noticing, because they slip past most people's attention. As you move further inside, the torches become more pronounced and the shadows continue to conform. This is clearly a model that is rather effective at what it does. Looking at these demos from Luma Labs in comparison with other AI systems - and I'll get to that in a moment - I think what Luma Labs showcases is an AI system that genuinely understands what's going on in the image. For example, here you can see a man walking: the model captures the up-and-down motion of walking, which shows there's a real understanding of what's happening. That difference matters, because with certain other video generators - and I'll show you those too - what you often get is a still image that simply rotates around the character, whereas here the character itself is moved effectively. We also have this clip of a car, where the camera does a 360 around it, which shows the model understands the motion and geometry of the scene. So it's truly effective, and you get a visual sense of the difference, because if we look at how prior models would have done this clip, it would have been this slow-moving

### Segment 2 (05:00 - 10:00)

thing, but here there's a dynamic understanding. Of course this isn't crazy by any means - some people have said that maybe they just need to throw more compute at it - but I do think this one is really effective. There are also some longer examples here. I think this one was made first with Midjourney, for the character consistency, or with another AI system, because whilst it does look pretty good, there is a little bit of difference in the character consistency between shots of this little blue creature. Overall, though, what we have here is a tool that is a lot more effective than the previous ones, because Luma Labs' AI truly understands what's going on in the clip. Maybe it was simply trained on more data; maybe they took a few ideas from VideoPoet or Veo, Google's new video models, incorporated those, and then finally released it. I think this is absolutely incredible, because it's the first tool of this kind that has actually been released, and it shows what happens when the larger companies are put under the spotlight: they haven't released their tools yet, and Luma Labs has managed to get to market first, even if their tool isn't as good as some of the others.

Now, I'm not going to say everything about Luma Labs is the best, because there are some limitations. There is the morphing problem: as the camera rotates around a subject, the way the subject changes through the rotation sometimes gets lost, so it deforms as you walk around it. We also sometimes see movement clips where the movement simply doesn't happen - the legs won't touch the ground or move, but the background will - and we see this from time to time in many different AI video tools; movement is pretty hard to solve. Then there's text, which is notoriously hard for generative AI. I'm not sure what methods OpenAI are using - they've pretty much solved it at this point - but because of how generative AI works, it doesn't really understand text that well. After coherency, I think text will likely be solved last; that's what we saw with tools like Midjourney, which focused on making the quality great first and thought about text last. It will get solved, but it's not something most people will care about: if the text in a clip wasn't good but the video was, I'm sure you'd still like it. Then we have the "Janus" problem - a reference to the two-faced god - which is the kind of issue you also get with morphing, where creatures can have all sorts of weird faces that switch instantly: the polar bear faces one way and then suddenly switches, which is pretty strange but also pretty cool.

Now, before I go on to comparisons with some of the state-of-the-art models, I want to show you the pricing, because I think it shows why this kind of tool hadn't been released until now. For free you get 30 generations per month, which is pretty nice; for Standard you get 120, four times as many; for Pro you get 400; and Premier runs about $500 a month. At that level this is pretty expensive, and I'll tell you why - which doesn't mean you shouldn't try it; it just shows where we're at in the industry. The pricing works out more expensive than it needs to be - not more expensive than going out and shooting a clip, but you could perhaps spend your money more effectively, because what I've heard is that with these quotas, a lot of generations don't come out well, so you have to regenerate. Say you're going through five to six generations to get one usable clip: with the Standard plan's 120 generations, 120 divided by 5 or 6 is about 20 clips for around $30, which isn't that bad. But considering you need character consistency and so on, some people say they would immediately need around 400 generations for a certain product, or around 2,000 generations per month. There was a really in-depth tweet breaking down why this still needs to get a lot cheaper before we see mass-scale video production. The point is that filmmaking is a very extensive process, and for now I think this will be fun for indie projects and the like. And with Sora, if we
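The back-of-envelope pricing math above can be sketched in a few lines. This is only a rough estimate using the figures quoted in the video; the $30 Standard price and the 5-6 retries per usable clip are assumptions from the discussion, not verified plan details.

```python
# Rough cost-per-usable-clip estimate, assuming you need several
# generations before one comes out well (figures from the video,
# treated here as assumptions).
def cost_per_usable_clip(monthly_price, generations_per_month,
                         generations_per_usable_clip):
    """Effective price of one usable clip on a monthly plan."""
    usable_clips = generations_per_month / generations_per_usable_clip
    return monthly_price / usable_clips

# Standard plan as described: 120 generations for ~$30/month.
# At 6 retries per keeper that's ~20 usable clips, ~$1.50 each.
print(cost_per_usable_clip(30, 120, 6))
# At 5 retries per keeper, ~24 usable clips, ~$1.25 each.
print(cost_per_usable_clip(30, 120, 5))
```

The same function shows why heavy users balk: at 2,000 generations a month with the same retry rate, the quota, not the per-clip price, becomes the bottleneck.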

### Segment 3 (10:00 - 14:00)

take a look at Sora and tools like it, you really do have to run multiple generations a lot of the time. That can become tedious: it's a generative AI system, so it might not immediately understand what you want, and you can burn through your monthly generations, especially during busy periods - even right now, if you go to Luma Labs' website, it says things are pretty busy at the moment. So I think this is going to be a fun tool for now, but it doesn't yet seem to be a game-changing film tool that many people will be using in production.

One of the things I wanted to show you is the comparisons - I'll leave links to all of these tweets in the description below - and you can see that the comparisons between Luma, Pika, and Runway are pretty stark. Luma shows a deep understanding, panning the camera down and moving it nicely; Pika's result is basically a still image with a few things moving around - it looks okay, I'm not going to say it looks completely awful; and Runway shows no actual steps, no actual movement. With Luma Labs, on the walking shot you can see it understands that these are legs and that they need to move forward and backward to carry the character forward, so it's truly different here. In the future, to get consistency, you might have to run five generations for every one image, which will take a lot longer and make compute an issue, but just from the comparisons between Runway, Pika, and Luma, it's pretty surprising that Luma managed to take out Pika and Runway as the premier state-of-the-art video model we can access for free.

What this should show you is that AI is ramping up, because a lot of other companies, as I said before, are now starting to realize that this is going to be a multi-trillion-dollar industry with billions and trillions of dollars flowing through it, and they want their piece of the pie. Maybe it's not exponential growth, but with the dynamics of capitalism - companies always fighting to get there first and to be the best for their customers - I think we're going to get a lot more tools than we know what to do with, and a lot quicker than we expect.

There was also an awesome example someone posted on Twitter - I'll leave a link to this - directly comparing Sora and Luma AI. The main differentiating factor is the quality of the output. One issue you can all see is that Luma AI just doesn't produce extremely high-resolution clips. I'm not sure what causes this, but I do know that many video generators struggle to produce clips at 1080p - it's probably something to do with the architecture. In a direct one-on-one comparison, Sora's quality is definitely a huge leap ahead in terms of the kind of quality you'd expect. Looking back at these clips - and Luma's do look pretty cool - when we look at Sora's below, like the clip of the cat in a bed with someone, it's pretty crazy how far ahead OpenAI are, and video generation isn't even their main thing.

I do want to say that Luma Labs' AI is going to be a pretty fun tool given everything it can do, but when we look at Sora and how crazy that technology is, it's still hard to believe it's AI-generated. Maybe with another iteration we'll see these other companies catch up, because whatever Pika or Runway are working on now, they have to make sure they beat Luma Labs - if they don't, nobody's going to care, and that's just the hard truth of this race. But it seems like OpenAI, for now, is still in the lead in terms of their video generation model. So this was an interesting update; I think this is something that's very effective. Let me know what you all thought about this, and I'll see you guys in the next one.
