# Creating Cinematic Video With AI (Nano Banana Pro + VEO 3 Full Guide)

## Metadata

- **Channel:** Luke Alexander AI
- **YouTube:** https://www.youtube.com/watch?v=76kDOClvY3U
- **Date:** 26.11.2025
- **Duration:** 29:32
- **Views:** 1,447

## Description

How to make $50k/month with AI Operating in 2026 (Full Training): https://youtu.be/3Xv0f4zN12g Nano Banana Pro just released and is making AI video generation easier than ever. In this video I break down how to use Nano Banana Pro and Veo 3 to create cinematic videos, clips, and even movies.

How to make money with AI in 2025: https://youtu.be/Tx8g3vlKd64

Learn how to make $30-50k/month with AI: https://www.goinsiders.ai

IG @lukealexxander
X @lukealexxander

#ai #n8n #aiautomation #nanobanana #aioperating #aiagency #lukealexander #lukealexanderai Want to learn directly from Luke & a team of AI experts? https://whop.com/ai-insiders1/

## Contents

### [0:00](https://www.youtube.com/watch?v=76kDOClvY3U) Segment 1 (00:00 - 05:00)

In this video, I'm going to show you how I made this movie-quality edit with two tools that are either free or very cheap to use: Nano Banana Pro and Gemini 3's newest update. First, let's take a look at the video, because it's absolutely insane. [clip plays: "You look lonely. I can fix that." "I think you're a real boy."] Absolutely insane that you can make this with AI. And just to show you I'm not lying, I'll literally pull up the Final Cut Pro project here and show you. Then we'll get into exactly how I did this, because there were a few tricks that I don't think a lot of people are doing right now. The only thing that was added was some sound design, which goes into any good video anyway, plus a filter you can see here and a grain effect. The grain just makes it look more cinematic, and it also hides some of the little tells of AI video.

So, let's get into this. The first thing you have to understand when using Nano Banana is that if you want to do what I did and put yourself in, you've got to get really good photos. Now, I can't find the original prompt I used, but you can see here I found this Sam Altman picture on Twitter. That's where I had the idea, like, "Oh, I love Blade Runner." That's the movie this is from, by the way. This one's pretty decent, but it's not super photorealistic; it's a little bit off. So that was the inspiration. From there, I just went back and forth, and I kept giving it different pictures of myself. This one looks like a combination of me and Ryan Gosling if we had a baby. It just wouldn't do what I said, and you can see here I tried and couldn't get it perfect. So basically, all I did was give it the photo of Sam and then keep sending photos of me, saying, "No, make it look like this person.
No, this person." And eventually we got to this picture right here, which is literally my face. My hair's kind of wet, but it's pretty close; that's the closest I've ever had an image get. Now, Nano Banana just came out with a new update. Here it is, right here. It's pretty insane; I even turned myself into Batman. One little hack for you: so this is what we got, obviously not me, and I said, "Hey, here's a different image."

Two things when you're doing video or content. The prompt engineering rules we talk about on this channel still apply, but you've got to apply them in a visual sense. I'm not necessarily going to be giving it a big JSON file. I know I can if I want to create something from scratch in Flow or Veo 3, but what I've found works better is to make the image you want and then use that image as the visual context. You can see here this is close, but it's not perfect. Why? It needs more visuals. So I gave it this photo I took on a run, shot from a downward angle, and then this one from another run, which is more of an upward angle. With those two angles, the AI should have enough to create what we want, and obviously it did. From there it made me Batman; I gave it more images and said, "Hey, save these to memory, this is Luke Alexander," and you can see it's done a pretty good job.

So, once you have the image you want (say you're starting off by making yourself), you're going to go here to Flow. Now, it says they're experiencing high demand right now, so I don't know if this is going to work. You have two options. You can use Google AI Studio, where you basically have the Gemini LLM but can generate images or video, or you can use Flow. I like Flow better, and I'll show you why. If I log into Flow, here's the project with literally all of the sequences that I did.
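The back-and-forth just described (send reference angles, correct the result, regenerate) can be sketched as a simple loop. This is only a mental model of the workflow, not any real Nano Banana API: `generate_image` and `critique` are hypothetical callables you would supply, and the request format is invented for illustration.

```python
# Sketch of the iterative identity-matching loop from the video.
# generate_image: hypothetical image-model call (request -> result)
# critique: human (or model) feedback; returns None when the result is good

def build_request(base_prompt, reference_images, corrections):
    """Combine the scene prompt, every reference angle, and all
    accumulated 'no, more like this' feedback into one request."""
    parts = [base_prompt]
    parts += [f"[reference image: {img}]" for img in reference_images]
    parts += [f"Correction: {note}" for note in corrections]
    return "\n".join(parts)

def refine(base_prompt, reference_images, critique, generate_image, rounds=5):
    corrections = []
    result = None
    for _ in range(rounds):
        request = build_request(base_prompt, reference_images, corrections)
        result = generate_image(request)
        note = critique(result)
        if note is None:          # good enough: stop early
            break
        corrections.append(note)  # feed the feedback back in next round
    return result
```

The key point matches the video: the reference images stay in every round, and each round only adds corrective context on top.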
And I can show you guys, there's a lot. Okay, now this is from Blade Runner 2049, the scene, so don't think I'm lonely and need an e-girlfriend. When you start with a visual, you basically want to create all the visuals you can think of: storyboard and mind-map out the video you want to make, and make each scene with Nano Banana. If a scene starts like this and ends like this, you want to have those two images, because the more visual context, the better. You can see here this isn't perfect; that doesn't necessarily look like me, and that was the after image. So that's a crappy result, right? But some of the better results that we have, like this one, I think

### [5:00](https://www.youtube.com/watch?v=76kDOClvY3U&t=300s) Segment 2 (05:00 - 10:00)

like that angle, that side profile, is nuts. Okay, it's because of how good the original starting image was. Even that. Remember, it had the downward angle, the upward angle, and the side, so it can create sort of what I look like. Now, for the other scenes in this video, for example the scene from Blade Runner where he's looking up at the holographic girl and talking to her, look at this one. Where's the original? What's this one? That one is kind of inappropriate, which is kind of insane. See, even these were not good; that's not the same girl. So I had to iterate over and over and over, literally just hitting "reuse prompt" here.

Okay, that's the next trick I want to tell you. I could not figure out how to get her standing beside my character like you see in the original video, but Veo 3 did this, so it got me this far. Now, what you want to do is literally just screenshot this. I would screenshot it, then go to "frames to video" and upload that screenshot, and you can see this is exactly what I do. I started with this, the original image of me that dictated the rest of the video. Then this one wasn't close enough; that was the wrong girl. I screenshotted these scenes from Blade Runner, from the actual scene that I wanted, but made it the new girl. It's basically an Ana de Armas still, and I got around copyright just with little tweaks to the prompts. Then, by giving it that screenshot, this is what we were able to get to: this final shot where she kisses me on the cheek. Sounds weird to say, but you get what I'm saying. So the process is: get the image you want, feed it to Veo 3 or Flow, whatever you want to call it,
try to prompt engineer with all the prompt engineering tricks that are on this channel (assigning a role, assigning an aesthetic, camera movement, "ultra cinematic," "ultra realistic"), and then, when you get something you like that is closer to where you want to go creatively, you screenshot it and give it back. Okay? It's just like when we do prompt engineering with text: I get a prompt out, give it back to the LLM, and say, "Hey, rate this out of 100 and make the changes." It's the same thing, but visually. At the end of the day, if you're going to get really good at prompt engineering, all LLMs work like this.

So if we look at this, I literally mapped out all of these scenes. Even at the end, I had an idea: I really liked the car scene. So let's look at the ingredients for this one. First frame: where did we get that? Well, we got this image. I tried to do it just inside of here, and it wouldn't work. So I got this image and said, "Hey, generate an image of this guy getting into a cyberpunk car." It wasn't the right aesthetic. Then I screenshotted the game Cyberpunk, because I like that car. Still, it didn't get it. You see, it's making an amalgamation of things, not quite what we need. So finally, I went and found this picture on the internet of Ryan Gosling from the actual movie, walking away from the car. Because this image is so good, it does 80% of the visual context for it. So I just said, "Recreate this image, but with this person with a band-aid on their nose" (so I'm identifying which person). It gave me this: not perfect, but 99% of the way there. Then I said, "Make it nighttime and rain instead of snow, and add neon lights." Money. That was then taken into Flow, like I'm saying here, and we got this. Damn good; that's pretty good. I think we got this one as well.
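The prompt structure listed above (role/subject, action, aesthetic, camera movement, style keywords) can be kept consistent per shot with a tiny helper. The field names here are my own invention; no tool requires this exact shape, it just mirrors the pieces the video says to include.

```python
# Minimal sketch of the per-shot prompt structure described above.
# All field names are illustrative, not any tool's required format.

def cinematic_prompt(subject, action, aesthetic, camera,
                     style=("ultra cinematic", "ultra realistic")):
    """Assemble one shot prompt from the recurring ingredients:
    who/what, what happens, the look, the camera move, style tags."""
    return (
        f"{subject} {action}. "
        f"Aesthetic: {aesthetic}. "
        f"Camera: {camera}. "
        + ", ".join(style) + "."
    )
```

For example, the snowy-steps shot later in the video would come out roughly as `cinematic_prompt("The man", "lies on snowy steps outside an old industrial building", "Blade Runner 2049, neon-lit rain", "still shot, no movement")`.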
See, this one was bad. Here's an example of what not to do. This was simply that original starting frame, and it just missed. There were too many things it didn't have visual context for: what's a cyberpunk car? What does the car look like? What does the city setting look like? It didn't know, so you can see it just messes it up. It's not me, it's not the same outfit, it's kind of goofy, right? But when you give it the visual context, as I then did with the screenshot, here's the result we get. Perfect. Okay, that was absolutely awesome. Then I tried to get one walking up a set of stairs, but again, there's no image of this character from behind, so it doesn't really look the same. I wanted this establishing shot, and it just didn't do it well. But that's it at a high level; I don't want to get too complex for you guys. You basically come in here and do this. So, let's do one live together. Here's what we're

### [10:00](https://www.youtube.com/watch?v=76kDOClvY3U&t=600s) Segment 3 (10:00 - 15:00)

going to do. This scene, I was struggling to make it work; I couldn't really get it right. So we're going to restart, try again, and I'll show you live. I'm going to go here and click "create image." I have Ultra Pro; I don't know what all the plans are, but I have the highest one, if that's important. Now, did I screenshot the snow picture or download it? I think I downloaded it. Sorry, I have so many photos in here from doing this the last couple of days. I don't see it. Date added. Okay, we're going to start from scratch: "Blade Runner snow scene." This is the meme everybody makes. I'm going to grab a bunch of these; ideally, I need the highest-quality one I can find. Now, the hard part for the AI is there's not really a face in this one, which is going to be hard for it, but there is a face here, so I'm going to save this image. That's a good visual reference for the AI to go off of. Then there's the final bleed-out scene that I want. Well, this is a GIF (or "jif," however you say it); how do I get this to go away? I'm going to save this and see if we can feed it in. Save. Is there a higher-quality close-up here? These are all blurry. Okay, here we go, that's a good one. Save image. Ah, it's not great. We need as high a resolution as we can get so we don't have any artifact errors. That one's blurry too. Oh man, all of these are blurry. Blurry. There we go. Save image. Okay, now we're going to go back to Gemini. We've got to give it the really good visual example of my face as Ryan Gosling. Okay, there it is. We're going to grab this good image of me and upload it, then grab these other ones. Remember, we want to do the facial one first, because it gives the most context, and it's also similar to the setting.
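The hunt above for "the highest-quality one I can find" can be reduced to a trivial filter: reject references below a pixel floor, then take the largest. The tuple shape `(filename, width, height)` is made up for illustration; in practice you would read dimensions from the files themselves.

```python
# Sketch of picking the best reference image from a scraped set,
# by pixel count with a minimum floor. Data shape is illustrative.

def best_reference(candidates, min_pixels=500_000):
    """candidates: list of (filename, width, height) tuples.
    Returns the largest usable candidate, or None if all are too small."""
    usable = [c for c in candidates if c[1] * c[2] >= min_pixels]
    if not usable:
        return None  # everything is too small/blurry: keep searching
    return max(usable, key=lambda c: c[1] * c[2])
```

A pixel floor is a crude proxy for the "blurry" check in the video (it catches upscaled thumbnails but not motion blur), which is why he still eyeballs each candidate.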
So I'm going to say: "Face swap the person from the neon lighting image onto the person in the snow. Maintain perfect character likeness and adapt it to the colors of the scene." What I'm thinking it might do is turn the neon scene into the snow scene, which we don't want. So we'll see what this gives us and get a good starting point here. Okay, this is what happened last time. Now what we need to do is add more visual context; this is what I did last time that ended up getting a good result. So let me find one of these pictures I have from my phone. Okay, this is the mistake it often makes. Like, why are we doing this? So I'm going to yell at it: "No. Add this person's face to this image." See this one? The lighting is off, the face is off, the eyes are off. That won't be a good starting point. We want the starting point, like I keep saying, to be as perfect as we can, because that is the visual context, the same way we do context engineering with text. So, let's see. Ah, it did the opposite: "You did the exact opposite swap that I want. Try again, but swap the other face." Maybe by saying "Ryan" it will work, because obviously the AI will know who that is. Nano Banana, by the way, I've noticed, searches the internet for things. If you ask it to do something, it will basically do LLM work in the sense of images: it will search the internet, search its data, all the training conditioning it has. That's how it makes such good images, on top of the visual reasoning. So all you've got to do, guys (I know you're watching this like, "Oh, well, this is helpful"), is go back and forth until you get the result you want, and then it's cake. That one is a lot better. Now we're going to repeat the process with this as the reference point. In the snowy picture, does he have the blood and band-aid on his face?

### [15:00](https://www.youtube.com/watch?v=76kDOClvY3U&t=900s) Segment 4 (15:00 - 20:00)

I think he does. I don't know if that's going to throw it off. Close. "Make it more like this face. Keep the details like the band-aid on the nose and the blood, etc., but match the snowy scene and color profile." We'll see if that works. That's pretty good; I think that will work for what we want, to be honest. So we want to download this. Now, I think that's about 95% of the way there; he's not looking at quite the same angle. So now we want to actually give this photo back to it: "Here's the updated one," and we want to recreate this scene. Well, that's a GIF, so give it that one. We'll do one at a time, actually, and see if we get it. Now we have our shots for giving this to Flow and building out the sequence, like I'm going to show you in the storyboard it has. And if it does this one well, we're going to have a movie-quality clip that looks just like the one from Blade Runner, and we can make a meme about it on Instagram. Close, but he looks like he's standing up on the steps, and we need him lying down. Little details will matter a lot for the image. Now, the thing Veo 3 doesn't do is really infer when you do frame-to-frame; it'll do exactly what's in the image. I tried to Photoshop a person into a scene with that girl, and it literally just used the photo and animated it. So it's got to be perfect at the image step, or the rest is just going to look funny. It's struggling. Next, you're going to click "new project." We're going to do "frames to video" and provide the image we liked here. I don't remember what happens in this scene in Blade Runner. You're going to want to crop off these borders, or else you're just going to have them in the video, and sometimes the Gemini watermark will stay. What do we want to prompt? With Veo 3, I've found you don't have to do crazy prompting unless you're not providing a reference image.
But when you provide a reference image, you can be looser with the prompt and use more plain language. So I'm just going to say: "The man on the right leaves the old man and goes and lies on snowy steps outside of an old industrial building. Ultra cinematic and realistic." Okay, let's see what we get. I'm always going to do just one output, so I don't spend a ton of credits: Quality, 16:9, frames to video. We'll start with that and see how it goes. While that's processing, we're going to keep trying to get this other one to work. I'm now going to get the steps scene: this one and this one, "face swap the person, keep the lighting the same." Let's see if that works. So silly. Okay, what have we got here? Actually not bad, other than this random dude in the background. I don't know where the dog came from, but this looks pretty good. Not bad. So if you liked this scene, you would screenshot it and then use that, or you can click "add to scene" and it will storyboard it out for you. Now I need this other image. Oh my lord, that's so goofy. "Make the sizing of the head more realistic and match the reference image better. Also match where the person is looking to the reference." Not bad. So I'm straight up going to redo this. Oftentimes the output you get is very random, so if I hit redo, it might make a

### [20:00](https://www.youtube.com/watch?v=76kDOClvY3U&t=1200s) Segment 5 (20:00 - 25:00)

completely better or worse one, but it's kind of what I've figured out I have to do to hit the level of quality I showed you in the first video. So, this part, that looks real. That actually looks pretty real, which is insane. Oh my gosh. "Recreate the scene of Ryan Gosling on the steps, but with this other person's face." Maybe that will work. Let's see if we got a better result here. [sighs] Not bad angles here; we could do something with this. So watch, I'll show you guys. For example, I like how this one ends, so we'll go to "add to scene." Now I want the aerial shot, so we're going to click this and choose "extend" or "jump to." I'm going to do extend, and I'm going to say: "The camera pans up vertically to show the man lying on the steps. He is bleeding out and snow falls around him. The camera should slowly hover vertically while looking down at the man." That should take us from the lying-down angle to a vertical raise. That's money. There we go. See, it's just back and forth, guys; that's all this is. And now, as soon as this downloads, we can recreate this scene. I think what happens here in the movie is he touches his side and realizes he's screwed. So we're going to go here while this is loading. Hopefully this saves if I click away. Actually, I don't want to waste the generation we've already waited on, so we'll let this finish first. Okay, let's see. Ah, see, this is where prompt instructions do come in, because we don't have a reference image for that scene. So we're going to go back here, take the new image, "frames to video," upload. Where's our new one? This one. Oh, yeah, then we're going to use that one. "The man touches his side and realizes he is bleeding out. He lies back on the steps while the snow falls around him. Still camera shot.
No movement, ultra cinematic lens, and realistic." Okay. Then we can try to get the bleed-out one; I don't think we got that, to be honest. What we can do is go to "frames to video," use this one, and see if it maintains the face, and we'll do the aerial shot. The original... oh, this one's really skinny. I actually want this as the starting shot, so frame to frame. So now "the man touches his side and realizes

### [25:00](https://www.youtube.com/watch?v=76kDOClvY3U&t=1500s) Segment 6 (25:00 - 29:00)

he is bleeding. The camera then shows a vertical drone shot of the man lying in the snow, bleeding out, motionless." Frame to frame. So now you can start to see this process. Obviously this wasn't perfect; this usually takes me a couple of hours to do. But by providing good visual context (and this is the worst the video model is ever going to be), we can really make some cool stuff that looks pretty damn real. If I put a grain over this, it's a movie; you won't be able to tell. So let's see how this one does right here. That's not bad, to be honest. [sighs] A little dramatic, but not bad. Not bad at all. So, let's see what this final one looks like. Then, if we compiled all of these and threw some music and a grain effect on it, we'd have a really good, fully generated AI clip. It's going to say, "Oh, it's going to violate our policy." Let's say "not bleeding out"; that's probably the reason for it. And we'll do two at the same time. Sometimes they go through, sometimes they don't; even just hitting redo sometimes works. So we'll see here. This will be the last one. Now, if we were to compile these, especially this one, we can add it to the scene. Let's go to the scene builder. So now we have this sequence of shots. After this one, we're going to click "extend," or actually "jump to," because we want that vertical shot. With "jump to," I think you can add visual context. First frame: "The camera pans to a direct overhead shot looking down at the man lying on the steps. Snow falls peacefully as he looks up to the sky while lying on his back." We should get a pretty good result there, because this clip was pretty damn good. This looks like the movie. Not bad. So, let's go back here. Oh, that's the one we already did. Sometimes you've got to refresh; it gets pretty confused. Okay, here we go.
— [laughter and gasps] — We did not provide that context, and it's pulling from the internet. That's why these are so good; that's literally the scene from the movie. That's pretty crazy. Wow, that's hilarious. So anyway, guys, that is a rough tutorial of how I make these movies. Very cool. Now, you can use tools like Higgsfield or Midjourney; it doesn't really matter. I like Nano Banana. You've just got to go back and forth with it a little bit. But for example, this scene right here, this is money. And if you do a couple more of these scenes and stitch them together, you have a full movie. So if you guys want to learn more, check out the description. We have an awesome AI community where we're teaching people how to make really good money doing stuff like this, plus a bunch of other ways of making money with AI. So, thank you for watching.
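The scene-builder flow used throughout the last segments boils down to one rule: an "extend" shot continues from the previous clip's final frame, while a "jump to" shot needs its own first frame supplied as visual context. This sketch is purely a mental model of those UI steps, not any real Flow API; the dict fields are invented for illustration.

```python
# Sketch of the extend / jump-to distinction in the scene builder.
# Not a real API: shots are plain dicts, frames are just labels.

def add_shot(sequence, prompt, mode="extend", first_frame=None):
    """Append a shot to the sequence.
    extend: start from the previous shot's last frame (no image needed).
    jump_to: start from a brand-new first frame you must provide."""
    if mode == "jump_to":
        if first_frame is None:
            raise ValueError("jump_to needs a new first frame for visual context")
    elif sequence:  # extend: reuse the previous clip's ending
        first_frame = f"last frame of shot {len(sequence) - 1}"
    sequence.append({"prompt": prompt, "mode": mode, "first_frame": first_frame})
    return sequence
```

This is why the video keeps screenshotting good endings: a screenshot turns an "extend"-style continuation into a "jump to" with a frame the model can't drift away from.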

---
*Source: https://ekstraktznaniy.ru/video/10928*