case, we're going to learn how to create viral videos. I've carefully selected a few proven examples that you can easily replicate to generate millions of views. People are literally using these formats to do exactly that right now. So to start off, we're going to head over to invideo AI, where you can either sign up to create an account or log in if you already have one. Once inside, you'll see an input where you can add a prompt. This triggers invideo's video agent, which makes a bunch of creative decisions for us under the hood and is capable of generating complex, longer videos. We'll take a look at this later in the video, but for now, since we want to keep things short and sweet, let's head to the agents and models tab and put Sora 2 and Veo 3.1 head-to-head to see which comes out on top for the use cases we're talking about. These are the video models we'll be using in this tutorial, but there are more models available on the platform that you can also use if you feel like it. There's also this trends section, which I'll be going over a little later in the video. To get started, we'll create a new project, give it a name, and then we'll be able to start prompting. We can also add attachments here for the model to reference, and select the aspect ratio, which lets us generate vertical video for social, or horizontal for platforms like YouTube. And finally, the most important part: the model selector. In addition to video models, we can select image models to generate with too, like Nano Banana and Seedream. For our first example, we're going to create a mesmerizing glass-cutting ASMR video. I'm sure you've seen a lot of these on social media before, and as you can see, these types of videos often rack up tens of millions of views, and they seem relatively simple to make.
So, we're going to be making an ASMR cutting video featuring a glass banana. I'm going to select Veo 3.1 Fast and then paste in my prompt. I used ChatGPT to help me generate these prompts, and as you can see, it includes a lot of crucial keywords that are going to make a big difference in the final result. You've got phrases like "a razor-sharp stainless steel knife poised just above the banana," or "the first slice breaks through the glass surface cleanly, sending delicate translucent shards scattering around the object." These kinds of vivid details help ensure a great generation. The default duration is 8 seconds, but you can produce shorter videos if you'd like. We could either generate this as a short for somewhere like TikTok or Instagram, or horizontal if we wanted to publish a bunch of these onto YouTube as one longer video. I'll select the horizontal aspect ratio, and finally, we'll hit generate. Once it's generated, we'll find it over here in the interface. Let's see what we got. This looks exactly like the one we were looking at earlier that actually went viral. Now we can head back into invideo and see what that same prompt gets us when we use the Sora 2 video model. Yeah, that's not too good. I'm assuming you'll agree that the Veo 3.1 video was the clear winner here, while Sora 2 clearly struggled. But maybe that was a fluke, so let's try another viral video prompt, like these crazy "what kind of bed would you sleep in" videos, which are raking in millions of views at the moment. Here's what we got with Veo 3.1. Not bad. And here it is with Sora 2. I'm going to have to give this one to Veo 3.1 again. But for good measure, let's run one final test: we're going to recreate this "which ocean would you rather swim in" trend that people are using to get a lot of views.
Using Veo 3.1, we can conveniently set the start frame and the end frame for our video. In other words, we can set the image we'll see in the first and last moments of the video. This helps us define much more clearly how and where we want the video to go, which can be far more powerful than just trying to describe it in a text-only prompt. Earlier, I created these two images using Seedream here inside invideo, and I'll set them as my start and end frames. Then I'm going to paste in my prompt. By the way, if you're wondering how I actually came up with this prompt, I uploaded the same start and end images into ChatGPT and asked it to construct a prompt that would guide the movement from the first image to the last. When it comes to prompting, there are of course best practices for getting the best outputs out of these models. An ideal prompt should have the SORAID anatomy. First, the subject: who or what is going to be in the video? Second, the object: what action or movement is happening? Third, the realm: what's the setting or environment it's taking place in? Fourth, the atmosphere: what's the mood, lighting, or feeling of the space? Fifth, the imaging: the camera work, like pans and zooms. And finally, the details: the style and quality. If you make sure to mention all of these, you'll have a much better chance of getting exactly what you want from the AI video you're generating. For example, notice how I'm emphasizing the first-person view: you're going to want to think about the perspective your video is viewed from. We're also directing exactly what happens at each second. So, just like those viral videos, after a few seconds the point of view descends smoothly beneath the shell-covered ocean surface, and then transitions into an otherworldly underwater realm.
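If it helps to keep the six SORAID components straight, you can treat them as a small checklist and assemble the prompt from them. Here's a minimal sketch of that idea; the class and field names are my own for illustration and aren't part of any invideo, Sora, or Veo API:

```python
from dataclasses import dataclass

@dataclass
class SoraidPrompt:
    """The six SORAID components of a video prompt."""
    subject: str     # who or what is in the video
    object: str      # what action or movement is happening
    realm: str       # the setting or environment
    atmosphere: str  # mood, lighting, feeling of the space
    imaging: str     # camera work: POV, pans, zooms
    details: str     # style and quality notes

    def render(self) -> str:
        # Join the six components into one prompt paragraph.
        return " ".join([self.subject, self.object, self.realm,
                         self.atmosphere, self.imaging, self.details])

prompt = SoraidPrompt(
    subject="A first-person swimmer's view",
    object="descends smoothly beneath a shell-covered ocean surface",
    realm="transitioning into an otherworldly underwater realm",
    atmosphere="soft volumetric light rays, dreamlike and serene",
    imaging="slow forward dolly shot from a first-person POV",
    details="photorealistic, crisp detail, shallow depth of field",
)
print(prompt.render())
```

Filling in every field before you hit generate is an easy way to catch a missing component, like forgetting to specify the camera perspective.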
Now that you have a better idea of how to create a good prompt, let's see what this one comes up with. Okay, that one came out pretty well. Now, let's see what Sora 2 can come up with. Notice that when I switch the model over here, the option for the last frame has disappeared. That's because Sora doesn't have that first-and-last-frame setup option, at least at the time of this recording. But that's okay. We can hit generate and see what it comes up with. Okay, that looks pretty good, but I'm going to have to give it to Veo 3.1 again for this one. So, Veo 3.1 remains the undefeated champ. I think it's just a bit better suited for this kind of ASMR genre, and the control it gives with the first and end frames definitely helps, too. But we don't want to determine which model is best based solely on some crazy ASMR videos. So, let's put them head-to-head again on another viral video angle: the AI avatar. You've probably seen these characters on social media, like J Monkey and this Yeti bro. I'm constantly getting these kinds of videos in my feed. Let's see which model can generate better viral characters like this. We're going to start by generating an image of a character that we can actually use, and this is where your creativity has to come in. Since I'm a New Zealander, I thought it'd be cool to come up with a kiwi as a mascot for my agency, Morningside AI. I'm just going to paste in this prompt that I wrote earlier; it's for a surfer-bro kiwi. And since I want it to wear a hat with my company's logo on it, I can attach my logo here. I'll select the Seedream image model and generate it as a vertical image first. I think that looks great. So, let's take this image, generate some videos, and see how Veo 3.1 versus Sora 2 stacks up. — Kia ora team, it's your boy Kiwi Kev coming at you from the best little island on Earth. Sun's out, jandals on, and I've already lost my surfboard twice, bro.
If you ain't wiped out, you ain't trying, mate. All good, bro. Just testing gravity. Eh, no worries, mate. — Kia ora team, sun's out, jandals on. If you ain't wiped out, you ain't trying, mate. Kiwi Kev saves the day. All right. Well, now like and sub. — So, these are both pretty good. I think we're just going to have to call it a tie, at least for these AI character videos. Viral video content like this is a good entry point because it's relatively easy to generate, as you saw, and it's something you can do yourself without relying on a team or finding clients. You can literally do what we just did: come up with a character, start making videos, post them yourself on social media, and try to monetize from there. And if you're successful and build a popular enough character and a channel around it, you could start doing brand deals, where companies pay for your AI influencer, like the guy we just cooked up there. And speaking of branded content, that takes us to our next business case: creating branded content with these AI video models. So instead of creating content you post directly on your own social channels, there's a growing opportunity for AI creatives to make branded videos for clients: product demos, UGC-style testimonials, and ad creatives that look like they came from a full production crew. For example, here's a quick product demo for O Lollipop that was created entirely inside of invideo. So, how do we create videos like this, and which model is best suited for this kind of content? The only way to find out is to make some ads and put these models to the test. And since beauty is such a huge niche on social, let's say I want to create an ad for a client in the space who's launching a new fragrance. Over in invideo, I'll paste in a prompt that I made earlier, add a reference image of a perfume bottle that I generated earlier with Nano Banana, and see what comes up using Sora 2. Honestly, I think that came out pretty good, right down to the realism of the spray coming out of the bottle. Sora is known for having realism grounded in actual physics, which allows it to handle these kinds of moments really well. Now, let's see how Veo 3.1 compares. We switch to the Veo 3.1 model and get the start and end frames that we can set. So, I'm going to add another image of the same perfume bottle as the end frame and see what it comes up with. That was pretty good, but I think I'm going to have to give this one to Sora 2. Even without the start and end frames, it handled the task a lot better and really captured the realism of the product with that spray. Since that was pretty close, let's do another one, this time for a different product. Let's say we're working with a client who's trying to market their energy drink.
And they send us a static image of their product, which is just the drink can, like this. Let's see what we can generate with Sora 2. That one was pretty good; it's got the condensation on the can and the fluid dynamics of the pour. Again, Sora is pretty good at handling the physics in these product ads. But now we can switch over to Veo 3.1 and see how that does. We're going to add our end frame and then regenerate. Yeah, that was okay, but I've got to give it to Sora 2 on this one as well. For this category, Sora 2 is pumping out high-quality content that really shows off the product and makes it feel like it was filmed with a huge budget and a talented director of photography or videographer. Remember earlier how I mentioned that invideo has these trends? These are basically pre-built viral templates that we can put our products inside of. When a holiday is coming up and you want to make festive branded posts for your business or your client, these are perfect for that. For example, we could place our energy drink in this jack-o'-lantern trend. Just note that these trends use a different video model; I know this is a Sora-versus-Veo comparison, but I wanted to show you how to make the most of the invideo platform. There are tons of features on here, so you can use this stuff for yourself or your current or future clients as well. — They locked the gates, but nothing stops this beast. — Now, let's talk about one of the most in-demand services right now, which is UGC, or user-generated content, and how you can start offering it to paying clients. Brands are constantly looking for short-form videos that feel authentic, like they came from a real customer or an influencer. But shooting those kinds of videos can be expensive or time-consuming, as businesses have to establish relationships with influencers or past customers, and make sure they have the product.
They have to set up a paid partnership in the case of influencers, and so on, just to get these testimonial-style videos out there. That's where AI video tools give you a massive advantage in supplying these types of videos to clients. So let's see how Sora 2 versus Veo 3.1 stack up for these kinds of videos. We're going to paste in a UGC prompt and attach a reference image of the product we want to promote, making sure to switch to a vertical aspect ratio, which is ideal for TikTok and Instagram. And then we're going to hit generate. — Okay, wait. This perfume actually gets compliments. It's warm, a little dreamy, and it lasts all day. The moment I spray it, people are like, "What is that?" It smells so good. Like, wow! — Perfect. Sora delivered a video that looks handheld, real, and relatable, which is exactly what brands want for social ads these days. Now, let's create a Veo 3.1 version. I'll set a start and an end frame, as we usually do, to control the camera motion, and use a still from the previous clip to keep it consistent with the same influencer. — Oh, okay. Wait, this perfume actually gets compliments. — In this case, both turned out pretty well, but Sora gives you that raw, influencer-style realism, while Veo feels a little too cinematic and overly polished, at least in my opinion. Ultimately, I think Sora wins this category as well. So many businesses need these kinds of authentic short-form ads for their socials, and they need tons of them to test so the ad machine can do its work and find the winners. You can now deliver these as a one-person AI creative agency: no actors, no shoots, no gear, just you, AI, and tools like this. You could even sell these as a monthly package, say 20 UGC ads per month for $500 or $1,000, or offer them on a per-video basis for $50 or $100.
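As a quick sanity check on that pricing, here's the rough margin math. The $3-per-clip generation cost is my assumed stand-in for "a few dollars per clip"; the package prices are the ones just mentioned:

```python
# Rough gross-margin estimate for a 20-clip monthly UGC package,
# assuming ~$3 per generated clip (a stand-in for "a few dollars").
clips_per_month = 20
cost_per_clip = 3.0
monthly_cost = clips_per_month * cost_per_clip  # $60

for package_price in (500, 1000):
    margin = (package_price - monthly_cost) / package_price
    print(f"${package_price} package: ~{margin:.0%} gross margin")
    # -> ~88% at $500, ~94% at $1,000
```

Even at the low end of that range, generation costs barely dent the package price, which is the point being made here.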
And with a generation cost of only a few dollars per clip, your margins on this are going to be huge. Now for our third business