All right, so as mentioned, this is a live build. I haven't actually done any of it yet. We've got a blank n8n canvas right over here that I'm going to be filling in over the course of the next few minutes with my thoughts, my trials and tribulations. I'm going to include all the detours and any debugging in the video so you guys can see how to actually deal with stuff like that, as opposed to just seeing a nice finished product. Here's the idea of the AI Facebook ad spy tool. We're going to start by scraping ads from Facebook. Ideally, we'll be able to do that using a search term. On Facebook, you have what's called an Ad Library, and if you put in a search term like AI automation, you get a bunch of ads. The idea is we could scrape these ads and then, one, have a data source for ad ideas, and two, probably get into some sort of parasite or repurpose flow, which, if you watch my videos, you'll know is one of my favorite things to do. From there, we'll categorize. I see three types of ads: text-only ads, text-and-image ads, and text-and-video ads. Then we'll have some sort of filtering step, probably time since posted, I don't really know, maybe likes or whatever. And then we have sort of three options here for analyzing these. For text, we have the GPT suite, not to mention more or less any other model. For images, we have GPT Image. But for video, I don't actually entirely know, and this is going to be kind of tough to figure out. I know Gemini has a built-in video analysis tool. OpenAI has GPT Image, GPT-4o vision, and a couple of others, and these technically allow you to analyze images. All a video is is a bunch of images, so theoretically we could just chop the video up into a bunch of still frames and feed them in.
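The frame-chopping idea is simple enough to sketch. This is a minimal sketch under my own assumptions, not anything from the Facebook or OpenAI docs: picking N evenly spaced frame indices is all the "chop it up" step really is; actually decoding the frames would be a job for ffmpeg or OpenCV.

```python
def sample_frame_indices(total_frames: int, n_samples: int) -> list[int]:
    """Pick n_samples evenly spaced frame indices from a video.

    Each sampled frame could then be sent to a vision model as a
    still image, approximating video understanding.
    """
    if total_frames <= 0 or n_samples <= 0:
        return []
    n = min(n_samples, total_frames)
    step = total_frames / n
    # Take the middle of each chunk so frames spread across the whole video.
    return [int(step * i + step / 2) for i in range(n)]
```

For a 100-frame clip sampled 4 times, you'd get frames 12, 37, 62, and 87, one from each quarter of the video.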
Not entirely sure what we're going to do there, but yeah, we're going to have a couple of different models, we're going to muck around with some APIs. Then we'll generate a summary of the ad, just so that we know what it is, and then add it to our database and do whatever we want with it. This section over here I'm going to leave as an open door for stuff like parasite PPC while I do the scraping. If we're going to be summarizing it with AI anyway, we might as well write a quick little paraphrased or rewritten portion. So yeah, that's the idea. First thing we've got to do is obviously scrape ads from Facebook. Here's the reason I always start at the end rather than at the beginning: what I'm going to do here is find a service that allows us to scrape ads from the Facebook Ad Library. Off the top of my head, I've worked with enough of this stuff to know that there's a service available, but I start at the end because I want to verify that I can do the thing I actually want to do. I always do this in any build: always backfill. In our case, the output, the deliverable of the system, is the ads, so we jump all the way over there at the very end, verify that we can actually get this data through some automated means, and then finally I'll dump it into n8n, and you'll see just how quick it is to develop this way. Okay, so I was alluding to there being a system that already allows us to scrape this, and there is. It's called Apify. I talk about it pretty often in my videos; it's basically a big marketplace of different scrapers. And because it's a marketplace that puts two scrapers that do the same thing next to each other, the people who develop them are incentivized to keep the costs low and reasonable for users. So that's kind of cool.
Capitalism in a nutshell. There's also Bright Data, RapidAPI, and a couple of other platforms, but I'm just going to use Apify for now. Actually, the inspiration for this system was one I found called Facebook Ad Scraper. So if you type Facebook ad scraper here... sorry, Facebook Ads Library scraper, this one right over here. Oh, sorry, that's not the right one. If you go to pay-per-result and pick Facebook Ad Library Scraper, this one here, it's 75 cents per 1,000 ads scraped, which is extraordinarily cost-effective if you think about it. Also, I know this isn't really the purpose of this video, but I was thinking how cool it would be, maybe for a future system, to use an ad scraper as a source for organic content. You know how most of the time when people try parasite flows, they copy the same niche as the person, in the same content format, on the same platform? What if you used ads as the inspiration for blog content or something? I think it's totally doable. Anyway, the way this works is you feed in the URL of an Ad Library search. So I'm just going to feed this in here, and then you click scrape ad details. For total number of records required, I'm just going to do 20 for now. There are some filters down here that I'll let you play around with if you want, but I'm just going to run this puppy. What it's going to do now, using the Apify dashboard, is go out and find us some Facebook ads. Once I've verified that this works in the Apify dashboard, I can take this data and move it over to n8n. So while this is happening, over in n8n, why don't we add a manual trigger? Then we'll also add an HTTP Request node, because the HTTP request is what we're going to use to do the actual scraping. Okay, and I just jumped back here; it looks like the output is fine.
What I want to do really quickly is export this to a CSV, just to verify: what does this data look like, and is it workable? Is the data in the format that I want? Does it include all the fields that I want, and stuff like that? Logically, what do I want? I want a way to get the text of the ad, the images of the ad, and the videos of the ad, right? And holy mother of god, are these long. Look at all these fields, man. What the hell? 90% of these are empty. Okay. Anyway: "Attention HVAC business owners." So we have the text here. Snapshot body text, this looks like the main one. Yeah, but some of these are broken. It says product, brand, right? Doesn't look like a lot of them are broken, though. Oh, wow. Very interesting. Oh, that's really cool. Look at this person, they're competing with me directly. Never heard of this school before. Let me give them a shout-out. What's going on, big dog? Looking cool. I like it. Thank you very much, Cortex Tools. Maybe we should join and take some of their n8n blueprints. Just busting your balls, man. Keep it up. Okay, but anyway, I have all the data here now. So how do we actually get this stuff into n8n? What you have to do is an HTTP request. This is going to be us working with APIs. If you've never worked with an API before, don't worry. They seem intimidating at first glance; they're not. I'm going to walk you through the way I personally do it. What I'll always do is type in the term I'm looking for, so Apify in this case, but maybe PandaDoc or Instantly, and then "API docs". That'll take you to a page that looks like this. What you want to look for is the term authorization or authentication; sometimes they mix the two up. Here it'll tell you how to get your secret handshake to actually be able to use the service. This is historically the hardest part, especially for beginners, so we just want to solve it immediately.
So it actually tells us exactly that: you find your API token on the Integrations page in the Apify Console. I'm actually going to click this hyperlink, and voila, we now have the Integrations page. I see there's a new-token button up here, so I'm going to create one called AI Facebook ad spy tool. Great, create. Once I have this, I can copy it and go back to the documentation. "Add the token to your request's Authorization header as a bearer token." So inside the header of the request, the key name will be Authorization, and the value will be the bearer token: Authorization: Bearer xxxx. In terms of how this actually looks, what we want is headers, then we want it to say Authorization, and then just Bearer plus the token. Voila, that's our authentication. We're now done with that stuff. But we don't just want authentication, obviously. We need to call something; we need to do something. So how do we rebuild the request that we just sent Apify manually over here, but via the API? There's a little API thing, "run from CLI". That looks interesting. This is the ID of the actor right here. Is there anything else we could use? API endpoints, maybe. Run actor, that's cool. Run actor synchronously. Run actor synchronously and get dataset items. Nice, this is the one we want for sure. So if I copy this, what is this? Just the URL. Oh, and we even have our Apify token in as a query parameter. That's crazy. Okay, well, anyway, I've done this enough to know there's a variety of ways you could do it. You could copy this. I actually just had a previous one open here called LinkedIn parasite scraper. One thing I'll always do in actual live builds is repurpose the HTTP modules from previous ones. This was also an Apify module, as you see here, so I'm actually just going to copy this token, and I'll run you guys through the way that it looks.
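To make the bearer-token bit concrete, here's a minimal sketch of what that header boils down to. The token value is a placeholder, not a real Apify key:

```python
def build_auth_headers(api_token: str) -> dict[str, str]:
    """Build the Authorization header Apify (and most modern APIs) expect.

    The scheme is simply: Authorization: Bearer <token>
    """
    return {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }

# Usage with a hypothetical token:
headers = build_auth_headers("apify_api_xxxx")
```

The double-Bearer mistake I make later is exactly what happens if you paste "Bearer apify_api_xxxx" as the token here: you end up sending "Bearer Bearer apify_api_xxxx".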
So what we're doing is sending a POST request to Apify. The URL is right over here, and then over here is the ID of the specific actor we want, so this is what we're going to have to fill in. Over here it says Bearer YOUR_API_TOKEN; this is where you just paste in your API token, and then there's a section down here that you can update. On Apify, the ID of the actor is always this element right up here: it's in your URL, and anything after /actors/ is going to be the ID. Likewise, anything after /input/ would be the ID of the input. If it said /instance/, it would be the ID of the instance; if it said /house/, the house ID would be right over there. The point I'm making is that in URL convention, you always have the thing and then the ID of said thing. So anytime something asks you for an ID, just go to the URL first. Easiest. And that's what we're going to do here to fill that in. Then down over here it says JSON. You can actually get the JSON of any actor just by clicking JSON over here on the right, and I'm just going to paste this in. Now I want to rerun the same request and make sure it actually works. It's saying the authorization is invalid. Why? Oh, because I added two Bearers. Let me run this again. One thing to do anytime you get an error is not to throw your hands up and say, "What the hell's going on? Oh my god, I'm so screwed." Just take a deep breath. In 99.9% of cases you made a mistake somewhere; no, the system didn't break, it's most likely your fault at some point along the way. And that's okay. You know everything that you did, so you just retrace your steps. I knew I'd made some error with the authentication, so I went to the authentication section and realized I had screwed it up. Okay, cool. So we just ran it, and now we get a bunch of output items. Looks cool to me. We have half of the scraper done already.
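The run-actor-synchronously-and-get-dataset-items endpoint we found in the docs is easy to assemble once you know the actor ID lives in the URL. A sketch; the actor ID and token below are placeholders:

```python
from urllib.parse import quote

def run_actor_sync_url(actor_id: str, token: str) -> str:
    """Build the Apify 'run actor synchronously and get dataset items' URL.

    Apify actor IDs take the form username~actor-name, and anything after
    /acts/ in the URL is the ID, per the URL convention above.
    """
    base = "https://api.apify.com/v2/acts"
    return (
        f"{base}/{quote(actor_id, safe='~')}"
        f"/run-sync-get-dataset-items?token={token}"
    )

# Usage with hypothetical values; you'd POST the actor's input JSON to this URL:
url = run_actor_sync_url("someuser~facebook-ads-library-scraper", "apify_api_xxxx")
```

Passing the token as a query parameter is what the docs showed; the Bearer header approach from before works just as well, and you only need one of the two.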
Why don't we give this a nice green color so we can check it off? So that's done. Now we just need to categorize: text, text-and-image, or text-and-video. How are we going to do that in n8n? I always do this with a Switch node, so I'm going to drop a Switch node down here. What a Switch node allows you to do is define routes. Right now we have one route called 0; we'll have another route called 1, and another called 2. Or actually, we'll do video, then image, then down over here, text. The way switches work is you make the most restrictive rule first, the second most restrictive second, and the third most restrictive third. If it finds a video, it'll take the top route immediately and not even look at the rest. If it finds an image, it'll take the image route. If it finds text, it'll go down the text route. And that's what we want, because a video ad is going to contain text as well, probably in the text field, so it should go down the video route. Image ads go down the image route, text-only down the text route. It'll be perfect, trust me. Okay, so how do we actually do this? Let's start by renaming the output to video, then add two more routing rules: image, and down here, text. Now we have to find fields that are specific to videos, images, and text. So I'm just going to Cmd-F "video" in our big Google Sheet and see if we get anything. We have 3,156 hits. I'm going to go all the way to the very first one; odds are that's probably the right one. I see snapshot cards. There are actually a lot of videos here; I don't actually know which one's the right one. Maybe I go back here and just type "video". Where's the video? There has to be a video somewhere here. Can we go to the schema? Oh, okay, here we go: video HD URL. There's this cards section, and there's also a videos section. I'm thinking I just use videos.
Probably the SD URL, I'd take it, because... let's actually just try this first. Yeah, so this is the ad right here, right? This is the video ad, and they're just showing an n8n flow. That's so funny: "I just built this n8n automation that does everything." Wow. Damn. Why are y'all even watching this video, man? Check out that one instead. No, don't do that. So that's the standard definition, and this is obviously the high definition. Standard definition is okay for what we need. So what I want to do is just check whether this exists. If it exists, it goes down the video route. If I zoom out now, you'll see we have three routes: video, image, text. We've now sorted out the video route; let's do the same for the image route. So maybe if I type image here... I see cards, but I have a feeling cards is not what I'm actually looking for; I think cards is some add-on. Okay, yeah: images, original image URL, that's probably what I want. So original image URL: only proceed down this route if it exists. What about text? I guess technically this would just be the fallback, right? We have a fallback route, so I could actually just remove this one. Fallback output, extra output... sorry, extra separate output? What's the extra separate output? Fallback. All right. So can I rename this? I'd love it if I could. Yeah, we can. Nice, cool, that makes it really simple. So of the 22 records we just fed in, if I pin this and run it, am I going to get a distribution across these, or am I just stupid? Okay, so six at the top, 12 in the middle, and four at the bottom. That actually kind of makes sense to me. So video probably just has videos. Does image have the videos? No. Does text have the videos? No, it doesn't. So my logic is sound. I'm incredible. Okay, does video have the image? No. Does image have the image? Yeah. And then does text have the image? Oh, text does have the image.
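The Switch logic is just ordered checks, most restrictive first. Here's a minimal sketch of that routing; the field names under `snapshot` are my guesses at the scraper's output shape, so treat them as assumptions:

```python
def categorize_ad(item: dict) -> str:
    """Mimic the n8n Switch node: first matching route wins.

    Order matters: a video ad also has body text, so we check for
    video first, then image, and fall back to text-only.
    """
    snapshot = item.get("snapshot", {})
    if snapshot.get("videos"):            # route 1: video ads
        return "video"
    if snapshot.get("original_image_url") or snapshot.get("images"):
        return "image"                    # route 2: image ads
    return "text"                         # fallback route: text-only ads
```

If you swapped the order and checked text first, every ad would land in the text route, which is exactly the bug the most-restrictive-first rule prevents.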
No, it's in the cards. Oh, weird. Okay, well, anyway, I think "images" means the actual image, and "cards" is some add-on that Facebook sometimes attaches, maybe to increase ad diversity or something. I don't know for sure, but that's the assumption I'm going to roll with. And I like that, because now we have three routes, so that's pretty cool. What else do we want? Filter. So we've categorized; now we filter by whatever params we care about, probably time since posted, right? We haven't actually done any filtering yet, so let's just think it through logically. What kind of filtering should we do? What elements do we actually get that we could filter by? Is there a number of likes on the ad or something? Because odds are we want good ads, right? Okay, page like count, that's one. Yeah, I'm not seeing anything else we could realistically filter by. I mean, we do have a lot of stuff; we could use the advertising info, we could do lots of things. What should we do? Maybe I'll just add a Filter node in between and we'll leave it for now. So, is there a like count? Okay, page like count; advertiser likes looks to be the same. Let's say this has to be greater than zero. I don't know if that's reasonable, but if it's greater than zero, they've got at least one like, so we'll move forward. This filter, I guess, is going to be optional. Why don't we rename it "filter for likes"? Let's call this one "run actor", and then... yeah, I guess "switch" is fine. So I just need to run this now and get the data through. Looks like we actually discarded one item from an advertiser that had zero likes, so that's good. I guess we'll pin this now, and then run the next step, which is: analyze text and image ads with OpenAI, video ads likely with Gemini. How do we actually go ahead and do this? Well, we kind of have our three routes, right?
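The Filter node here is a single predicate over the items. A quick sketch, assuming a `page_like_count` field like the one in the scraped data:

```python
def filter_for_likes(ads: list[dict], min_likes: int = 1) -> list[dict]:
    """Keep only ads whose page has at least min_likes likes.

    Mirrors the n8n Filter node: page_like_count > 0 by default,
    so pages with zero likes are discarded.
    """
    return [ad for ad in ads if ad.get("page_like_count", 0) >= min_likes]
```

Raising `min_likes` later is how you'd tighten this to "established advertisers only" without touching the rest of the flow.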
Video, image, text. I'm going to start with the easiest one, which is text. So we'll go GPT, or OpenAI. On second thought, if I fed all six items into this in one go, it would probably take a long time to run, right? You know how AI can take 5 to 10 seconds, sometimes five, other times ten? Think about it: if I test this now, it's going to take up to 10 × 6, around 60 seconds, to run. That's really crappy from a testing perspective, and I don't think I want that. What I want instead is to loop over these items, because by looping over them, it runs one at a time, and by running one at a time, I'll know as soon as the first run is done. If you've never used the Loop Over Items node, that's what it does: it takes the six items, and you can set the batch size over here to select the number of records to process per run. Basically what happens is, with our six items, it'll do the first item, then the second, third, fourth, fifth, sixth, and once it's done with all six, it goes down the done route. That way we can visualize these things finishing one by one. So I'm just going to feed this into OpenAI right away. If you've never connected this before, what you do is go over here and grab your OpenAI API key. It may seem a little scary right now, but it's not. Just head over and create an OpenAI account, then go to your API keys page. I'm not going to open that, because I've revealed my OpenAI API key way too many times. You just paste it in here; there's no magic. When it connects, it'll be green. And OpenAI is probably one of the best-documented APIs out there, so I'll leave it at that. I want to use GPT-4.1 as my model, so let me do that. And then what I want to say is: you're a helpful, intelligent advertisement analysis bot.
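Loop Over Items with a batch size is just sequential chunking. A sketch of the idea, independent of n8n:

```python
from typing import Iterator

def batches(items: list, batch_size: int = 1) -> Iterator[list]:
    """Yield items in fixed-size batches, like n8n's Loop Over Items node.

    With batch_size=1, each ad is processed on its own run, so while
    testing you can watch results arrive one at a time instead of
    waiting a minute for the whole set.
    """
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
```

Once each batch finishes, control returns to the loop, which is the equivalent of n8n's done route firing after the last batch.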
"You analyze advertisements." I don't actually know if that's good, but let's just say, "hey, what's up," and actually go and test this. Okay, let's execute just this node. It runs everything, and we get, "Hello, I'm here to help you analyze advertisements." Cool, that's fine. Now we can actually go and build out the prompt. If you think about it, what am I going to do here? There are a couple of things. We want to analyze the ad, break it down, and spy on competitors with it, but if we're going to do this anyway, we might as well create some assets as a result, right? We might as well go through that additional work and create some sort of thing. We could spin the ad, flip it, repurpose it, rewrite it, or do something with it. So that's what I want to do. Think about it: we now have GPT Image, and GPT Image could literally recreate an image to pretty good quality. Veo, the new video model, could literally recreate a video, not the whole thing, but to reasonable quality, I'd say. So not only am I going to analyze the ad, break it down, and have it tell us how to remake it, I'm also going to add an additional column to my database and try to regenerate the asset, so that in the future, if we want to flip the ad or whatever, it'll be really easy for us to do. So what I'm going to say is: your task is to take as input a scraped JavaScript object (I'm just going to feed the whole thing in; I'll show you what that looks like) from an advertisement in the Facebook Ad Library, then summarize it, plus spin/repurpose/rewrite the advertisement, the ad copy. Output your results in this JSON format, and let's do two fields: the first is going to be summary, and the second is rewritten ad copy. You're doing this for strategic intelligence.
We run an advertising agency, and we're always looking at what competitor advertisers are doing. Okay: output your response in this JSON format, and then we'll add some rules. I'm just going to copy rules from this file; this is a build I did just the other day, and let me show you what I'd do. I always like having some writing-style rules, and I've taken a lot of this from a guy called Gu, who wrote a big manual of style. Very intelligent fellow. He's been prompting GPT since, I think, GPT-1 or GPT-2, so six or seven years now. Probably one of the foremost experts on planet Earth on how to do this stuff. He's created his own little manual of style that produces pretty high-quality output, and what I've done is take heavy inspiration from it, modify it a little, and use it myself anytime I have a writing task. The big thing, I think, is the classic style of Western writing, where the level of formality should be inverse to the topic's novelty. He talks about epigraphs, typographical stunts, experiments, etc. So this is what I'm going to use as my rules, and then I'll add: no rhetorical questions, and make your summary extremely comprehensive and analytical. Okay, let's do that. Now what I'm going to do is feed in this whole thing. How the hell do I feed in the whole thing? It's easy. All you do is go to expression, and check this out: JSON, to JSON string. What does this mean? I feed in the entire JSON string. Is this the most token-efficient? No. But is it the easiest? Yeah. AI is great at this stuff. So we can now output content as JSON, change the output randomness to something like 0.7, and execute. And you know what's going to happen: it'll feed this entire thing into the AI and spit out some sort of rewritten version plus a summary. Okay, so now we get all the data. Age 25 to 44.
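The "just stringify the whole object" trick looks roughly like this. A sketch, with the output field names (`summary`, `rewritten_ad_copy`) being the two we picked above; it isn't token-efficient, but the model copes fine:

```python
import json

def build_analysis_prompt(scraped_ad: dict) -> str:
    """Build the user prompt: the whole scraped object, stringified,
    plus instructions for a strict two-field JSON response.
    """
    ad_json = json.dumps(scraped_ad)  # equivalent of JSON-to-string in the n8n expression
    return (
        "Your task is to take as input a scraped object from an advertisement "
        "in the Facebook Ad Library, then summarize it and rewrite the ad copy. "
        "You are doing this for strategic intelligence.\n"
        'Output your response in this JSON format: '
        '{"summary": "...", "rewritten_ad_copy": "..."}\n'
        "Rules: no rhetorical questions; make your summary extremely "
        "comprehensive and analytical.\n\n"
        f"Ad object:\n{ad_json}"
    )
```

Dumping the raw object means you never have to maintain a field mapping when the scraper's schema changes, which is the whole appeal.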
We get "video tutorial." Oh, it's a video tutorial? Then why are we going down the text route, man? Are we going down the text route? Let's see here. Yes, we are. That's kind of odd; we have some videos in there, huh? Maybe instead of the original check, we look to see if the file contains the term "video-ho", because if it contains video-ho, it's probably a video being hosted somewhere. That probably makes more sense. So we look for "video-ho" like this, and if the string contains it, it's a video, you know? So why don't we do this: go JSON to JSON string again, and say string contains this. If it contains this somewhere in the string, odds are it's hosting a video, meaning it goes down the video route. I think that makes more sense, right? If not, we check for the image. Is there an image version of this? scontent, I think it's scontent. I'm not sure, but screw it, let's try scontent. That probably makes more sense to me, because we don't want to filter this wrong, right? I think I'll just check for "scontent-ho", and if neither matches, we do text. I don't actually know how many of these are really going to be text, though. Let's try executing this again. Yeah, we didn't get any text-only ones. To be honest, they probably all contain images or videos of some kind. So why don't we run this back, and now why don't we do 150? No, let's do 200. Let's unpin this and see if we can get anything that's text-only, because maybe those don't even exist. Do people even run text-only ads anymore? I don't know. Okay, what did we get? Eight, and then 186. So no, there's nothing in the text route, which means this clearly doesn't work as a switch. I think the reason why is because scontent-ho... oh, I get it. Somewhere in the JSON, they include the profile pic of the person, right?
So obviously everything is going to have a profile pic. All right, is there anything else we could do? Hmm, original image URL. Okay, this might have to be it then; we might just have to use the original image. If I run this now, with those hardcoded, what are we getting? 73 and 20, but no video, though. Why no video? Huh, weird. Do these not contain video-ho now? Oh, okay, this is actually an ID. My bad. We just use the videos field, actually. Yeah, videos. Okay, let's check this out now. Nice. Okay: 42, 73, 79. These are now texts, right? We do have cards, and cards include some images, but this isn't an image ad. Okay, great. Cool, figured it out. So now let's just do the text route: Loop Over Items, then the OpenAI node. What do we have to do? I guess we have to actually feed in the OpenAI node here, right? Okay. And we should probably also make a database. For the database, let's just call it Facebook ad library analyzer database, with a sheet called ads. And then what sort of data do we actually want? We're going to generate it and then pump it into a Google Sheet, right? I'm just getting a bit ahead of myself here, but for Google Sheets credentials, by the way, you just create new credentials and sign in with Google; that's pretty straightforward. What I want is that Facebook ad library analyzer DB, and then the sheet, ads. Okay, cool. I just wanted to verify this would work. It's not finding any columns because I haven't added any, but that's okay; we'll set that up. Then I'm going to go Option-Shift-T and do some reorganizing here so it looks a little cleaner. I know it's kind of messy right now because we don't have multiple routes on the switch, but that's fine. Loop Over Items is batch size one. And now what we have to do is just run this, basically, even though there's no data in here. Why?
Because we need access to the previous nodes, and now we do have access. Do you guys see? Now we actually have all these variables, so we're good. We can actually go... oh, sorry, we already have that. My bad. We just fed the whole prompt in, right? Which is cool. Now that we have access to that, we can actually dump this in here. So what data do we actually need? We want the summary and the rewritten ad copy, right? We should get some other data too, for sure. We've got to get the ad ID, so ad_archive_id. I don't know what the hell that means, but it looks important. Page ID. We should get the type, right? So it should be text, image, or video. We should also get the date added. I'm seeing everything here uses this convention called snake_case, where you use underscores, so let's make sure we stay consistent with that. What else? Page name. Is this organized nicely? I don't know. Ad body text? I don't know if that's actually what we want. Why don't we go summary, rewritten ad copy... and then what else? Well, I guess that's it: summary and rewritten ad copy. Yeah, I guess that's all we really need. Maybe we should have an image prompt too, and then a video prompt. Screw it, we might as well, right? We now have tools available that can generate these things for us, and we might as well make our system as future-proof as possible. So now that we have this, what we need to do is feed in all of this data, but we need to feed it in from the Loop Over Items node. Now, what's really unfortunate about the Loop Over Items node is that a lot of the time you just don't have access to that data in the n8n schema view. So go over to JSON, then manually select Loop Over Items, and then you'll be able to see it. Little hack. It's just some bug they have with this, but it's a quick and easy way to solve it.
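Since the sheet columns follow snake_case, a tiny helper keeps any new column names consistent. A sketch; the column names themselves (summary, rewritten_ad_copy, and so on) are just the ones chosen above:

```python
import re

def to_snake_case(name: str) -> str:
    """Normalize a column name to snake_case, matching the sheet's convention.

    Handles both spaced names ("Rewritten Ad Copy") and
    camelCase names ("adArchiveID").
    """
    name = re.sub(r"[\s\-]+", "_", name.strip())          # spaces/hyphens -> underscores
    name = re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name)  # split camelCase boundaries
    return name.lower()
```

Running every header through this once means ad_archive_id, page_id, and friends never end up mixed with "Page Name"-style columns in the same sheet.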
So I'm just going to drag over page ID; type is going to be text; date added is just going to be an expression that adds the current date. Page name is going to be here. Summary is back in the OpenAI node, this one right here. Rewritten ad copy is right over here. Image prompt: we're not going to have any, because there's no image, right? And yeah, that looks pretty good to me. Then, once we're done, I'm thinking we should probably add a Wait node. Let's add a Wait node of 1 second as well. I don't know why this is so ugly; let's do a little reset, and cool. Now let's execute the workflow. This should loop one item at a time, meaning we can actually see the input to each run and do much better debugging, because we have variable time here. Anytime you're working with a variable-time automation, you just always want to have some sort of wait, and you also always want to be able to visualize it. Sorry, let me rephrase: anytime you're working with Google Sheets, you want a wait; and anytime you're working with a variable-runtime API like AI inference, you just want to do it one at a time, or two or three at a time or whatever, so you can visualize it. But anyway, cool. So we're now scraping all of these pages, and we're also grabbing the other information we want. We get the type, the page ID, that's cool, the page name. I see that being valuable. You know what, we should have grabbed the page URL. Actually, maybe we should add that in there. I mean, what's the purpose of the page name alone? I'd rather have the URL, you know? If I'm using this, I want to be able to click through really quickly. Okay, so let's stop this and go back here. What's really cool in n8n is that when you add a new column, you just refresh it and then you can map just the one that you want.
So the page URL has to be somewhere here, right? Page profile URI. There you go, that looks good. Cool. So now we'll fill in the URL as well for each ad. This looks pretty good. Image prompt, video prompt: let's actually run through how to do those. So this is the one route; if you think about it, we're going to want another route for image. I'm just going to duplicate that, and it'll feed into Loop Over Items. Then we're also going to want another route for video. Now, I don't actually know how the video is going to work. As I mentioned, we have a couple of different options here, right? We could do video ads with Gemini, or we could do something else, I don't really know, but this is more or less what it's going to look like, so let's just start from there. So: OpenAI, Message a Model. Okay, let's go here first, and then let's go image, or OpenAI image... yeah, Analyze Image. There we go. We actually have an endpoint literally built for this, which is incredible. So we're going to analyze the image. Now the question is, which image should we analyze? There are a lot of images we could potentially analyze, right? And there are a lot of different models we could use. I'm just going to use GPT-4o for the image analysis. And then: "What's in this image? Be extremely comprehensive." Okay. Now we just need to feed in an image URL. We don't have any input data here. Why? Because we haven't actually run this route. So I'm just going to leave this as the only connected thing, then run it right now and intentionally give it an error so that it stops right here. It should immediately try analyzing. Cool. Now we have access to all the fields here. So if it's going down the image route, it's going to have the image URL, right? So we can just feed in original image URL right here, and now we'll be able to analyze it.
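Under the hood, n8n's Analyze Image operation is a chat request with an image content part. A sketch of the payload it would send, assuming the scraped `original_image_url` field; no network call is made here:

```python
def build_image_analysis_payload(image_url: str, model: str = "gpt-4o") -> dict:
    """Build an OpenAI chat payload asking a vision model to describe an image.

    The messages array mixes a text part (the instruction) with an
    image_url part (the ad creative to analyze).
    """
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What's in this image? Be extremely comprehensive."},
                {"type": "image_url",
                 "image_url": {"url": image_url}},
            ],
        }],
    }
```

You'd POST this to the chat completions endpoint with the same Bearer header as before; the model fetches the image from the URL itself, so there's no need to download the ad creative first.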
What we want to do is feed in the image prompt plus the rest of the ad. So I'll say JSON scrape, then image description as well, and then I'm going to feed in the output of this. I don't actually know what the output is yet, so I can't say much there — let's just give it a go and actually have it output something. It's annoying that we have to hit the endpoint every time, but it is what it is. Let's see how it does. What does this image actually look like? "Attention HVAC owners. Get 20+ installs on your calendar." All right, cool. Nice, we got the content. So we can actually go in here — oops, let's stop this — and, if you think about it, just add an image description and feed that into the rest of this. And there's one more thing we have to do: this is currently grabbing JSON feeding in from the previous node, but we actually came from Loop Over Items. So we need to change this. It's `$('Loop Over Items').item`, I think — `.item.json`. There we go. `.toJsonString()`. There we go. All right, cool. Should all be the same now. Wonderful. Once we're done with that, we can grab all of that data and feed it in. And we're also going to want one more thing: an image prompt. For the image prompt, we can actually just output this description directly, right? We're not outputting an image prompt per se, but this is as good as an image prompt — it's extraordinarily high quality. And then we'll just do the same thing for the video prompt; at least that's my idea. Then we have the exact same flow. So why don't we test this with one full run? Oh, sorry — there's one more thing we have to change. You know where it says type? We've got to set it to text. There we go.
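The "image description plus the rest of the ad" assembly above looks something like this in plain code — a sketch where the field names are assumptions, not whatever the scraper actually calls them:

```python
import json

def build_summary_input(ad_item, image_description):
    """Fold the vision model's description into the scraped ad record
    before the combined blob goes to the summarizer prompt."""
    enriched = dict(ad_item)  # don't mutate the original loop item
    enriched["image_description"] = image_description
    return json.dumps(enriched)
```

The n8n expression equivalent is what gets wired up above: the loop item's JSON stringified, with the description appended into the prompt.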
Let's see if that gets dumped. Probably not, because I think it instantiates with the JSON at runtime, not afterwards. Yeah, it didn't — we should dump that. Now, make that smaller as well. Let's try running this one more time. Just ran into some saving issues here. We should now have this feeding in with the image. Cool — now we have image. Wonderful. Turn that off. So, what have we done? Well, if you think about it, our automation now does two-thirds of what we want. We have the text route down here, and the image route over here. Now we just need the video route, right? So we're done with text and image; we just need video ads. How the hell are we going to do video ads? Good question. I don't know. Not entirely sure. We're going to look at the Gemini docs, which they're actually ranking for now — interesting. So I'm going to get an API key, and we're just going to get this puppy up. Get API key, please. Is this it? Do I have my API key? Wow, that was easy. Okay. Yeah — create API key, please. Uh, permission denied. Hmm. Oh, my bad. Let's go. Google Assistant. Copy. Okay. So once I have this, what do I do? I copy it, and now I guess I go back here and make an HTTP request to Gemini. So: import the curl. This is where the Gemini API key goes. Where's that API key, please? Do I not have it? Oh, yep — copy that, paste it in, and run. Will this work? That is the question. Oh wow — yeah, it did work. Nice. Cool. All right, so that's that. How do I do the rest of this? I want to do a video now. Video understanding — there we go. This is what people were telling me about. "Upload a video file using the File API before making a request to generateContent. Use this method for files larger than 20 megabytes or videos longer than approximately 1 minute." We can also pass inline video data — "use this method for smaller files and shorter durations." So what are we working with here, right?
Like, what kind of videos are we working with? How big is this one? 22 seconds, and it's 363 kilobytes. So we could probably just feed that in inline — that'd probably be fine. What about this one? Save that. How big is it? Yeah, it's over a megabyte. So, okay — there's no way to know 100% for sure whether a given ad will be over or under the limit, so we should just always upload the file. I don't like that I'm not seeing a simple, easy way to do this through a curl request. Yeah, it looks like they cut the video into frames — one frame per second. Why do I not have all of this? Do they just really not like people doing things through curl? No, we've got it — it's REST, right? So what about back over here where I was looking at the REST docs? Oh, sorry, I'm silly — we do actually have the curl request right here. Okay, so this is just a script that — okay, yeah. So this is what we do. We curl — I don't know what the hell a temporary header file is. This is really annoying; why can't we just do it all in plain text? Oh, I get it — it's actually showing us everything. So this is the request we'll use to generate the summary, and this is the request we'll use to upload the video. But why is it three steps? What I'm going to do is just upload the video data. So: upload URL. Where's the upload URL? In the temporary header file. Okay. Is this just somewhere in n8n already? Maybe I'll just search "upload video Gemini n8n." Okay, let's take a peek at these threads. We've got a bunch of other YouTube videos here showing people how to do this. I mean, I could just read through the API docs, but I'd like to see it directly. So, who is this? Let's see here. Oh, nice, dude — Scarface. Very cool. Okay, so: sending a file to the AI for processing. Let's see. So we're uploading here, and the key — this is whatever the guy's own API key is. What's going on here? MIME type video/mp4, content length.
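The decision rule quoted from the Gemini docs — inline data for small, short videos; the File API for anything over ~20 MB or ~1 minute — can be written down directly. A small sketch of that check (the conclusion above stands: since ad video sizes vary unpredictably, always uploading is the safe default):

```python
INLINE_LIMIT_BYTES = 20 * 1024 * 1024  # Gemini docs: use the File API above ~20 MB

def should_use_file_api(size_bytes, duration_s=None):
    """True if the video should go through the Gemini File API rather
    than being passed inline: over 20 MB, or longer than ~1 minute."""
    if size_bytes > INLINE_LIMIT_BYTES:
        return True
    if duration_s is not None and duration_s > 60:
        return True
    return False
```

The 22-second, 363 KB ad above would qualify for inline data; the over-a-megabyte one still would too, which is exactly why size alone can't be trusted ahead of time.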
Do I have to have the exact content length? Hmm, how do we map it, though? Where does the file come from? "Generate upload URL" — oh, we upload it to Google Drive first and then do the rest after. Okay, this seems pretty annoying, but I'm just going to import the curl anyway. Seems fine. The API key needs to change — let's just paste the new API key in. Okay, this looks good. So: header, content length. Jeez, what the heck is the content length? Body, content type — is that it? "The node's input data should contain a binary file, but none was found." Really? Are they asking me to feed the entire thing in right now? Weird. Oops. Huh? I didn't get anything back. Okay, there it is. So what do I do next? Well, if you think about it, what I want to do is upload this file to Google Drive first. So: Upload a File. To authorize that, you need to connect a client ID and a client secret. It's a little more involved, but still doable — you just open the docs and walk through the steps to configure a Cloud Console project. Now, since I'm not going down the image or text route here, I'm going to feed the video in directly. So actually, let me execute this workflow and get the videos up here. I'm going to pin these so I don't have to rerun that anymore, and feed the videos in directly here. And we should do it through a URL. Can we do it through a URL? Oh jeez, we can't, can we? Yeah — URL. So we actually need to do one more thing: we've got to download the file first, with an HTTP request. Okay. When you have a video, you can download it directly by passing the URL in. So: video SD URL, feed that in over here. This will now download the file. Okay — once we've downloaded the file, we're getting an invalid URL. Why are we getting an invalid URL? Strange. This should be good, I think. Is there really an invalid URL, or is my n8n just bugging out?
It's just a GET — that's it. Strange. This seems like a very simple error. Okay, what if I hardcode it? I hardcode it; still invalid. Okay — so we actually got a bunch of data back. Why? 7, 8, 9... oh, don't tell me it ran 42 times. Yes — it ran 42 times. That is why there was an issue. Okay, cool: don't run this 42 times. The reason there was an issue is that I was silly — I got ahead of myself and didn't put everything inside the Loop Over Items node. So let's just get that in there. When you do this, each iteration only runs once, right? So let's loop now, and we should be able to run this without sending 42 HTTP requests simultaneously. Okay, cool. Now that that's done, we can grab the video URL directly from this body, execute it, and make another HTTP request — this should not break. Okay, so now we have it. Now that we have the binary data — do we even need to upload it to Google Drive? We have all the data here, right? Can we get the file size somehow? We may need to upload this to Google Drive after all, because we get the size of it afterwards. So when we execute this step now, it should upload to my Google Drive, and the output should be a bunch of metadata, including the ID of the file, but more importantly some byte count — quotaBytesUsed, I think that's probably what we want to use. And now we go over here to this other HTTP request, which will do the upload. There's an upload header content length — maybe content size? I'm just looking for a size variable here. Size — there we go. That's actually perfect. So size goes here, and everything else should be good for our beautiful body. Let me just move all this stuff out of the way, because it is kind of getting in the way now, and I'll just loop this back here for now.
And then let's make this a little prettier. We are going up the video route, right? Yes, we are. Okay. So now we should be starting a request all the way at the top of the Gemini video-analysis stack. And we are — okay. Now I'm just going to pin these so we don't redo that. Next, we actually upload the file. So I'll grab all this curl, go back here, make another HTTP request, and import the curl. This is the upload URL, so we can feed that in directly. Content length, number of bytes — I think that's just going to be the size from Google Drive. I don't know for sure, but I think it's bytes. Upload offset: zero. Upload command: finalize. And then body parameters: "audio path"? I have no idea what the heck that means. Not really sure what it's supposed to refer to. Is it just that the sample used an audio file? Yeah, looks like it's just an MP3. MIME type — what's the MIME type? I think we need to set this; we didn't already. Okay. Well, anyway — now what do I do? I guess I just feed in the binary, right? Binary file, and we do it from — whoa, this is a lot of stuff. How do we actually get the binary from over there to over here? I'm being kind of silly: Download Video, Upload Video to Drive, this is Begin Upload Session, Upload. Man, this is why I prefer the OpenAI API, if you guys couldn't tell — there's a lot less BS I have to do to make it work. Well, Upload Video — but yeah, I don't actually know how to do the actual video uploading, because I need to feed in this input data. Does that node need to be directly behind me, or can I just reference the downloaded video as binary? I don't know. Let's just go: Download Video, binary. Right — I don't think this is the input data field name. So... do we have to download it again? Maybe. My goodness.
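For reference, the "three steps" being puzzled out here are Gemini's resumable File API upload: start a session, read the upload URL from the `X-Goog-Upload-URL` response header (that's what the "temporary header file" in the docs' script captures), then send the bytes and finalize. A sketch of the two header sets, based on the REST docs' curl example:

```python
def start_upload_headers(api_key, num_bytes, mime_type="video/mp4"):
    """Step 1: begin a resumable upload session. The response carries
    the upload URL in its X-Goog-Upload-URL header."""
    return {
        "x-goog-api-key": api_key,
        "X-Goog-Upload-Protocol": "resumable",
        "X-Goog-Upload-Command": "start",
        "X-Goog-Upload-Header-Content-Length": str(num_bytes),
        "X-Goog-Upload-Header-Content-Type": mime_type,
        "Content-Type": "application/json",
    }

def upload_bytes_headers(num_bytes):
    """Step 2: POST the raw bytes to the upload URL, finalizing in the
    same request (offset 0 because we send everything at once)."""
    return {
        "Content-Length": str(num_bytes),
        "X-Goog-Upload-Offset": "0",
        "X-Goog-Upload-Command": "upload, finalize",
    }
```

Step 3 is the ordinary `generateContent` call that references the uploaded file's URI.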
This is going to be quite the superfluous system if I just end up re-downloading the same file I just uploaded — completely and utterly pointless. Let's search "n8n reference binary data from earlier nodes." There has to be a way to do this. I mean, that solves my problem, right? Just call data. But it seems ugly to me — really ugly. Do I care about ugly? Kind of. Oh jeez, we did upload the video. Oh no — bad request, check your parameters. Why don't you check your parameters, man? "Upload has already been terminated." What are we talking about here? So, did the first one work? Oh yeah, the first one works. Wow. Jeez. Nice, nice. Okay, cool. We did feed the API key in, right? Oh, did we not? We didn't feed in an API key. What, are you crazy? We've got to feed in the API key every time. The query parameter is key — so "key," then we paste in the key. Duh. All right. Okay, that's fine, I guess. So why don't we just try doing this again, but — well, not manually, just without the pinned data. We'll actually run it. That's fine. That looks fine. Now we've actually done the upload. Okay. So, now that we've done this, can we actually ask it a question, please? Ladies and gentlemen, can we ask it a question? For the love of God, let's just feed in one final HTTP request — import the curl. Apparently it's in an unsupported format. Why? I don't know. I have no idea. Maybe we've got to get this in here. Nice. We need the API key from the previous node — let's go back, copy that, paste it in. The body parameters look very ugly. Don't know why they look that ugly, but they do. Oh, now they look good. Okay, the prompt is "describe this video in detail," the file data MIME type was video/mp4, and the file URI — where's the file URI? Right over here. Let me just open this up and see if there's anything else. The MIME type we can actually just feed in from here, right?
Yeah, I mean, this looks good to me. So once we're done with that, we should be able to pin everything and run "describe this video in detail." I'm fully expecting an error here — and, completely and utterly unsurprisingly: unknown name "contents", cannot find field. So we need body parameters: name, value. Oh, right — I'm feeding this in here when I should be feeding it in there. So now, if I run this, will it work? That was a major issue I had. Executing... cool. Yeah — video done. "Here's a detailed description of the video. It's an ad for an AI agent that helps find new leads for your business. It opens with a calendar and text that reads, 'If you have a goal to get 10 new customers this week,' then promotes an AI agent." And so on and so forth. Cool. So now we actually have the video analysis. Now, this sucks — the fact that we have to make this many calls sucks, and I'm not really sure we actually need the Google Drive upload/download step. I'm just being really silly today, and I don't know off-hand whether you can map binary data from a node three or four nodes back. But I mention this to say: you don't actually need to be an expert with n8n or any other platform to make money with it. Technically I hacked together a solution here that I'm sure isn't ideal — I'm sure there's a cleaner way to do this — and you can still package up crappy, janky solutions like this and make money. So I'm not telling you you have to do it this way. And I'm sure I'm going to have ten million n8n people in the comments going, "What the hell? You don't do it that way." But whatever — I'm lazy, I'm going to leave it at that. The question is: now that we're done with this, now that we have the description, what do we do? Well, we just feed it into basically this flow. So all I'm going to do is delete this, and this section of the flow here — delete that too — and feed this in.
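The final call that was erroring on "contents" has a fixed shape in the Gemini REST API — a text part plus a `file_data` part pointing at the uploaded file's URI. A sketch of that body (shape follows the public docs; the prompt string is ours):

```python
def build_video_analysis_request(file_uri,
                                 prompt="Describe this video in detail.",
                                 mime_type="video/mp4"):
    """generateContent body referencing a file previously uploaded via
    the File API, rather than embedding video bytes inline."""
    return {
        "contents": [{
            "parts": [
                {"text": prompt},
                {"file_data": {"mime_type": mime_type, "file_uri": file_uri}},
            ],
        }],
    }
```

Getting the nesting wrong (feeding parameters at the wrong level, as happened above) is exactly what produces the "unknown name" error, since the API validates the body against this schema.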
The reason I'm copying this one is that it has very similar logic — it's just that instead of the image, we're feeding in the video. So: "You're a helpful, intelligent advertisement analysis bot. You analyze advertisements." Okay. Scrape, JavaScript object — cool. Awesome. All right: JSON scrape. Now we'll just do video description. Then, if you think about it, what are we feeding in now? Instead of feeding in that, we're feeding in this. So let me feed this in. Cool. Once we're done with this, we then go to Google Sheets. All of this should be the same — oh no, it's not going to be the same. This references Loop Over Items too, so we do have to update it. Okay: two. This one has to be two. This also, and this one here has to be two. But then summary, rewritten ad copy, and image prompt: what we're going to do is feed the output from this HTTP request into here. So it's actually going to be — oh, right, this isn't the right string either, so we should probably fix that. It's going to be this, right over here. And I know we can't technically map it right now, and that's fine. What I'll do here is, under video prompt, feed this expression in, and instead of it being `$json`, what we want is — `$('OpenAI 2')`? No, `$('HTTP Request')` — there we go — `candidates[0].content.parts[0].text`, right? Oh no, sorry — we use the OpenAI node, right? If you think about it — yeah, it's this one here. Then what are we doing? We're feeding in that output. So the output is just going to be like this. I got a little confused there, but the output's going to be this — "OpenAI 2," I think. The image prompt is going to be empty, this is going to be the rewritten ad copy, summary is going to be the same, page URL, type: video. Okay. Everything else here looks fine, so I'm just going to loop this all the way back around now.
And now I'm going to test our final route, and after this I'll rename everything. We should go down this route now — really quickly, blazing fast. The HTTP request worked again; we got the detailed description of the video. Now OpenAI is starting. Okay, and then Google Sheets. Okay, cool. And now we're waiting. Now we're just cycling over and over again — this is just going to do the same thing, obviously. Okay, if we go back to our sheet here, let's make this a little more visible: page name, type (text, image, video). Okay, so now we have the video ad. Page URL is here. These first four rows didn't have the page URL, so I'm just going to get rid of them so my sheet is nice and pretty. We've got the summary, the rewritten ad copy, and then the image prompt and the video prompt. Okay, let's see whether we got what we were looking for. "This ad from Enzo is a multi-placement campaign promoting their AI-driven automation tool, specifically for commercial real estate agents. It centers on the pain point of lowball offers and unqualified leads, promising to streamline prospecting and improve follow-up." So we actually get very detailed, deep analysis here, which is nice. Very cool. Then we have a rewritten version of the ad. Then we have no image prompt here — and it looks like we didn't fill a video prompt for this one. Why not is the question. So we should go back and figure that out. What was the issue with the video prompt? Oh yeah — my bad. What we wanted was to not feed this in here. I'm silly. Okay, so we'll feed that in, and that should work. Let's run through this. Okay: HTTP request, OpenAI. And let's just see if we get the ad properly this time. All right: "Here's a description of the video." Cool. That seems pretty reasonable. We should try making this even more detailed — simply "describe the video" is not enough.
"...in excruciating detail. Do not output anything but the description of the video." Okay, save that. And now — I'm going to overwrite and save this again — we have our whole three-route system. We have videos up top, images over here in the middle, and down at the very bottom we have text. Now that I've done this, I'm going to select all and restructure it so it's a little simpler, a little easier — space it out. And I'm also going to go through and rename the nodes. Why? Because if I don't, and I come back to this thing in, say, a month, I'm going to have no idea what the hell is going on; I won't understand why I made these decisions. Likewise, if I'm selling the system to somebody, I'm not going to know what the heck is going on. So "Run Actor" is fine, but maybe we should say "Run Ad Library Scraper Actor." Let's do that — it probably makes more sense, right? Can we get this on one line? That seems fine. "Filter for Likes." Switch. The first branch is just going to be "Loop Over Text Ads," this one here "Loop Over Image Ads," and the last one "Loop Over Video Ads." Now this is a lot more straightforward to me. First we download the video up here, then upload the video to Drive, then begin an upload session in Gemini — let's call that "Begin Gemini Upload Session." The Google Drive one is going to be, I don't know, "Redownload Video." This is going to be "Upload Video to Gemini" — again, I'm only doing this because I feel I have to. The last one here is "Analyze Video with Gemini." Over here, this summarizes, so "Output Video Summary," then "Add as Type Video," and we'll just have the wait. Okay, that's the level of detail I'm going for. Analyze Image; over here we'll say "Output Image Summary"; this is "Add as Type Image." Down over here, "Output Text Summary."
And this over here is "Add as Type Text." Awesome. This looks pretty good to me. I'm now just going to add a little documentation at the end. The value of adding documentation isn't just that it makes my little template look sexier — again, it's for maintainability. Also, one thing people don't talk about: when you package automations like this and add a level of documentation, you can actually charge more, because you're also including some training. You can add an additional line item in the system you're selling and say "includes documentation." Then you can record a video — kind of like this one, though this is obviously really in-depth since I'm building the whole thing — that just goes through exactly how it works and the various places you can change variables and whatnot. You can also add a section at the very beginning here and call it, I don't know, "Variables" — use an Edit Fields (Set) node — and then draw from it for all future things. In case that's not clear, what I mean is you could use it to set the prompt here, and maybe the prompt over here, so you'd have a variable called "OpenAI prompt," another called "Gemini prompt," one called "API keys," I don't know. The point I'm making is that we're simplifying things for the end user, and in doing so we add value to the system. I don't know why this is a little bit off — do you guys see how it's not perfectly lined up? Weird. Okay, so what I'm going to do now is create a note, which I believe is just a sticky note. There we go. "AI Facebook Ad Spy Tool — steps." We'll say: add your API key to the Run Ad Library Scraper node; set the filtering threshold in the Filter for Likes node.
What else do we realistically need to change? Add your Gemini API key to the Begin Gemini Upload Session, Upload Video to Gemini, and Analyze Video with Gemini nodes. Adjust the AI prompts as needed, and swap in your own Google Sheet in the Google Sheets nodes. Happy building. Cool, that looks pretty good to me. What we can do now is stretch this out — and I don't know why this is over here. There you go. That looks pretty cool. Now there's significantly more context for this. You can imagine how, if you did eventually give this to a client or something, they'd have a bit more of that context and you'd be able to — I found it funny I almost used the term "demand." Not demand more for my services, but certainly ask for more. It looks like the default stretch has spread it out a fair amount, so I'm just going to make this a little more compact here. We could definitely squeeze that in more. This is the simplest route, so it should definitely look like the simplest route as well. And now I'm actually just going to run this — on live, new data. Hopefully my Gemini API key or whatever doesn't break, but we'll see. Okay, so we're going to feed in a bunch of items. Let me click execute workflow; it's going to run that ad library scraper. So now I'm doing end-to-end testing, and I'm realizing that in the Download Video node I put JSON snapshot cards — that is not the right field. Let's check if there are videos... that is the right one here. Okay, so that's good. That's why we always do an end-to-end test. I'm just going to pin these now, run through top to bottom, and see if there are fewer issues. Oh, actually, you know what? I don't like that we're keeping 94 items, because I want to cycle through this pretty quickly. So, I don't know — let's set the filter to greater than 100 and see how many we get. 54.
Okay, let's do greater than 500. Execute — 40. Okay, greater than, I don't know, 10,000. I basically just want five items, something like that. Wow, there are a lot of very high-quality advertisers here. 100,000 gives 29; 1 million — interesting, that kept just one. So 500,000 should give us significantly more. Okay, never mind — 200,000 should give us a few. There's clearly one big dog over here. Okay, anyway, this is getting kind of annoying, so maybe we'll just go with 100,000 and get our 13 or 29 items or whatever. Then we pin that and feed it into the rest of our flow. We execute it — and let's see what the big-dog advertisers are doing. Looks like only images, no videos in this batch, which is kind of — oh, we did have one image-route item, actually. I think one item. Yeah, looks like it. Okay, we're just doing our image route. That image route looks fine; we're going to test it end to end. Okay, now we're running through. Let's go back to our Google Sheet. Corvado, Hexaware — image, right over here. That's cool. Okay — I mean, I just tested the video route before and verified what the issue was there; now we're testing the image route, and I'm not seeing any issues, so that's cool. So maybe I'll actually go back and make my filter less restrictive. We'll go 10,000. Execute this step, unpin the subsequent nodes — that outputs 35 items. That's really not that many either. Let's go back to 1,000, and then maybe up to like 100. Okay. Then I'm going to pin this, and now when I execute the workflow, these should be distributed across videos and everything else. Yeah — okay. We've got two items going up the top route right here, so this is perfect. Let's actually test this, and let's just delete everything in the sheet first so we can start fresh. Looks like we got a bad request here. Now, why did we get the bad request?
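The threshold-tweaking above is just a numeric filter over the scraped items. A minimal sketch of what the Filter for Likes node does — note that "likes" as the field name is an assumption; use whatever key the scraper actually returns:

```python
def filter_by_likes(ads, min_likes):
    """Keep only ads whose page exceeds a likes threshold, so test runs
    cycle through a handful of high-signal items instead of all 94."""
    return [ad for ad in ads if ad.get("likes", 0) > min_likes]
```

Raising `min_likes` (100 → 10,000 → 100,000, as above) is the quick way to shrink a test batch without touching the scraper itself.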
"The file is not in an ACTIVE state and usage is not allowed." Hmm, what is occurring here? So it looks like we fed in a name, but because the file is not in an ACTIVE state, usage is not allowed. I imagine this could be a couple of things. The first thing that jumps to mind: we uploaded the video and immediately tried analyzing it. We might just need a one-second wait in between the two to give the backend enough time to process. I've had this happen with a number of other APIs. So I'm just going to add this wait here, then rerun and see if maybe that works — that's the very first test. If not, there might be something we screwed up specifically in the Upload Video to Gemini node. So let's see what happens. Begin Gemini Upload Session — that worked fine. Uploading the video to Gemini — looks like that worked fine. Two seconds... analyzing video... no, I'm still getting "the file is not in an ACTIVE state and usage is not allowed." So why is this not in an active state? Let me just search this quickly and see what we've got. Doesn't look like this person found the issue... "We need to check its status and wait until it becomes ACTIVE." Okay. So it looks like there is some wait time we need. Anyway, it's good to know I'm on the right track here. It's kind of annoying that we have to wait, but — because you know me, I like my hacky solutions — what if I just carte blanche wait 10 whole seconds and then do everything again? Is that going to work? If I can get this to work once with the 10 seconds... maybe we could do a callback URL or something; we'll see. We're waiting 10 seconds... yeah. So 10 seconds is sufficient; 2 seconds was not. Is there any way to set a callback URL on the Analyze Video with Gemini call? Doesn't look like there's a callback. Hmm. Let's double-check. All right.
Well, let's see how this worked. Okay — it worked on the first run, but on the second run we had an issue: invalid URL. Why? What are we doing here? Is there no video SD URL? There is a video SD URL, but apparently we had an issue with the URL on the most recent run. Or maybe we didn't actually feed it in. Oh yeah — we don't actually have a video over here. Why is that? (Oh, that's annoyingly loud; let me turn this down so it doesn't happen again.) Okay, so it looks like it's not getting anything in snapshot videos. That must mean that if it fed in, something in our filter here was wrong — I think because we're doing a different sort of logic here, right? Yeah. So what we probably need to do is go back to videos, and instead of original image URL, it's the SD one. Let me just see — what does this do again? Video SD URL, right? And this needs to exist, because if it doesn't — let me actually drag that in and see — yeah, okay: if this doesn't exist, we're not getting anything, and then the download can't happen. All right, let's try this again. Execute the workflow. Okay, we're now waiting the 10 seconds — it's just a flat wait, which is not ideal. So we should probably implement some error handling in case this doesn't work: maybe wait a little while and retry. And you know what, we can actually feed this directly into the HTTP request — I'm being silly; there's built-in logic that retries on fail. Instead of 10 seconds, we'll do 15 seconds between tries, with max tries three. So now this reruns up to three times. And over here in the Wait node we'll just have a 15-second wait first, and everything after that gets retried automatically. This should eliminate, I want to say, 90% of the issues right here, and the retries will eliminate most of the rest. So that makes me happy — pretty confident we're always going to have a working system as a result of this.
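A cleaner alternative to the flat 10-second wait would be to poll the File API until the uploaded video's `state` field flips from `PROCESSING` to `ACTIVE` — that processing state is exactly what triggers the "not in an ACTIVE state" error. A sketch of that polling loop (the state-fetching callable is injected, which here would be a GET on the file resource):

```python
import time

def wait_until_active(get_file_state, timeout_s=60, poll_s=2, sleep=time.sleep):
    """Poll until the uploaded file reports ACTIVE, giving up on FAILED
    or after timeout_s. Avoids both under-waiting (2 s wasn't enough)
    and blindly over-waiting a fixed 10-15 s every run."""
    waited = 0
    while waited <= timeout_s:
        state = get_file_state()
        if state == "ACTIVE":
            return True
        if state == "FAILED":
            return False
        sleep(poll_s)
        waited += poll_s
    return False
```

In n8n terms this would be a small Wait-and-check loop instead of one fixed Wait node, though the flat wait plus retry-on-fail above gets you most of the same reliability with less wiring.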
Maybe "always" is a bit of an overstatement, but still pretty good, all things considered. Cool, cool. Let's go to our Google Sheet — how's this looking? Type: video — and it's rewriting the ad copy dynamically for us. And then we also have a video prompt. Nice — wow, this is extraordinarily detailed. Nice. Okay, wonderful. All right, we'll leave it there. But yeah, that's the system in a nutshell. We've verified that it now works on image prompts, on video prompts, and now on text prompts as well. Pretty stoked about this. I think this sort of system can do quite a lot for businesses that are in PPC and whatnot, so we're going to see how all this stuff goes. Obviously I'm going to be releasing this template, so you guys will have access to it. And yeah, we'll leave it there. And that's