🔥 Join Maker School & get customer #1 guaranteed: https://skool.com/makerschool/about
📚 Watch my NEW 2026 Claude Code course: https://www.youtube.com/watch?v=QoQBzR1NIqI
⤵️ All templates & assets below
n8n Foundations Templates:
https://leftclicker.gumroad.com/l/ycdjfk
Asset-based Lead Generation Template:
https://leftclicker.gumroad.com/l/oejie
Deep Multiline Personalized Icebreaker Email Generator Template:
https://leftclicker.gumroad.com/l/jdvzm
AI Proposal Generator Template:
https://leftclicker.gumroad.com/l/bzvofk
Website AI Agent Template:
https://leftclicker.gumroad.com/l/qlopyw
Content Repurposing System Excalidraw:
https://excalidraw.com/#json=9ddFtqBBftO438qZ1eGUh,C-i2AvXXUKr3kTrkvEZ3Bg
LinkedIn Personalized Outreach System Template:
https://leftclicker.gumroad.com/l/vufgbx
📚 Free multi-hour courses
→ Claude Code (4hr full course): https://www.youtube.com/watch?v=QoQBzR1NIqI
→ Vibe Coding w/ Antigravity (6hr full course): https://www.youtube.com/watch?v=gcuR_-rzlDw
→ Agentic Workflows (6hr full course): https://www.youtube.com/watch?v=MxyRjL7NG18
→ N8N (6hr full course, 890K+ views): https://www.youtube.com/watch?v=2GZ2SNXWK-c
Summary ⤵️
A comprehensive n8n masterclass covering everything from core foundations to advanced AI automations — including lead generation, proposal creation, content repurposing, and personalized outreach — designed to help you build and sell powerful n8n workflows.
My software, tools, & deals (some give me kickbacks—thank you!)
🚀 Instantly: https://link.nicksaraev.com/instantly-short
📧 Anymailfinder: https://link.nicksaraev.com/amf-short
🤖 Apify: https://console.apify.com/sign-up (30% off with code 30NICKSARAEV)
🧑🏽💻 n8n: https://n8n.partnerlinks.io/h372ujv8cw80
📈 Rize: https://link.nicksaraev.com/rize-short (25% off with promo code NICK)
Follow me on other platforms 😈
📸 Instagram: https://www.instagram.com/nick_saraev
🕊️ Twitter/X: https://twitter.com/nicksaraev
🤙 Blog: https://nicksaraev.com/
Why watch?
If this is your first view—hi, I’m Nick! TLDR: I spent six years building automated businesses with Make.com (most notably 1SecondCopy, a content company that hit 7 figures). Today a lot of people talk about automation, but I’ve noticed that very few have practical, real-world success making money with it. So this channel is me chiming in and showing you what *real* systems that make *real* revenue look like.
Hopefully I can help you improve your business, and in doing so, the rest of your life 🙏
Like, subscribe, and leave me a comment if you have a specific request! Thanks.
Chapters
00:00:00 Introduction
00:02:02 n8n Foundations
02:00:33 Asset-based AI Lead Generation
02:29:34 AI Custom Proposal Generator
03:21:37 Website AI Agent
03:37:17 Social Media Content Repurposing Engine
05:02:55 YouTube Video Trend Detector
06:39:20 LinkedIn Customized Outreach System
07:50:20 Deep Personalization Icebreaker Generator
08:20:54 Outro
Introduction
Hey, welcome to the most comprehensive n8n building masterclass. This is over eight hours of pure live automation builds, and my goal is to take you from a pure beginner and turn you into somebody who can build professional-grade AI workflows that a business is willing to pay thousands of dollars for. If this is your first time here: I'm Nick. I scaled my own AI automation agency to $72K a month, and I now run the biggest paid AI automation community, with around 3,000 AI automation freelancers and AI agency owners. Most of them land their first paying client within 2 to 3 weeks of joining, thanks to our daily accountability program and our proven frameworks. So I don't just want this to be another cookie-cutter n8n tutorial — there are millions of those. My goal instead is for this to be a comprehensive guide to real building. In particular, I'm going to be creating eight high-value n8n workflows that you guys can start selling immediately. What that means: I'm not just going to show you a pretty finished product. In many cases, I'm actually going to build the entire thing alongside you live, as if I have no idea what I'm putting together, and I'm going to do it while narrating my thought process out loud, because I think it's much more instructive for people to see what a real build process looks like versus just the shiny finished product at the end. So we're going to cover everything from the absolute foundations here to advanced systems that can command $3,000 to $10,000 per implementation. I've also added timestamps for every section in the description, meaning you guys can just jump around to what you need most and then come back for the rest. And also, just make sure to bookmark this video so you can reference the workflows again and again as you build and scale your automation company.
Okay, whether you're looking to automate your own business, start an automation agency, or maybe just add some high-value skills to your freelancing toolkit, welcome to the complete n8n building masterclass. Before we start: this masterclass is all about live builds. The purpose is to show you, as I mentioned, what a real live build process looks like, and then I want you to internalize the parts of automation that you can't really learn by reading — only by watching somebody who knows what they're doing do it. So to make sure everybody's on the same page, this first section is going to include an introductory course on the foundations of n8n. I'm going to include content on simple logic, workflows, some basic nodes, and maybe some more. I have this elsewhere on my channel, so if you guys have already seen all of this stuff, you can jump right to the next section. The key here is to build a solid foundation that will support everything else we do later, because when we start building those $5,000-plus automation systems, you do need to understand some core fundamentals inside and out before you can get to the good stuff. Let's now
n8n Foundations
dive into the n8n foundations that are going to make everything else possible. So, if you guys have seen some of the other videos on my channel, you'll know that I put a very big emphasis on building things practically. I don't really care much for the academic side of things. I prefer we just dive right in and then teach some of these concepts by actually putting nodes together, making workflows that you could sell for business purposes or implement into your own companies. And that's what we're going to be doing today. We can't really get away from doing some of the academic stuff with JSON, just because, if you don't know JavaScript Object Notation, you are going to have to learn some things like types and objects and what variables are. But for the most part, we're going to be learning all of these concepts by building two workflows. The first workflow is up here. Essentially what this does is feed a bunch of lead data into artificial intelligence. This is the Google Sheet containing four leads with a bunch of information. So we're going to feed this information to AI. Then we're going to have AI generate a subject line for an email, an icebreaker, an elevator pitch, a call to action, and then a little postscript (PS) sign-off. The idea here is that this is a real workflow that people pay me for, and so these are the ones I want to start with. Essentially, this will allow you to customize email outreach so that it seems as if you've done a lot of research into the person, which is very, very valuable in a business setting. The second workflow is a little bit more peculiar, I guess. There's this service out there called Source of Sources. It used to be called Help a Reporter Out, or HARO.
Basically, the way that it works is this lovely gentleman here, Peter, will send you a bunch of information about where journalists are looking for professionals in a certain industry to weigh in on some developments. And then if you're a professional in the industry and you give the journalist some good info, they can actually tag you and use you in an article. It's a quick and easy way to basically get listed in a magazine or some very authoritative data source. And what this system does is it basically gets an email like this. Then it pumps the titles into AI, does some cool processing, and then we actually write a draft of that email as if we were an expert in that field — all you need to do is quickly review it, give it a once-over, edit a little bit, and then send it off to journalists. That's another system that I've sold a number of times. And so I want the videos that I create on n8n to be practical in nature. I want them to be on things that you're probably going to be using for business purposes. So these are the two systems that we're going to be building. They're cool systems and all, but for the purpose of this video, I kind of want to build them from scratch. So why don't we just exit out of that puppy? That's the prompt that we're going to be using for AI, and for the rest of this, we'll just get rid of that. Okay. The very first concepts I want to cover are fixed fields and expression fields. I'm going to be basically convincing you to just use expressions all the time. And I'm also going to show you how to map different field inputs, because the last time that we jumped around n8n, we built a couple of workflows, but I was sort of glancing over some of the nuance behind fields and stuff like that. So, for the purposes of demonstration, it's going to be pretty easy.
We're just going to build the first system out by clicking a button that is going to get a bunch of data from our Google Sheet, and then we're going to pass all of that data, row by row, into AI. Pretty simple, pretty straightforward, but ultimately something that is very useful and that you'll find yourself doing quite a bit if you do cold outreach. So, first things first, I'm just going to press Tab. That's going to open up this trigger panel on the left-hand side. And then if you type "trig" here, you'll see a ton of options. You could either do that, or you could just scroll down to the bottom where it says "add another trigger" and then press "trigger manually". Either is fine, but for the purpose of this demo, I'm just going to do that. And basically what I want is, the second that I run this trigger — when I click on "test workflow" — I want to get all of the rows and all the data in my Google Sheet. So, I'm going to go up here to nodes and just type in "sheet". We're going to get Google Sheets. Now, if you scroll down here, you'll see there's this one node operation called "get row(s) in sheet". That's what I'm going to click on. Now, I've already done a connection before. What you're going to have to do, if you want to connect to your Google Sheets account, is go to "create new credential". Keep in mind, if you're not on the cloud-hosted offering, these sorts of connections are a little bit more difficult — you have to go to Google Cloud Console and get set up there. But because I'm on the cloud-hosted offering, all I need to do is click "Sign in with Google" over here, and it'll actually connect to my email account and create a credential for me, which is pretty handy. You can see it's saying it already has some access, just because I've already done this connection.
But for the purpose of this demo, I just wanted to show you guys what that looks like. And then I'm just going to save this connection with a name so that it's nice and organized. Okay. And what we want to do is grab this Google Sheet up here, right? So, in n8n, there are a variety of ways to do this, but I'm going to be looking for a document — sorry, a sheet within a document, my bad. The document we're going to select is "from list", and we're going to go down and choose this one called "Leads, January 27th, 2025". As you can see, we've already manually found that. Okay. And then the specific subsheet that we want is this Sheet1, because I guess that's the only one we have here. So, I'm just going to click on this. It's going to do an API call to Google Sheets, find that there's only Sheet1 here, then I can give it a click, and voila. Let me just run "test step" and see what happens when I click test. Okay, great. So, I've obfuscated this data. This data is not actually one-for-one whoever this person is; I've gone through and renamed them and so on, just for privacy purposes. But as we see on the right-hand side here, we have a bunch of output. And we can see the output in a variety of ways: table, JSON, schema. Most people in n8n like the schema view because it compresses the information nicely, and it's a little bit easier for them to see. I'm a little zoomed in here, too. Normally you see most of the node variables, which is pretty handy. But this JSON view over here — this is really intimidating for a lot of people. And so we're going to cover it in detail. I'm going to show you exactly how you read JSON and what all those things mean.
But I just want you guys to know that for the remainder of this course, I'm going to be using primarily the JSON and schema views, and I'm actually going to tend towards JSON. The reason is that JSON says all the same stuff that the schema view does anyway. But part of the way you learn JSON is just by staring at it a lot, squinting at it, and intuitively absorbing the formatting. If we're going to be looking at outputs all day anyway, we might as well kill two birds with one stone. All of the same data in JSON is represented in schema anyway; it's just that instead of quotes around key names and so on, you get a light gray box alongside a type sign. So the point I'm making is we might as well double up and learn how JSON looks while we're proceeding with the course. Even if it looks a little bit more intimidating, don't worry too much about it. Okay. So, I said that I'd talk about fields, right? Fields in n8n, just as I was covering in the previous video, are stuff like this: we have this center node config panel here for our Google Sheets node. One of the fields we selected was this YouTube credential to connect with. Others were the resource (sheet within document), operation, document, and sheet. But I want you to know that these are actually all representable in code as well. So as you see over here, we have two different types of fields: one's called fixed, and the other is called expression. By default, just to keep your life easy and to not freak you out — especially if you're a newbie — n8n is going to keep all the fields on the fixed type. But if you click on expression, you'll see that things are now a little bit different.
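To make the JSON view less scary, here is roughly what one spreadsheet row looks like once n8n wraps it as an "item". This is an illustrative sketch — the field names and values are made up to match the demo sheet, not copied from it:

```javascript
// In n8n, each row becomes one item in an array, and the row's columns
// become key/value pairs under the item's `json` property. The schema
// view lists these same keys; the JSON view shows the raw structure.
const items = [
  {
    json: {
      fullName: "Amy Wabby",               // column: Full Name
      title: "Chief Executive Officer",    // column: Title
      company: "Creative Web Solutions",   // column: Company (illustrative)
      summary: "Marketing leader focused on client results.",
    },
  },
];

// Reading a field the way an expression like $json.fullName would:
console.log(items[0].json.fullName); // "Amy Wabby"
```

Once you recognize this items-with-`json` shape, every node output in the editor reads the same way.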
Notice how, when I was on fixed, we had a nice little label that said "Leads, January 27, 2025", and when I jump over to expression, now we have some big long ID. So what is this? Why is it structured that way? And what exactly does any of this mean? Well, if we pay close attention to this ID field — a 1, a lowercase o, and then the rest of this big long ID string — and we go to our Google Sheet, what you'll see is that the ID string actually matches the URL of our Google Sheet exactly. This takes me to a wider point. Most of the time, anytime you're accessing a resource on some API, or even just on the internet, the service will store the ID of the resource — sort of a hidden representation of it — in the URL. So, one of the examples I provided the other day: I went over to ClickUp, and inside of ClickUp I searched around for some record. I'm just going to click on this here. This was my old content calendar: "How to send 1,000 cold Instagram DMs per day" — that was one of the things I wanted to do. This, right up here, is the ID of the record — 86B27A7Zm, right? If I wanted to do something with this through their API, this is the ID of the record that I would be calling. So I want you to know that ClickUp, Monday, even Gmail — basically every service out there — will store the ID of the thing you want to modify or update right in the URL. So if a field ever asks for an ID, you can almost always go to the URL of the thing in the actual app's user interface, find that ID in the URL, and hardcode it in here. Okay, so that's just a brief look at some of the differences between fixed and expression. Basically — just to make sure we're on the same page here — fixed is the simpler version.
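The "IDs live in the URL" point can be sketched as a tiny helper. The regex here is an assumption about the shape of Google Sheets URLs, not anything n8n itself provides:

```javascript
// Google Sheets URLs look like:
//   https://docs.google.com/spreadsheets/d/<SPREADSHEET_ID>/edit#gid=0
// The segment after /d/ is the ID you would paste into an expression field.
function extractSheetId(url) {
  const match = url.match(/\/spreadsheets\/d\/([A-Za-z0-9_-]+)/);
  return match ? match[1] : null; // null when the URL isn't a Sheets URL
}

// Hypothetical URL, only for illustration:
const url = "https://docs.google.com/spreadsheets/d/1AbCdEfGh123/edit#gid=0";
console.log(extractSheetId(url)); // "1AbCdEfGh123"
```

The same trick works for most apps: eyeball the URL, find the opaque-looking segment, and that is usually the resource ID the API wants.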
Then expression is sort of what's actually going on under the hood. So n8n just defaults to fixed, like it did here, because it doesn't really want to scare you away. But in order to really unlock the value of n8n, we have to go to the expression field, and I'm going to show you why I basically just use expression for everything at this point. Okay. So, looking at the output here: we clicked, and we got a click event that was counted as one item. n8n actually shows you the number of items that are passed on. Then we pumped that click event into Google Sheets, and that click event output four items. So now we're working off of array data, or tabular data, which I'll cover in a moment. But essentially, with these four items, what we want to do is pass each of them into artificial intelligence and have AI tell us something about them before writing some cold email copy for us to insert into an email — or maybe we could just send a Gmail directly or something. So I'm going to click on this button and zoom out a little bit. What we want, if you go down to Advanced AI, is this OpenAI node, and specifically the "Message a model" operation. If you've seen me connect this in a previous video, you'll know that in order to create a new credential, you have to go and find the API key from OpenAI. This is pretty simple to do. You just go to platform.openai.com — I think the path is something like /account/api-keys — and you can click on the button here to open the documentation that tells you more. But anyway, you can basically just create an API key for n8n. It's very simple and straightforward to do, and I've already done this and called it YouTube.
So, I'm just going to use that credential, just so I don't have to leak another API key. Now, again, we have a ton of fields: resource, operation, message, and "model — from list". Keep in mind the fields are again fixed, right? It's trying to make it really easy for us to select GPT-4o. So I'm just going to go down here, type "GPT" and "4", and then click 4o. And now we actually enter in the text that we're interested in. Okay. And this is really where you're going to start learning the differences between fixed and expression. Fixed is just what the name implies: it's fixed. It's text. You can't make it dynamic. You can't add variables to it. It's the simplest way to get up and running with a node, which is why n8n defaults to fixed. But over the course of the next few minutes, I'm going to convince you to basically always just use expression. Okay. So, I saved my prompt somewhere else. The first thing I'm going to do is add a system prompt. I'm going to go down here to "system" and say: you are a helpful, intelligent writing assistant. Usually, the way you do AI calls is you have a system prompt first, then a user prompt after. The user prompt is where you actually give it the instructions you want it to follow. So in our case, it's going to be something like: hey, I want you to write a bunch of fields, from templates, that we're going to insert into a cold email later. After that, you have the choice to provide a bunch of examples: you could provide an assistant prompt and then another user prompt, then an assistant prompt — however many times you want, just to show it how things work. For the purpose of this example, I'm just going to provide a single user prompt. And what I'm going to do here is copy over my prompt from below. Let me paste that in.
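The system-then-user ordering described above maps onto the standard chat-message array that a chat-completion call sends. A minimal sketch — the prompt text is illustrative, and the comment about n8n's internals is an assumption about how the node assembles its request:

```javascript
// Standard chat-completion message order: system first, then user.
// Optional few-shot examples would alternate user/assistant pairs
// before the final user message.
const messages = [
  {
    role: "system",
    content: "You are a helpful, intelligent writing assistant.",
  },
  {
    role: "user",
    content: "Personalize this email using the prospect's LinkedIn data...",
  },
];

// n8n's "Message a model" operation builds an equivalent array from the
// prompt fields you fill in before calling the OpenAI API.
console.log(messages.map((m) => m.role)); // ["system", "user"]
```

Keeping role instructions in `system` and task data in `user` makes the prompt easier to reuse across leads, since only the user message changes per item.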
And let's just read through this together. Your task is to personalize an email. You'll do this by taking as input a prospect's LinkedIn profile, then editing five templates for different sections of the email: subject line, icebreaker, elevator pitch, call to action, and a PS, or postscript, field. If you're unfamiliar with a postscript: at the bottom of an email, it'll say something like "PS, I really miss you. Can't wait to see you." That's what a postscript field is. We basically want AI to automate that for us, because there's a lot of value in making those postscript fields seem human-written. Anyway, now we're offering it some templates. Subject line: "Hey {name}, I think I have something for you — re: {cool thing about them that we discovered}." Let's just go with a unique thing about them or their company. Icebreaker: "I know you're doing {thing} and I've been following {related thing} for a while, so I figured it made sense to chat." Next, the elevator pitch — the TL;DR: "I think I can add 5K a month to their {paraphrased business} with a few automated systems." And then there's a call to action: "I just did this for a very similar {industry} company and we had 28,350 in a few months. They do related things, so I'm very confident I can duplicate this at minimum. Would be 100% risk-free. I guarantee at least 20 appointments booked or you wouldn't have to pay." Pretty neat, huh? So then I give it a bunch of guidelines — feel free to pause the video if you want to take a look. The last thing that I do is say: respond in JSON using this format. If you've at all used AI before, you'll have seen this JSON thing come up again and again. We're going to cover it in just a few minutes, so buckle up. Okay, great. So, we've given it a ton of instructions in the first user prompt. What I do next is give it a user prompt with the actual body of the input.
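The "respond in JSON using this format" instruction means the model's reply can be parsed straight into the five email sections. A sketch of what that reply and parse might look like — the key names and sample text are assumptions based on the templates above, not the exact prompt:

```javascript
// Example model reply when the prompt ends with "respond in JSON":
const reply = `{
  "subjectLine": "Hey Amy, think I have something for you",
  "icebreaker": "I know you're leading Creative Web Solutions...",
  "elevatorPitch": "TL;DR: I think I can add 5K a month with a few automated systems.",
  "callToAction": "Would be 100% risk-free. I guarantee at least 20 appointments booked.",
  "ps": "Even if we just chat, I'd love to hear about your video marketing."
}`;

// Parsing gives you one field per email section, ready to map into
// spreadsheet columns or an email draft. (With "Output Content as JSON"
// enabled, n8n performs an equivalent parse for you.)
const sections = JSON.parse(reply);
console.log(sections.subjectLine);
```

Asking for structured JSON output is what lets later nodes reference each section individually instead of regex-scraping one blob of text.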
And if you think back here — just so you guys are all on the same page as me — this Google Sheet is where I've basically gone and scraped a bunch of data about random people on the internet who fulfill some criteria that I have. So: chief executive officer, director of demand generation, director of marketing, business development manager, some dentistry person or something. Okay. And then I have a bunch of fields here. One of them is a summary field, where people basically write their own summary of who they are and what they care about. We have a ton of other fields as well: company location, a description of their title, their industry. And the cool part about AI is you can just feed this into a large language model and have it automate something for you — have it write something customized. And that's what we're going to be doing here. The thing is, though: how do I get dynamic data into this? Let's say one of the things I want is to feed the AI the person's full name. Notice how this is fixed here, right? If I just typed "Amy Wabby", then every time I do an API call that includes their full name, it's going to say Amy Wabby. It's fixed — the same thing every time. If you want to make this dynamic, there are a variety of ways to do it, but I'm going to use the expression field: you click expression, then you drag the field that you want and drop it. You'll see that when I do that, we've now inserted a little bit of code. This is, in n8n's code format, the equivalent of the variable that we just pulled from our pinned data, or our data from the input. And what you see down here is separated into two halves. The top half is the code representation. We said "full name" just in regular characters, and I can manipulate this however I want.
Then a colon, then a space, and then there's a curly bracket, a space, dollar sign, json, dot, fullName, a space, and then a right curly bracket, right curly bracket. Up here you have the code representation, and notice that underneath we have a result: it actually shows us the data we're pulling in from the input, which was right over here. Now, I'm going to feed it a bunch of data in order to have this personalized: their full name, the summary, the title. But I just want you guys to notice how these variables change between the full name, the summary, and the title — you're going to notice a pattern. Okay. So, full name was that; we'll go title. I'm just going to drag this in and paste it. Notice how the first one was $json.fullName and the second one was $json.title, right? Let's see what the third one is. Let's go down here to company. If I drag and drop this, it now says $json.Company — note that the C is capitalized. That looks a little bit new, but for the most part, it's still pretty self-explanatory. It seems to me — if I were an alien staring at this and trying to figure out the pattern — that every single time I drag and drop one of these fields in there, it says dollar sign, json, dot, and then the name of the variable. And the variable tends to be whatever I'm looking at on the left-hand side here. So what if, hypothetically, instead of doing this drag and drop, I were to just try and write this myself? Well, let's see what happens. If I zoom in a little bit, just so we can all see: if I type curly bracket, curly bracket, you'll see that I'm now entering sort of the next level up in n8n. I'm now manipulating code — JavaScript, or n8n's version of it (JMESPath, I believe it's called) — directly in the expression editor.
And this is where n8n gets really powerful, because you also have a ton of built-in methods and built-in ways to manipulate this data with literally one click, one little button tap, without having to drag and drop modules everywhere. You can do it all in the convenience of your own field editor. Okay, so the very first thing that pops up is a suggestion: $json. If I just type that and press enter, you'll see that I now have access to all of the fields I had access to earlier. So instead of dragging and dropping all this stuff, what if I just wanted to write the word "summary" here to grab this field? If I type summary, notice how it turns green, and we've added all of that information down here to the results tab. All that information is here. How cool is that? So now, if I want to continue on, I'll go industry: $json.industry. Voila. Next, company location: $json.companyLocation. Voila. Notice how it's trying to autofill this for me, right? Then title description: $json.titleDescription. Voila. And I basically have the ability to do this infinitely, depending on how nested the data is in the JSON structure of the input. I'll run through how to do all of that in a moment, but I just want you guys to pattern-match from the outside in: how am I actually referencing all these variables from previous calls? Okay, great. So, if we click on this little button here, we can open it up and see all of the code and all of the text. To me — somebody that does this sort of personalization all the time — this looks like a sufficient amount of information to personalize an email off of. So I'm going to call it there, and we're going to run this puppy.
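The pattern spotted above — `{{ $json.fieldName }}` resolving to a field on the current item — can be mimicked with a tiny toy resolver. This is only a sketch of the idea, not n8n's actual expression engine:

```javascript
// Resolve "{{ $json.xyz }}" placeholders against one item's json,
// the way n8n substitutes expressions per incoming item.
function render(template, json) {
  return template.replace(
    /\{\{\s*\$json\.(\w+)\s*\}\}/g,
    (_, key) => json[key] ?? "" // unknown fields become empty strings
  );
}

// One item's json, with illustrative field names:
const item = { fullName: "Amy Wabby", title: "CEO" };

console.log(render("Name: {{ $json.fullName }}, Title: {{ $json.title }}", item));
// "Name: Amy Wabby, Title: CEO"
```

This is also why the same prompt works for all four leads: the template is constant, and only the item behind `$json` changes on each pass.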
But basically what's going to happen is we're going to feed all of this stuff to artificial intelligence and say: "Hey man, based off of all of Amy's info, I want you to tell me something about her, and then I want you to write an email based off of the template that I provided you earlier." The last thing I'm going to do is go down here and enable "Output Content as JSON". Give it a click. I'm not going to modify any of the other options; I'm just going to click "test step". Okay. So what just happened — or what is occurring as we speak — is I'm feeding four items into OpenAI. It's basically blitzing through item one, item two, item three, item four, and it all happens before it shows us the output of each, so it takes a little bit longer than usual. But this is more or less what's happening under the hood. Okay, great. And we just received an output. So I'm going to zoom out a little bit so we can see this in its completeness, and I'll use the schema view for now, just to make it easier for you guys — you don't have to scroll all the way to the right to see it. Let's take a look. The output was: content — subject line, icebreaker, elevator pitch, call to action, PS. So it actually output five fields for us, and we can use those fields in future nodes very easily. The subject line was: "Hey Amy, think I have something for you regarding boosting online strategies." The icebreaker: "I know you're leading creative web solutions and net directives and have been following innovative marketing strategies for a while, so I figured it made sense to chat." The TL;DR: "I think you can add 5K a month to your client-focused internet marketing efforts with a few automated systems." The call to action: "I just did this for a very similar IT consulting company. We hit this amount."
"They do e-commerce and marketing, too. So, I'm very confident I can duplicate this at minimum. I'd guarantee at least 20 appointments booked or you wouldn't pay. P.S., even if we just chat, I'd love to hear about what you're doing with video marketing." That sounds pretty cool to me, right? If I were to receive an email like this — aside from the subject line, which is a little bit vague ("boosting online strategies"), but you can't fault the AI for not being perfect 100% of the time — it would seem as if the person reaching out did their research, at minimum, and is reaching out in a personalized, customized way, as opposed to just blasting me with a big sequence. So, what else do I want to do with this? Well, if you guys remember back to the beginning of the video, there weren't just two or three nodes here; there was a node that updated the Google Sheet. So, I'm going to show you how to update the Google Sheet. And we're actually going to take this one step further: I'm going to draft some emails to send to Amy and the rest of the people here. So, the first thing I'm going to do is go over here and pin this output. The reason why is that, if you think about it, calling OpenAI there was a little bit computationally expensive. It took some time, and I don't want to have to rerun it over and over again. Pinning the data allows me to capture that output, cache it, and then test all subsequent nodes using it, which is very straightforward. Okay. So, I want to update this Google Sheet. I'm going to click here, search nodes, and type "sheets". What I want is "update row in sheet". I'm going to select my credentials again — YouTube. The resource will be sheet within document.
The operation will be update row. Instead of using the fixed value, let's use the expression view so I can show you how this works. I'm just going to grab the ID of this sheet and paste it in there. Voila. The sheet I'm going to pick is just Sheet1. Notice that at the top we used the expression view, and down here we used the fixed view. n8n actually made an API call to their backend to discover that Sheet1 was the only sheet, and then we selected it there. We could also just feed in an expression; as you see when you flip from fixed to expression for the sheet field, it just says gid=0, which refers to the first sheet. So we'd always be selecting the first sheet. But anyway, for this I'm going to use "From list" and pick Sheet1, to keep it simpler. Okay. Now it's going to fetch a bunch of columns for us. Basically, to do this update, we have to grab the data from the previous nodes and update every single column here with that data. If I were to scroll down and update only the columns I care about, like subject, icebreaker, elevator pitch, call to action, and postscript, it would leave the rest blank, which is kind of annoying if I'm being honest. I don't want some of these blank and the rest filled; I want all of this data, because I'm going to import it into some cold email tool later, right? So what we have to do now (and this is the initial idea behind this system) is go through and update every single one of these using the expression tab, pulling data in not from the node right before, but from one node behind it, so you guys can see what it looks like in code, basically. Okay.
So the first thing we have to do is pick a column to match on. For the automation to know which row should be updated, we have to give it data that identifies the row, which includes the email. So I'm going to choose email. Then I'm going to scroll down to the previous nodes, not the OpenAI node but the Google Sheets node, the one that first listed the data, and drag the email field in there. You'll notice that the format now looks different than it did before. Previously we had $json.email; now we have a dollar sign, then a round bracket, a single quote, the name of the node, another single quote, a closing bracket, and then .item.json.email. So when you access node data from more than one node back, you have to use this new format, which is kind of annoying, but you'll see how easy it is when we just copy, paste, and spam our way through. So, how about civility? I'll go to expression, paste this in, and instead of email I'll put civility. First name? Paste that in, change it to first name. First name suggestion? Expression, first name suggestion. Last name? Paste it in, last name. Full name? Paste, expression, full name. Then title and profile URL. So you can see we're picking up the pace a little; it gets faster and faster. Company legal name, company phone. And I'm just going to cut to me having filled all of this out, for brevity. Okay. Elevator pitch, call to action, looking good. And last but not least, was it PS or postscript? Yeah, it was just PS for that one.
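To make the two reference styles concrete, here's a small plain-JavaScript sketch of what the expression engine is roughly doing. The node names ("Google Sheets", "OpenAI") and field values are just this walkthrough's examples, and the real n8n `$()` helper does far more than this stub; this is only a mental model.

```javascript
// Hypothetical run data: what two earlier nodes produced for the current item.
const runData = {
  "Google Sheets": { item: { json: { email: "amy@example.com", civility: "Mrs" } } },
  "OpenAI":        { item: { json: { subject: "Hey Amy, think I have something for you" } } },
};

// $('Node Name') in an n8n expression roughly resolves to that node's output:
const $ = (name) => runData[name];

// More than one node back: $('Google Sheets').item.json.email
const email = $("Google Sheets").item.json.email;

// Immediately previous node: the shorter $json.subject form
const $json = runData["OpenAI"].item.json;
const subject = $json.subject;

console.log(email);   // amy@example.com
console.log(subject); // Hey Amy, think I have something for you
```

Both forms end at `.json.fieldName`; the long form just names which node's output to start from.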
This is the one situation where the column name is a little different from the variable name down here. Okay. Now that we're done with that, let's quickly cover some little differences in the formatting, just so we get a head start on the JSON I'm about to teach you. If you scroll all the way up here, you'll see that most of these followed this format: curly braces, dollar sign, the name of the node in single quotes, then .item.json. and then the name of the value. In this case, this one was the company URL; this one down here was the ID. You'll notice that a few of these are a little different, in particular the variable names with spaces. They weren't just dot and then the name; it was a square bracket, a single quote, and then the name of the variable we're referencing. The reason we had to do this instead of just the dot and the variable name is that in JSON, JavaScript Object Notation, you can't query a key that has spaces using dot syntax, basically because spaces aren't representable there. So if we scroll down to some of these variable names, like "First Name Suggestion", there are spaces in there, right? So in n8n's formatting, for a key like first name suggestion or company legal name, you can't just write .First Name Suggestion, because that breaks the syntax. There are a couple of ways around this. One, you can make the input data, meaning the columns in your Google Sheet, all one word. You could also use underscores, like first_name_suggestion; some people do that format, but I don't really like it.
I don't know why. What I use is called camelCase; it's a programming convention. And here we could get into a lifelong, mutually-assured-destruction battle, because some people prefer camelCase and others prefer underscores, but I'm team camelCase. So, go team camelCase. If we do want to keep the spaces, what we have to do is use brackets: a square bracket, a single quote, First Name Suggestion, another quote, and then a closing square bracket. Don't sweat the small formatting stuff too much. The best way I've found to learn this is literally just to spam a bunch of examples. That's what I did when I was picking this stuff up. I didn't read a bunch of books on JavaScript Object Notation or expressions or whatever; I just spammed a bunch of examples, and the human brain is such that if you squint at it long enough, you'll figure it out intuitively. Okay, great. Now that we've mapped all of these, let's actually go and update this data, right? That's the whole point. So, now that we've mapped all the data, if I click test step and then go over to my Google Sheet and scroll all the way to the right, you'll see (it just took a second) that we updated all four of these simultaneously. We got the subject line for Amy, the subject line for Joe, for Mercedes, for Susan, and the same with all the elevator pitches, calls to action, and postscripts. You'll see that the copy of each email is pretty similar, but it changes: this one is "I know you're leading creative web solutions," then "I know you're into leveraging customer voices," "I know you're focused on building relationships," "I know you're advancing digital dentistry."
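Since the bracket-versus-dot question comes up constantly, here's a minimal plain-JavaScript illustration. The column names are just this example's; the same rule applies inside n8n expressions.

```javascript
const row = {
  email: "amy@example.com",
  "First Name Suggestion": "Amy", // key with spaces: needs bracket access
  firstNameSuggestion: "Amy",     // camelCase version: dot access works
};

// Dot syntax only works when the key is a valid identifier (no spaces):
const viaDot = row.firstNameSuggestion;

// row.First Name Suggestion would be a SyntaxError; instead use brackets:
const viaBrackets = row["First Name Suggestion"];

console.log(viaDot, viaBrackets); // Amy Amy
```

Renaming your sheet columns to camelCase up front means you never need the bracket form at all.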
These are all basically reframing or paraphrasing the things they said in their profiles, which ultimately is meant to make them go, "Oh, okay, this person did their research; they read a little about me." So that's pretty cool. Why don't we take this one step further? Let's pin this, and then I'm going to create a Gmail draft in my inbox so we can see what's going on. I'll type "draft" and pick create a draft. I'll connect with my Gmail credential (this was before I adopted the naming convention, so it was probably number three). We're just going to create a draft. For the subject line, let's use an expression now that we know how to do this: it's going to be $json.subject right there. Then the actual message is going to be pretty interesting, and I'll show you how we put it together. The first thing I want to do is start with "Hi" and then grab the person's name, so I'll go $json.firstName right there. So, "Hi Amy," then we have $json.icebreaker. Next up, what were we doing here, the elevator pitch? Yeah, so $json.elevatorPitch. Then $json.callToAction. Beautiful. And one more; I'm sure you can guess what the last one is, but just to be sure: the postscript. Now, if we open this up in the bigger example window, you'll see the email says, "Hi Amy, I know you've been leading creative web solutions and Net Directives. Been following innovative marketing strategies for a while, so I figured it made sense to chat. The TL;DR: I think I can add $5K a month to your client-focused internet marketing efforts with a few automated systems."
"I just did this for..." and "even if we just chat, I'd love to hear about what you're doing with video marketing." I guess I actually need to add a "P.S." prefix here. And it looks like not all of these have periods; this last one does, but the rest don't. Because I was a little fast in designing this and didn't add periods, I'm just going to add them directly in the expression editor, and I'll add the "P.S." here too. Okay. Now if we open it up, this is what it looks like: periods everywhere. Cool. Wonderful. And we have a little P.S. at the end: "Even if we just chat, I'd love to hear what you're doing with video marketing." Very, very cool. Awesome. So now that we have that pinned data, why don't we draft these emails? I'm going to create a draft; I'm not actually going to send anything, because I don't want to spam these people, and I've also changed the email addresses, so I'd get a bunch of bounces. We'll just test the step. It's executing, and it just executed for all four items. Now if I go over to my drafts, you'll see we have, "Hey Susan, think I have something for you re: digital dentistry." Oh geez, I'm realizing I didn't put the email address in. I just created the draft. In options, you have to set the To email, and then we just want to drag that in here, $json.email, to get the address. Let's create four more drafts just for shits and giggles. I'll delete the ones I just generated, discard those, and you'll see the four I just did pop up, this time with the email addresses attached. So yeah, that's that example. At this point you probably have an intuitive understanding of how these fields work.
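For reference, the message body assembled above boils down to concatenating five fields around a greeting. Here's a plain-JavaScript sketch of that assembly; the field names mirror the AI node's output keys, and the sample text is paraphrased from this example, so treat the values as placeholders. In n8n, each interpolation below would be its own `{{ $json.field }}` expression.

```javascript
// One item's fields, as the OpenAI node produced them (paraphrased sample).
const item = {
  firstName: "Amy",
  icebreaker: "I know you're leading creative web solutions, so I figured it made sense to chat.",
  elevatorPitch: "TL;DR: I think I can add $5K a month with a few automated systems.",
  callToAction: "I just did this for a very similar IT consulting company.",
  ps: "Even if we just chat, I'd love to hear about your video marketing.",
};

// Greeting + icebreaker, then each remaining field on its own paragraph.
const body =
  `Hi ${item.firstName}, ${item.icebreaker}\n\n` +
  `${item.elevatorPitch}\n\n` +
  `${item.callToAction}\n\n` +
  `P.S. ${item.ps}`;

console.log(body.split("\n")[0]); // Hi Amy, I know you're leading creative web solutions, so I figured it made sense to chat.
```

Note the "P.S. " prefix baked into the template, so you never forget to add it per draft.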
And hopefully I've made a case for why you should just always use expression. There's no real need to use fixed, because if you think about it, you could write the same fixed thing in the expression view. If I wanted to type, "Hey, think I have something for you regarding marketing strategies," and send the same thing every time, I could do that in the expression view of the field, right? Same thing, except here I also have the ability to modify it with a little code if I want to. So I'm personally always going to use expression moving forward; it can look like a lot when you first muck around with it, but just remember to put a dollar sign first when referencing data from the previous node. And the reason is simply that it's easiest; you get all the variables up here anyway, so it's no big deal. So we're doing subject: stick that in there, we've got the subject line, and we're good to go. I think we've learned a fair amount about fixed fields and expression fields at this point. The last thing I want to cover (I call it mapping different field inputs) is this: if we go back to Gmail and scroll down... actually, that's a bad example. Let's go back to OpenAI. You see how I selected the role field, it was fixed, and I picked from a dropdown: user, assistant, system, whatever. Well, I can switch to expression, and what you see is that this is just text we're feeding the API through the node; we're literally just writing "user". So instead of writing user, you could also write system. This fixed stuff is just made to make our lives a little easier, but any time you want to see the actual data representation, just flip over to expression. What are some examples of that? Well, look at these simplify output and output content as JSON fields. We go to expression.
You'll see that what we're actually passing is the value true; we're literally just writing "true". Same with output content as JSON: if we flip it to expression, you can see we're just passing true there too. The point I'm making is that if we strip away all the simple UI stuff, you can start seeing what's actually going on behind the scenes and how we're actually communicating with these nodes: in practice, we're sending the value true. Now, the reason this one has quotes around it, I believe, is that true is a special case. It's called a boolean, which I'm about to cover. I believe you need the quotes just so n8n doesn't bug out at you; one of those unfortunate peculiarities of the platform. But that's that for fixed fields and expression fields. Okay, next up, let's talk more about JavaScript Object Notation. If you already know JSON and understand the various formatting types available, feel free to skip ahead to where we cover how data in n8n is represented; it should be about 15 minutes or so. But for everybody in the audience who doesn't understand JavaScript Object Notation, I want to go deep and make sure you understand everything about JSON, because ultimately, as opposed to a lot of other no-code platforms, n8n doesn't ignore code or try to shove it away. It embraces code. So understanding a bit of the underlying formats, like JSON, makes you way more powerful. This is really what everybody who makes money with this platform uses: we just use JSON to send data back and forth. And if you don't know JSON, everything is going to be a little trickier for you. Okay, so what's the best way to intuitively understand JSON?
Well, what I'm going to do next is walk you through the various data types in JSON. I'll give you a brief look at the structure and format, and then we can walk through JSON step by step with different variables and so on. I'm just going to search for "JSON formatter". This is the simplest free platform I've found online. As you can see, there are tons of ads in the middle here, but you don't pay anything to use it. Basically, what's happening behind the scenes is that every time you press a keystroke, it double-checks whether you have valid JSON. That's actually pretty useful for us, because we can verify whether something is indeed JSON just by pasting it in. This isn't the only way to do it; obviously we could do a ton of different things. I could open up a code editor like VS Code and do the same thing, but I wanted to keep this accessible for everyone. All right, so zooming way in here, just so I don't get ads in my face 24/7. What is JavaScript Object Notation? Basically, JSON is a way to represent data in a structured, standardized format that minimizes the number of characters and the ambiguity, so that when we send data to and from some API, we can do so as efficiently as possible. The way JSON works, the way you send and receive data, is based on two concepts. The first concept is a key. So I'm going to write a key here; this key is going to hold my name. I'll create a key called first name. You can think of the key as the name of a variable. The variable is called first name, but what the variable equals is a whole different matter, and that's where the value comes in. So first name I'm going to set to Nick. This here is proper, well-formatted JSON.
It's not yelling at us or anything; it's actually good, which is nice. So this is an example of one of the simplest JavaScript objects you could build: a one-key, one-value object, the key being first name and the value being Nick. You've undoubtedly seen examples like this before if you've worked with any no-code or coding platform, n8n included. The thing is, JSON has a few simple but consistent formatting quirks that you just have to pay attention to. The first is that there are a variety of data types. As you can see, what I've done here is I have an open curly bracket and a close curly bracket, the key name over here, a colon (just these two dots), and then the value; and I've wrapped everything in double quotes. So: first name, double quote, Nick, double quote. The reason is that the data type I'm using here is called a string. A string is just written text. So whether the key is first name or ID, this is a string data type. But you don't only have strings available to you. Although a lot of platforms prefer strings, you also have a variety of other data types. Here's another one we have access to: a number. It's numeric. To send and receive numbers, you don't have to wrap them in quotes. The red here just corresponds to it being highlighted by this JSON formatter as a number data type. So this is still valid JSON even though there are no quotes around it. But this, on the other hand, is not valid JSON, right? It doesn't mean anything, because we're using string characters and inserting them where the software we're going to be using, n8n, expects a number.
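You can reproduce the formatter's valid/invalid check in any JavaScript runtime, since JSON.parse throws on malformed JSON. A quick sketch using the same one-key object:

```javascript
// Valid: string values are wrapped in double quotes.
const obj = JSON.parse('{"firstName": "Nick"}');
console.log(obj.firstName); // Nick

// Invalid: an unquoted bare word is not a JSON string.
let err = null;
try {
  JSON.parse('{"firstName": Nick}');
} catch (e) {
  err = e; // SyntaxError, the same complaint the online formatter shows
}
console.log(err instanceof SyntaxError); // true
```

This is handy for debugging n8n expressions too: if JSON.parse rejects a payload, the node receiving it will choke on it as well.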
So the basic rule of thumb is: you can't write a string without wrapping it in quotes. And the unfortunate reality is that numbers can be both numbers and strings. Take "3 ways to do X"; maybe we call this key title, right? The 3 here is a number, but we're still wrapping it in the quotes of a string, and that's different from a bare 3. The reason I harp on this is that in practice it usually doesn't matter much: no-code platforms will take care of the type conversion between number and string for you. If you send a bare 3 without quotes, for the most part that's okay; you'll be able to use it as a string later on. But in pure JavaScript and a couple of other programming languages, you can't always mix them freely. Say I wanted to run a function afterward that combined a number with text. Let me show you: number of things, then title template. Say I had some function that added the number of things to the beginning of the title template, so it said "3 ways to create n8n flows," "5 ways to create n8n flows," "15 ways to create n8n flows." Some programming languages wouldn't let you do that directly, in which case you'd have to convert the number to a string, and then you could do number of things plus title template, which equals "15 ways to create n8n flows." Okay, great. So we've covered numbers. Technically they have fancier names; they could be ints, or floats, depending on the value, but in our case we'll just call them numbers. Next up, I want to cover a couple of additional data types. One of them is called a bool, which stands for boolean. Boolean just means zero or one, true or false. So: true. See how this just turned orange instead of gray? true is an accepted bool.
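Here's that number-versus-string distinction in runnable form. The numberOfThings and title names are just the hypothetical example from above.

```javascript
const asNumber = JSON.parse('{"numberOfThings": 3}');   // number type
const asString = JSON.parse('{"numberOfThings": "3"}'); // string type

console.log(typeof asNumber.numberOfThings); // number
console.log(typeof asString.numberOfThings); // string

// Gluing a number onto text works, because + with a string coerces it:
const title = asNumber.numberOfThings + " ways to create n8n flows";
console.log(title); // 3 ways to create n8n flows

// But arithmetic is where the two types diverge:
console.log(asNumber.numberOfThings + 1); // 4
console.log(asString.numberOfThings + 1); // 31  (string concatenation!)
```

The "31" case is the classic silent bug: the platform happily "converts" for you, just not in the direction you wanted.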
And you don't actually need to wrap quotes around this true for it to be technically valid JSON. Now, for the most part, I'll just wrap quotes around all this stuff anyway because, as I mentioned, platforms do type conversions and it doesn't matter much to me. But I wanted to explain why you sometimes see values that aren't wrapped in quotes, and booleans are one of them. You also have a few higher-level data types, and I'm going to show you two. The first is an array. An array looks like this: square brackets, a left square bracket and then a right square bracket. The other is another JavaScript object. And the really cool part about JSON is that you can infinitely nest different data types within the values of a key. So I could have an array, which is just a list of things, and inside that array I could have a number of other JavaScript objects that go infinitely deep. I'll show you how to do this in practice. Maybe we'll make a key called items. Inside it, we'll have first name Nick, and then Sally; let's just do two so I don't run off the page. So now what we've done is create JSON where we have a key called items; inside that key we have an array, and inside that array we have two more objects. Both objects have a key called first name, with different values: the first has a value of Nick, the second Sally. Now, to make everyone's lives easy, usually you'll apply some level of formatting just to make things readable. You'll be able to see where each item starts, and there's usually a fair amount of indentation. I don't know the exact amount, but it usually looks something like this.
Now, at a glance, you can see the structure. It's nested: you have this top-level array, and inside the array you have two objects, first name Nick and first name Sally. Okay, great. So let's create a hypothetical JSON object, and because I know the most about myself, I'll create one for myself. Let's do a user object. So I'm going to write user; that's the key name. Then a colon, and then I'm going to create another object. Inside my user object, I'll have first name: Nick. I'll have last name: Saraev. I'll have city; my city for the time being is Calgary. Then I'll have foods he enjoys... actually, let's call it food preferences, because if you say "foods he enjoys," you're insinuating the user has to be a he, and what if you add a user in the future who's a woman? Do you want to change the key name? No, obviously not. So: food preferences. I'm going to add an array, and inside my array will be a bunch of strings. One of my food preferences is, I don't know, Thai. Another is, say, Japanese. So I like Thai food and Japanese food, and apparently absolutely nothing else; I have a very strict diet. Okay, cool. Then let's add one more: a key called friends, hypothetically, because as we all know, I have absolutely no friends. Inside friends, we're going to add an array, and inside that, an object with first name Peter, my dearest and longest friend, of course. Thank you, Peter. Then underneath that, last name Griffin, because I now love Family Guy. That's one of my friends.
Then I'm going to copy this over and add a comma, because you need a comma between all items in an array, and between pairs in an object too. So we have our object here. It's a pretty detailed user object: we have a first name, last name, the city the person lives in, the food preferences, and a list of friends, right? The reason I go into this much detail to create this object is that I want to impress upon you that you can make an object arbitrarily detailed, as detailed as a human being could ever possibly want. And a lot of the time, the APIs you're accessing, the calls you make to nodes that return data about a particular software platform, will be really deep and will have data nested six or seven levels down, essentially. The easiest ones to use, in my experience, are the surface-level ones: just a user object with a first name, a last name, a city, and maybe some food preferences. But in practice we need to be a bit more capable, and understanding how data is structured in JavaScript Object Notation is probably half the battle, to be completely honest. So, I believe I've covered everything I wanted to cover. The last thing I'll mention is that you can't have a comma after the last item. It doesn't allow a trailing comma after the last item in an object or an array. Notice how we have a curly bracket, then the key name user, then another object's opening brace, then key name and value, key name and value. Between each of these we have a comma, but after the last item in an array or object, you can't have one; you just need to remove it. Then I press format or beautify, which is just my button that tells me everything's okay.
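Assembled in one place, the user object built above looks like this. The second friend is just a copy of the first, as in the walkthrough, and all values are placeholders.

```javascript
// The full hypothetical user object: nested object, array of strings,
// and an array of objects. JSON.parse confirms it's valid JSON.
const data = JSON.parse(`{
  "user": {
    "firstName": "Nick",
    "lastName": "Saraev",
    "city": "Calgary",
    "foodPreferences": ["Thai", "Japanese"],
    "friends": [
      { "firstName": "Peter", "lastName": "Griffin" },
      { "firstName": "Peter", "lastName": "Griffin" }
    ]
  }
}`);

// Nested access drills down one level at a time:
console.log(data.user.foodPreferences[1]);   // Japanese
console.log(data.user.friends[0].firstName); // Peter
```

Note there's no comma after "Japanese" or after the last friend object; adding one would make JSON.parse throw.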
Then you see that it is indeed formatted and beautified, and everything's all right. Okay, cool. So that's the meandering bird's-eye view of how to understand this stuff. Moving forward, to make our lives a little easier, I'm going to show you, on every node, what the JSON of the input and the output is. The reason is that it will make it a lot easier for us to reference things and do cool stuff with the data. So let's go back to our Google Sheets example from earlier. I'm going to switch the output view to JSON. You'll see that in this case we have an opening JSON curly bracket, then a bunch of top-level keys. Row number over here: that's a key, and you can see its value is 2, very similar to what we talked about earlier, right? You'll notice the formatting here is a little inconsistent: this one has an underscore, while these others are camelCase, and some are capitalized inside their quotes. None of that really matters; I'm just showing you that there's a variety of conventions people use. Some people like capitalizing things, some don't. My recommendation, just to keep everything clean and predictable, is to pick one format and stick with it. If you like underscores, use underscores for everything; if you like camelCase like me, use camelCase for everything. Anyway: civility is Mrs., first name Amy. First name suggestion is empty; when something is empty in JSON, you just use a pair of quotes. Notice that every item here has a comma between it and the next: last name, comma, full name, comma, all the way down. And then if I zoom out and scroll, you'll see: what do we have here? This was just one item.
So if I click on this little blue left curly bracket, it will collapse the item for me, and you'll see that inside this item are 38 other fields. It just nests it for me, and then I can see the next one, and so on; we only have four, so that'll be it. But now I want to show you a little bit about how n8n formats things, because it's a little different from the JSON I just showed you. Once you understand this, basically everything from here on will be child's play. Notice how the very first character you see on the output page in the JSON view is a square bracket. If you think back to the example I gave you earlier (clicking out of this Pizza Hut ad, as much as I love Pizza Hut), this array here starts with square brackets. And since we see a square bracket over here as well, that means this is an array, basically. So here's how data is represented in n8n: all data on the platform is represented as an array of objects. All of it will basically be those two square brackets on the outside of a number of items, and you can have as many items as you want. As we saw, we had four back over there, and inside each of those four items we had an additional 38 fields nested inside. But all data is an array of objects; that's how you send and receive multiple objects. Remember, at the beginning we had one item; we just output a single variable there. Well, now, after this Google Sheets call, we have four items. Those four items are represented in our JSON as an array, inside of which are objects with the key names row_number, civility, first name suggestion, and so on and so forth. So: all data in n8n is an array of objects.
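In code, the array-of-objects shape n8n passes between nodes looks roughly like this. The four rows and their columns are trimmed from the sheet in this example (real items carry all 38 fields), and the civility values for the other rows are placeholders.

```javascript
// Every n8n node's output: an array of items, each wrapping its fields
// under a "json" key.
const items = [
  { json: { row_number: 2, civility: "Mrs", firstName: "Amy" } },
  { json: { row_number: 3, civility: "Mr",  firstName: "Joe" } },
  { json: { row_number: 4, civility: "Mrs", firstName: "Mercedes" } },
  { json: { row_number: 5, civility: "Mrs", firstName: "Susan" } },
];

console.log(items.length);            // 4  (four rows, four items)
console.log(items[0].json.firstName); // Amy  (hence the item.json.* path)

// Collecting one column across all items, like the next node would see it:
const names = items.map((item) => item.json.firstName);
console.log(names.join(", ")); // Amy, Joe, Mercedes, Susan
```

The outer array is why a node "runs once per item": downstream nodes iterate over this list, and each expression evaluates against one item at a time.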
The number one gotcha I see when beginners start out is that they don't fully understand how many objects the previous node is sending, or the current node is receiving. So when they try to reference a particular item inside the data structure, they find that it just doesn't exist, or that they need to index it, to slice in and find a specific item inside the array of items, or they hit some other problem. This is the number one problem I see in practice, just from reading n8n help threads and the like. So if you understand that all inputs and all outputs are an array of items represented as we've shown, if you understand how to format JSON and what JSON looks like, and if you get in the habit of looking at the JSON view (not just the table or schema view) every time you send and receive data, you'll solve the biggest gotcha people run into in n8n. It's going to be a lot easier for you to create flows moving forward; you're basically taking off, right from the get-go, a load that most other people spend hours and hours debugging, which is ultimately what I wanted to do here. Okay, great. So now we know that data in n8n is represented as an array of objects. How do you actually reference that data? Remember earlier, in that example with the Google Sheets module (sorry, Google Sheets node, I should say), where we scrolled down and updated a field using data not from the previous node, but from a few nodes back? Well, if I go back to JSON here, you can see we can go one node back to OpenAI, two nodes back to Google Sheets, three nodes back to "When clicking test workflow." Okay, let's go to the Google Sheets one.
In order to reference data — actually, in this case we'll use the schema view, because you can see everything laid out logically top to bottom. To reference data more than one node back, if we just want to redo this email one, we go curly bracket, dollar sign. Then we type a left round bracket, and we have access to all of our earlier nodes. We can select the current node, OpenAI, but we can also select Google Sheets and "When clicking test workflow" — what we want is the Google Sheets. Then if you dot-index — if you put a period — you'll get an item. And the reason I bring up this item is that basically all of this stuff is buried under this .item syntax. It's kind of annoying, but in order to reference, I don't know, the row number, we have to reference the Google Sheets node, then .item, then .json, then the row number. Why? Because n8n actually hides some of this information from us. We don't see the nested data structure; all we see is the nice representation of it. So the data isn't actually represented the way it looks here, despite appearances. The way it actually looks underneath, behind what n8n is showing us, is like this. And if you understand this, you'll understand how to reference basically any item in n8n. This is equivalent to what we're seeing here — there are just an additional two layers between us and the data. It's actually .item.json, and then we have row_number, civility, first name, suggestion, stuff like that. So when we want to access it, looking over here, the very first thing we do is reference the Google Sheets node. Then, thinking about it logically, we reference the item — so .item. Then we reference the JSON — .json.
And then we reference row_number, if that's what we want, and then we get that data. Okay. So, big issue — big misunderstanding, I would say — with n8n. Unfortunately their documentation, in my experience, is not clear enough to really elucidate what's going on here unless you have a programming background. But I just want you guys to know that this is how it's done when you reference nodes that are more than one node back. If you just want to reference something in the previous node, it's a lot easier: you go dollar sign, then json, and then you can go immediately into the variables — $json.message.content, or dot subject line, for instance. All right, let's start looking at some of these foundational nodes over here, because at this point we've done four different workflows and I think you guys probably have a reasonable understanding of how to put a workflow together now, plus some of the more nuanced portions of n8n: the way they do their JSON, the nuances of burying names inside various keys, how to handle arrays of objects and backtrack, and all that fun stuff. Let's actually cover nodes, because nodes are ultimately what you're going to be using on a daily basis. After we're done with that, I'll rebuild this flow up here and show you guys how all that stuff works under the hood. This will let us use some of the more code-oriented features of n8n. After that point, I think you'll have a reasonable enough understanding to build most things — it's just a matter of what you can build and how exactly you put together the various Lego blocks I've shown you. That's what the subsequent videos in this course are ultimately going to show you. Just going to be non-stop building 24/7, baby. Anyway, let's cover these foundational nodes. The first thing we're going to cover is nodes for doing things.
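Before diving into the nodes, here's the referencing story above as a toy JavaScript model. The `$()` helper and node names below are stand-ins I've made up to imitate n8n's real expression syntax — in actual n8n, `$` is provided for you:

```javascript
// A toy model of why n8n references need ".item.json" in the middle:
// $('Node Name') returns a wrapper object, not the raw fields — the
// fields live two layers down, under item.json.
const nodeOutputs = {
  "Google Sheets": { item: { json: { row_number: 2, email: "a@b.com" } } },
};
const $ = (name) => nodeOutputs[name];

// Roughly equivalent to {{ $('Google Sheets').item.json.row_number }}
const rowNumber = $("Google Sheets").item.json.row_number;
console.log(rowNumber);

// Referencing the *previous* node is shorter: $json is a shortcut
// straight to that item's json layer.
const $json = $("Google Sheets").item.json;
console.log($json.email);
```

Same data either way — `$json` just skips the node lookup and the `.item` layer for you.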
So: HTTP requests, webhooks, and OpenAI nodes. And we'll cover nodes that modify flows: If, Filter, Merge, Split In Batches. There are a couple of other ones, but these are the ones we're going to focus on, at least for today. In terms of nodes that do things: if you aren't familiar with HTTP Request, HTTP stands for HyperText Transfer Protocol. When we do an HTTP request, we're doing the same thing your browser does when it goes to a website. Just like when I go to google.com, I'm sending a request to Google, receiving a bunch of information about what the web page looks like, and then using my browser, Chrome, to render it into the beautiful, wonderful image you have in front of me. The HTTP Request node does the same thing — it just returns everything to you as code; you don't get the rendering portion. So I'm going to do a GET request here. GET is just the simplest, most basic request, and it's ultimately what you're probably going to use in most cases. The URL I'm going to use is just my own: leftclick.ai. So leftclick.ai, just for the purposes of this discussion, looks like this: we build hands-off growth systems for B2B founders. There's a bunch of information about what our clients get, our leads. The lovely Joe Davies left me a testimonial. I should probably touch this website up at this point — it's been the same for a while. But anyway, I'm just doing an HTTP request with method GET to leftclick.ai, no authentication, no nothing. We're just going to see what happens. So, let me click test step. And the response we've received is, again, an array of objects. Our one object has a key named data, and inside of that key you'll see a ton of code — HTML, it's called. Okay.
And just because this isn't the easiest to see here, I'm going to go to schema view so you can see a little more of it. Basically, this here is the code of my website — what my website looks like to browsers, and ultimately what my browser uses to render it. You can see it says "LeftClick", space, vertical bar, space, "AI", and an ampersand — that's just a symbol encoded so it doesn't break any characters. "Process optimization. LeftClick is an AI-driven performance optimization agency using cutting-edge tech to scale your company." This is all the code, right? It's pretty badass. If I go to leftclick and then Inspect, and open this up — this is the Chrome dev tools, which let you see the code of the website — it's the exact same thing that's over here in the bottom left-hand corner, right? Literally no differences whatsoever: "You built your agency, we'll scale it." The only difference is some symbols, like the ampersand or maybe the at sign — they just have little replacements here or there so they don't break any string formatting. So that's the HTTP request. Why does an HTTP request matter, and why would we want to do one? Because you can do pretty cool stuff with it. If I add an HTML node and say "extract HTML content", we can use the code of the page to pull out all of the text. Okay. So why don't I just go text. Let's do H1, H2, H3, H4, H5, H6. Let's do P, and return this value as text. Then let's just test this and see what happens. What we're doing is feeding in the HTML and using various CSS selectors, as they're called, to extract a bunch of text for us. And it looks like I didn't do this right, because we're not actually retrieving anything. Let's just try P for now. We'll do this. You're seeing already that we've extracted a bunch of P text.
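Roughly speaking, here's what that extraction step is doing, sketched in plain JavaScript. The real HTML node uses proper CSS selectors under the hood; a regex is only good enough for a demo, and the HTML snippet below is invented to echo the site:

```javascript
// Toy version of the HTML Extract node with a selector like "h1, p":
// pull the text out of matching tags.
const html = `
  <html><body>
    <h1>A better way to build ops</h1>
    <p>We build hands-off growth systems.</p>
    <p>What our clients get.</p>
  </body></html>`;

function extractText(html, tags) {
  const out = [];
  for (const tag of tags) {
    // Naive matcher: find <tag ...>text</tag> pairs with no nesting.
    const re = new RegExp(`<${tag}[^>]*>([^<]*)</${tag}>`, "g");
    for (const m of html.matchAll(re)) out.push(m[1].trim());
  }
  return out;
}

const texts = extractText(html, ["h1", "p"]);
console.log(texts);
```

The result is an array of strings — one entry per matched tag — which is exactly the shape the node hands to the next step.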
We've extracted a specific type of tag inside this website — everything that starts with a P. Okay. Now, we can do the same thing with a variety of other tags. Say I want to do H1: this is going to return all of the top-level headings, basically. So let's do H1, test. That's pretty cool. If I return an array, let's return all of them: "A better way to build ops", "What our clients get". If I want to select multiple, I think I just do it like this: H1 and P. Yeah, there we go. Okay. So now I'm getting all the text of the website. "Find the perfect offer." "Automate your lead acquisition." "Solve your project management." This is all the text of the website. And all I needed to do to feed this in was go through and feed in a bunch of elements which I know correspond to text, then press test step. It went through and extracted all of them into this big fat array, which is really cool. Now, if you think about it, I could do some pretty cool stuff with this — I could have AI tell me something about the site very easily. "You're a helpful, intelligent website scraping assistant." Got to add my credential first. We'll go down here, and I'm just going to have AI tell me a little bit about this website and return some data as JSON. I'll do that as my system prompt. And then here: "Your task is to take as input a bunch of scraped website text and return as output a JSON that follows this format." I'm going to say summary; three unique points — we'll make that an array; probable customer demographic — I'll have it return an object; contact information, if any — and then we'll make that, I don't know, some sort of array of objects. Okay. And that's what we're going to have it return. We're going to output the content as JSON, and then as input you add a message. What we want to do is join — this is being output as an array right now, right?
Arrays, as we've seen, are many different things on various lines. What we want to do is take all of this output data and turn it into one big long string. The way you do that here: I'll use the expression tab, then add some lines so the AI knows this is my input, then go $json.text and then .join(). Join is just a way to convert an array of items into one big long string, and the thing you put inside the join is whatever you want to separate the items by. In my case, I'll just separate everything with a new line. If I enter the detailed editor here, you can see I've now turned all of this into one giant long string. I'm going to feed this into AI and have it tell me something about the website. So, this took us just a few seconds, and we already have a scraper capable of coming up with and outputting a summary of what I do, three unique points that separate me from the competition, probable customer demographic information I could maybe use for something, and then contact information, if any. And it looks like it separated that into a sub-object that says method: book a call; details: get started today; platform: website. As I'm sure you can imagine — and I spent 15 seconds putting this puppy together — if you wanted to scrape emails or do something like that, you could put something together that does this pretty easily. So that's the HTTP Request node. Pretty simple, pretty straightforward. The next thing I want to show you is basically the inverse of the HTTP Request node: instead of us sending data, I want to show you a quick and easy way we can receive a little bit of data if necessary. And it's nowhere near as hard as you think.
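Quick aside before we move on to receiving data: the .join() expression from a moment ago, in a standalone sketch. The scraped strings below are placeholders echoing the site text:

```javascript
// The HTML node returns scraped text as an array; the OpenAI node
// wants one string. .join("\n") glues the entries together with
// newlines — same idea as the {{ $json.text.join("\n") }} expression.
const text = [
  "A better way to build ops",
  "Find the perfect offer",
  "Automate your lead acquisition",
];

const prompt = text.join("\n");
console.log(prompt);
```

Whatever you pass to join becomes the separator — `", "` would give you a comma-separated line instead of one line per entry.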
There are a variety of different ways you could do this, but I'm going to show you guys a really simple and easy one that I personally use all the time. It's called the webhook. So, I'm going to go over here and type in "webhook" — "starts the workflow when a webhook is called". Basically, it's just a server URL that you spin up, almost like my website, leftclick. And every time that server gets a request, it'll show up here with all of the data. Why is this so valuable? Because it allows us to do a million things: connect workflow to workflow, create our own API integrations, do a variety of things we otherwise wouldn't really be able to do. This is basically the glue that holds the internet together, and this is a quick and easy way for you to make your own piece of that. So, this is what the webhook fields look like. We have a test URL and a production URL — don't worry about the distinction for now; we'll just use the test URL. HTTP method is the method we want to allow to access our service. I think one of these actually lets us accept multiple, but we're just going to go with GET for now. The path — we're going to leave as is; the path is just this URL string. Authentication — we're going to turn this off; we don't want any authentication. They recommend that you have authentication, but for simplicity's sake I'm just not going to use any, because otherwise it might be a little too much at once. Then we're going to do "respond immediately". What I'm going to do is set up a test event for this URL. Okay. Actually, I don't know if I can do this with the same workflow — no, I can't, because the workflow is already running. So I'm going to make another workflow really quickly, and I'm going to use that to call this webhook and get some cool data. So why don't I create a workflow. We'll just call this "three n8n concepts".
And then we'll go webhooks, HTTP requests. I'm going to make a new HTTP Request here. It's going to GET this long URL that I copied over here. Okay, I'm going to test it. Then over here, I'm going to grab that data. And as you can see — what just happened, if you were paying close attention, is this ran the second I sent the data from that node, which is in another workflow, over to the one in the current workflow. We received a ton of info: one big object — one item inside of our array of objects here. A single object with a key named headers, which had a bunch of other data underneath: params, query, body, webhook URL, execution mode test. So this might look a little dry and boring to us right now, but what's the value here? The value is that you can run something from one workflow and send it to another workflow really easily. In our first example, we sent no data. But what if I go back to my other workflow — that's number three here — and send some query parameters? I'll say first name: Nick. Then last name: Saraev. Then maybe a UUID — that's just like a user ID — and I type something like this. Now, if I go back here and listen for a test event, then go back and send the test event, when I receive it, not only do I get the headers, I also get the params: inside of our query are the variables I just sent over, as JSON — first name, last name, UUID. So, man, I can do so many cool things with this. It's crazy. I could connect this to any web service out there — just give this URL as the URL where the events are sent — and voila, I basically have an infinite machine. I'll show you guys a quick example right now using ClickUp, but you can extend this example and do whatever the hell you want with it. So, ClickUp is just this project management platform that I use that allows you to send out webhooks.
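Stepping back for a second: the query-parameter round trip we just did, in miniature. The webhook URL below is made up — in real life you'd paste the test URL n8n gives you:

```javascript
// What the HTTP Request node does when you add query parameters:
// they get appended to the webhook URL as ?key=value pairs, and the
// receiving webhook node surfaces them under "query".
const base = "https://example.app.n8n.cloud/webhook-test/abc123";
const params = new URLSearchParams({
  first_name: "Nick",
  last_name: "Saraev",
  uuid: "12345",
});
const url = `${base}?${params.toString()}`;
console.log(url);
```

On the other side, the webhook node parses those pairs back out, which is why they showed up under `query` in the received item.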
And most services that are good will let you do stuff like this. You can see there's a "call webhook" feature here. I'm going to go back to my webhook and copy this URL. Now, this is for GET only, and I don't actually remember if ClickUp sends a GET or a POST, so we're going to see what happens here. Then what I'm going to do is set up "status change": from specifically "hook" to specifically "outline", and we're going to create this. Basically, what this means is that when I change a field called status inside my project manager, it sends a webhook over to this address. So, hook to outline — find something that's in "hook", maybe this one here. Now I'm going to go over here, listen for a test event, then change this to "outline". And I don't remember if it's a GET or a POST request, so we might have to try two or three times — this node unfortunately does not respond to methods that don't match. Methods are, you know, kind of a deeper story, but basically there are a variety of ways you can query a website or a web service over HTTP; most commonly you'll do either a GET or a POST. In the specific instance of ClickUp, it looks to me like they're probably using POST. So I'll go HTTP method, POST. Now I'll listen for this test event. I'll change this back to "hook", then to "outline". And now that it's "outline", we're waiting for this POST request. Let's see if we receive it from ClickUp. Okay, great — looks like we received it. Now, what did we receive, in reality? Well, excuse me — I'm really close to sneezing, but I've decided not to, so I won't. Before, we were looking at the query field; now we're looking at the body field. We see it's built by ClickUp: ClickUp has a number of default things they send over when you do this webhook integration. One of them is the ID of the record.
Then there's the trigger ID, the trigger payload ID, the name of the thing — which was "three ChatGPT prompt engineering hacks you need to start using next" — some hooks, some order index, some information about the person that created it. The point I'm making is that we just created our own integration with ClickUp, and it took me like 30 seconds, realistically, after I got over the HTTP method hump. In practice, services like ClickUp — I'm sure they have API documentation somewhere — sometimes just don't tell you, right at the point of creating the webhook, whether it's going to be a GET or a POST request. So if your GET request doesn't come in, just change the HTTP method to POST and rerun it; one of those will work, which is pretty cool. Okay, so that's how you do it with a test URL. Ultimately, though, we don't just want this workflow to be a test — we want it to be live. So when your workflow moves to production, aka you publish it and make it live to actually interact with the internet, you'll want to go over to this production URL, copy it, and update all of your webhooks to send here. This is what enables you to activate the workflow — if it's pointed at the test URL, you actually won't be able to activate it, I guess for safety or security purposes. It makes the transition to publishing unfortunately involve an additional step, but it also makes your workflows a little more secure. So, that's how you do webhooks. The last thing I'll mention are these OpenAI and AI nodes. Now, I'm going to have many, many videos after this one all about AI agents, since that's obviously the big thing blowing up right now. What I'm going to do here is cover them super briefly — I'm not even going to run anything — but I just wanted to show you guys that n8n is very AI-native.
And so whereas I've been doing some very basic OpenAI calls with this OpenAI module, there's a variety of things you could do. You can create an AI agent, which generates an action plan and executes it using external tools. You can have OpenAI message an assistant or GPT — this is what we've been using. There are some basic LLM chains, and a bunch of specific tools used to do things like categorize information, summarize information, and so on and so forth. The AI Agent — just to give you guys a very brief example — is probably one of the most intimidating-looking modules or nodes, but it's actually one of the simplest in practice. When you create an AI agent, it'll automatically open up this "when chat message received" node on the side. And you'll see down at the bottom of my screen there's an additional button that allows me to chat with my model. But in order to make this work, we need to hook it up — notice this little warning here. We need to go down to chat model, and we need to select the AI module or AI node — the AI service, I should say — that we want to use. For most intents and purposes, I use OpenAI. It's just the best to me. But you could use Ollama, Mistral, Google Gemini, Anthropic — feel free to play around with this for whatever your use case is, or whatever your data privacy and security requirements are. So, I'm going to go down to my OpenAI chat model. The model I'm going to use for this is GPT-4o; I just find it has better answers. And now you'll see the warning sign is gone. Now we have an additional node I can drag and drop here, and I can go to chat and say, "Hey, how are you doing?" We've essentially opened up our own chat window. "Hey, I'm just a computer program, I don't have feelings, but I'm here and ready to assist you." Thanks, ChatGPT.
And on the right-hand side — probably the most valuable part about this — you can see a log of what happened and how many nodes were called to get you the result. This is important because the whole point of AI agents is their ability to call other tools to do things for you. So this says: system, "you're a helpful assistant"; human, "hey, how are you doing?" This is just their prompt setup. So the input to OpenAI was this right here, and because we just asked how it's doing, it didn't really do anything special — there were no additional tools. This is essentially the same as just sending a message to ChatGPT. Now, to make an AI agent really work, you're going to want to add two things. The first is some sort of memory. If an AI agent doesn't have memory, then if I go back here to chat and say, "What did I say in my last message?", it will have no context, no idea: "I don't have any access to past messages or personal data. Each session is independent." Basically, this is a one-shot, send-a-question-receive-an-answer sort of window. But we don't want that. We want this to actually have access to our chat history — to see what we've been talking about over the course of the last 20 or 30 minutes and be able to reference it. So, there are a variety of ways to do this. Basically, you need to implement some sort of database. I'll show you guys how to implement some more complex databases in the future, but the simplest one — the one n8n provides right out of the gate, the one most people on YouTube will be talking about — is this window buffer memory. Window buffer memory basically just allows you to store the history here inside the test window, which is the easiest to do. And the default is five messages, so basically every time you send it a message, it will send along up to five past messages.
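That sliding window can be sketched in a few lines. This is a simplification I've written for illustration — n8n's real implementation tracks roles, sessions, and so on:

```javascript
// A sketch of window buffer memory: keep only the last N messages and
// hand them back as context for each new model call.
class WindowBufferMemory {
  constructor(windowSize) {
    this.windowSize = windowSize;
    this.messages = [];
  }
  add(message) {
    this.messages.push(message);
    // Drop the oldest message once we exceed the window.
    if (this.messages.length > this.windowSize) this.messages.shift();
  }
  context() {
    return [...this.messages];
  }
}

const memory = new WindowBufferMemory(5); // the default of five messages
for (let i = 1; i <= 7; i++) memory.add(`message ${i}`);
console.log(memory.context()); // only the five most recent survive
```

After seven messages with a window of five, messages 1 and 2 are gone — which is exactly why the agent could answer "what did I ask you in my previous message" once memory was attached.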
But just for the purposes of this discussion, I'm going to go 10. So now I'll go to chat and say, "hey, how are you doing", and then, "what did I ask you in my previous message?" — and you'll see it answers that I was asking how it's doing. So we're actually accessing the previous message using this buffer memory. And on the right-hand side, you'll see the log has gotten a little more intense too. Basically, we called the AI agent up at the top. The next thing that happened was we went through the buffer memory, fed this in, and basically added it to some big long stack of message history. Then we fed that in, plus the previous message. Then we said, "Hey, what did I ask you in my previous message?" — this is the input currently being fed into the model on that second call. Then we went down into buffer memory, saved all of this again, came back up, and sent the answer. How about now? Cool — so now we're three levels deep. As you can see, this is just a quick and easy way to load the memory and make sure we're always having a topical, contextual conversation, which is pretty cool. Now, the real juice in AI agents — the reason they've gained so much popularity — is this tool section here, where you can essentially call an action, just like we were doing before procedurally, but you call it using AI, and it can automatically format and get information from different tools. There are a variety of tools already set up for you: Airtable, Baserow, calculator, Gmail, Google Calendar, Google Docs, Google Drive, all this stuff. What I'm going to do in this example — just because I don't want to spend all day on it before my more detailed AI agent tutorials — is select Google Calendar. I'm going to create a new credential, sign in with my Gmail account, and then I'll go over here and close this window.
Now that I've connected my Google Calendar, I'm going to select my specific calendar, which is nick@leftclick.ai. And now that it's connected to my AI agent — this is the "create event" operation; I don't want create, I just want get, so: get all of my events from my calendar. Okay. In addition to that, we have to use this "from AI" feature here — the dollar-sign fromAI expression — and paste it into the expression field. What this does is tell the AI, "hey, I want you to provide your own value for this field." In our case, I want to feed in some options that say: grab data that is after this date but before this date. So if I ask the AI, "Hey, what's going on? Can you tell me what I'm doing tomorrow?" — and I'll just say Jan 28, 2025 — the whole idea is that the AI now has access to my calendar. It also has the ability to call that API, so it can actually go retrieve specific events from my calendar and return them here. As you can see, I have this call with me and my buddy Zach, and it has all of this information. Okay, great. "How are you more generally?" — you know, I can also just chat with it like I'm chatting with ChatGPT or something; I don't always have to use the tool. And the idea is that you stack on three, four, five, ten, fifteen, twenty of these tools. Although I find in practice that when you get past maybe six or seven, instead of calling a tool directly, you want to call another agent which then decides which tool to call — it's basically like a big search tree or something. But anyway, that's more or less the AI side of things. The last thing I'll mention here is that this OpenAI node doesn't just have the "message a model" text action we've been using — there's a variety of other things you can do.
You can create an assistant, delete an assistant, list assistants, message an assistant, update an assistant, analyze images, generate images, generate audio — you have a ton you could do here, which is pretty sweet. If I go down to generate audio and say, I don't know, "Nick is awesome and very pretty", generate it using the Nova voice, and click test step — not only can I generate text and stuff like that, I can also have this generate an audio output I can then listen to. "Nick is awesome and very pretty." You're damn right I am. There's a variety of cool things you can do with this, so definitely don't sleep on the AI nodes — don't just stick to the one I've shown you guys so far. Okay, great. So those are the foundational ones. In practice with n8n, you're probably going to be using these quite often. What I'll do next is talk about some nodes that modify flow. And here are a bunch more you're going to want to read the docs on and add to your toolkit, because these are an everyday sort of thing. The first is If. The second is Filter. The third is Merge. And the fourth is Split In Batches. So let me show you guys a very quick and simple example of the If. I'll go here and add my own trigger — I just want to trigger manually. Oh, sorry, it looks like I already have one somewhere in here. Right, so let's just repurpose this manual trigger for an example workflow that I'm going to build down below — this example trigger. I'm going to click it, and then we're going to use Edit Fields and go down to JSON. This is just a handy-dandy tool that allows you to set your own inputs and outputs. So I can now set my own output, and I'll say first name: Nick, last name: Saraev. And if I test this — if I test my whole workflow — you'll see this broke, because Saraev was not in quotes there.
Got to make sure that all of your strings are in quotes. If I check out the JSON, you see the output of this node is now first name Nick, last name Saraev. Okay, I'm just going to pin this. Now, let's say I want to do something else: if the input is Nick, I want to do something really cool — I want to provide a prize of, I don't know, $100. If the input is Nick, I want to go through my sequence, generate another variable called prize, and have it be $100. Okay. But I only want to do that if the input is Nick. If the input is something else, then I want my prize to be just $5. So Nick gets all the prizes here — he's very greedy. Okay. So how do you actually implement this sort of logic? Well, the simplest way: if I click this plus button and just type "if", you'll see a node pop up that says If — routes items to different branches (true/false). So I'm going to add that in there. And what I'm going to say is: if first name — which, by the way, we could still just drag in if we wanted to — is equal to Nick, then proceed through the true branch, which is up here. And if not, we proceed through this false branch. Isn't that cool? So now we basically have two things that can occur. Okay: prize up here is 100, prize down here is 5. I'm going to click test workflow. This is now going to run, and I just want you guys to see what's happening. I clicked test workflow. We then edited our fields and added Nick as the first name. We then went to the If, and as we saw here, first name was equal to Nick, meaning — oops, if I double-click this again — the output now only goes down the true branch, with one item. The upper branch was illuminated — it's green — and that's how we get to the Edit Fields with the prize equal to $100.
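The If node's behavior boils down to a few lines of plain JavaScript. This is my own sketch of the routing idea, not n8n's actual internals:

```javascript
// What the If node does, conceptually: evaluate a condition per item
// and route the item to the true or false branch.
const item = { json: { first_name: "Nick", last_name: "Saraev" } };

const branches = { true: [], false: [] };
const isNick = item.json.first_name === "Nick";
branches[isNick ? "true" : "false"].push(item);

// Each branch then sets its own prize, like the two Edit Fields nodes.
const prize = isNick ? 100 : 5;
console.log(prize);
```

Swap `"Nick"` for `"Sally"` in the item and the same item lands in the false branch with a $5 prize instead.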
Now, if I change this — instead of Nick, make it Sally or something — and we run this again, what you'll see is the data still flowing through here, because I didn't unpin it. So, let's unpin it. If we test this now, you'll see the data didn't go through the top route anymore; it went through the bottom route. Okay, great. So now, with this example here, why don't we just pretend we're emailing somebody. So, I'll go "draft an email", create my thing here, and we'll say "congrats, you won" — good, expression. Then what I want is $json.prize right over here, so it reads "congrats, you won $5". Title: loser. Okay, great. Then I'm going to go here and create a draft. Voila, I have it. Now, if I go back here to my SOS media queries page, see, it says, "Congrats, you won $5." There's nobody in the To field because we didn't set the email recipient. But pretty neat, huh? We did the same thing with this $100 field. Go over here, and you'll see that I just copied all of the same logic — it says "Congrats, you won $json.prize." It's grayed out right now because there's no data coming in, but you'll see that it'll work if I change the input back to Nick. Then, if I test this, I'll see it follow the top route and also send me an email. Okay, great. So now let's look at the Filter node. What I have here is my little "first name: Nick". What I'd like to do instead is change this a little bit. So instead, we'll create an array of names, and inside of it we'll have Sally, John, and Nick. Okay. And so now, if we test this, we see three entries in an array called names. We have our top-level array, which contains an array of objects, and we have our object; and inside of that, we have a key whose value is a list — an array — of other items. I know the terminology can be kind of a lot, and unfortunately there are many ways to refer to the same thing.
So if something doesn't make sense, just bear with me — it'll click in a moment. Okay, great. Let's say that if this names array includes Nick, then I want to continue with the flow. So I'll set the type to array. Then I want to see if this array of names — feed that in here — contains, and I just want to say, Nick. I'm going to test this out. What you'll see is that we've kept it, because it does in fact contain Nick, which is pretty cool. If instead we wanted to see if it contains Peter, we test this, and you'll notice that we are now following the discarded route. Okay, so there's kept and there's discarded. The thing is, it just goes down the same flow. Whereas the If node sort of split things into two — a true route and a false route — this one just continues down the same flow. If something matches the filter, it will continue; if it doesn't match the filter, it won't. So if I just paste in this Gmail node, we could basically build a very similar flow, but if the item is kept, we could send the prize of $100 instead — we could hardcode that $100 in. So what I'm going to do is check to see if it's kept. Right now the filter says names contains Peter, so it's probably not going to be kept, right? Test workflow: it stops right here and does not proceed any further. If instead I change the filter so that it checks for Nick, then we test it, and what we see is we move on and actually proceed with the rest of our flow. So, this filter just allows us to stop if the data doesn't match our condition, or continue if it does. And you can add as many conditions as you want — you can use AND and OR, and add whatever sort of logic you'd like. I used array logic here, but there's a lot more, like string operators.
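The kept/discarded behavior above maps neatly onto a plain JavaScript filter. A minimal sketch, with hypothetical item shapes — in n8n the "array contains" condition is configured in the Filter node's UI:

```javascript
// Sketch of the Filter node's "array contains" check: items whose
// `names` array contains the target continue down the flow (kept);
// everything else is discarded.
const items = [{ names: ['Sally', 'John', 'Nick'] }];

const keptNick  = items.filter((item) => item.names.includes('Nick'));  // kept
const keptPeter = items.filter((item) => item.names.includes('Peter')); // discarded

console.log(keptNick.length);  // 1 — the item contains 'Nick'
console.log(keptPeter.length); // 0 — nothing survives the Peter check
```

Unlike the If node, there is no second output: an empty result just means nothing continues down the (single) flow.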
You can check whether something exists or matches; there are string, number, date and time, boolean, and array operators, and then there's also a bunch of object ones as well. So, that's Filter. Pretty straightforward, I would say. The last thing — well, two more things. There's one called Merge and then there's Split in Batches. What I'm going to do here is have two routes, or two outputs of a node, and then combine them back into one. And I'll show you guys what I mean. Remember earlier how we had some HTTP requests? What I'm going to do is — I'll go first name Sally, and then, let's just say — actually, let's go person one and person two. Just for the purpose of this example, we're going to have two people. Person one is Sally; person two is Nick. Okay. Next, we're going to add an AI node — down to OpenAI — and I'm just going to message a model. Then here, I'm going to say "write a detailed, fun story about," and then I'll reference, I don't know, person one. So, $json, person one. This example is sort of silly, if I'm honest, because we could just hardcode the names in there, but I wanted to do this to show you guys how the logic of the Merge node works. I'm going to select GPT-4o. And because this is just a very quick and easy example, I'm going to add a user prompt. Okay, it's now going to go and write me a fun story about person one — and person one was Sally. So we're going to see it in a second. Very fun. Thank you very much for the detailed story. Cue Jeopardy music. Cue elevator music. All right, it's taking its sweet time. Could be for a variety of reasons — I might have a little bit of rate-limit action going on my end just because of all the examples that I've run, but it could also be something else. I don't know. Let's see here. Okay, cool. Looks like it finished.
"Once upon a time, in the vibrant city of Elmssworth, where the streets hummed with the rhythm of hopeful dreams and endless possibilities..." Okay, so we just wrote a cool story about Sally. What you can do in n8n is actually connect the same output to multiple future nodes. So what I've done is I have one over here, which I'm going to rename "write story about Sally." Then I have another one over here, which I'm going to call "write story about Nick." Then down here, I'm going to say write a story about person two instead of person one, which is now going to equate to Nick, right? If I test the step, the same thing happens: it calls GPT-4o and writes me a cool story. The issue is, if you think about it logically, we now have two routes: one top route that writes a story about Sally, and a bottom route that writes one about Nick. So if we wanted to do something with these stories — say I wanted to email them to somebody — I'd kind of have to repeat the same logic on both: a Gmail node up here and a Gmail node down here, right? I'd have to duplicate it. n8n provides a pretty simple, easy, built-in way to avoid that: the Merge node. It merges data from multiple streams once data from both is available. If you click on it, you'll see there's a mode: append, combine, or SQL query. I'm just going to stick with append for now and feed in these inputs. And in this way, I could actually just write one Gmail node here instead of two, and maybe append both of these stories or something. But let me actually show you what this looks like. Now I'm going to test this workflow from end to end. You'll see it first does the top route, and this one is orange because it's filling out the story about Sally right now — this node is currently active.
The merge is waiting to fill in the second the top route finishes. And now that I'm thinking about it, I probably should have set some character limit on the story, because it's probably writing a lot. RIP my tokens. After it's done with the story about Sally, it's going to do the same thing: write a story about Nick. Hopefully this finishes before the next ice age. Okay, that took way too long, but just make sure you put in some sort of limits next time you do one of these calls; otherwise you'll be waiting here until the end of time. What we see as our final product is: the top route completed and populated the merge, and the bottom route also completed and populated the merge. We had one item from the top route and one item from the bottom route, and we carried both forward. What you'll see is that the output of this merge is now two items instead of one. The reason it's two items is because we used append. So now we have story number one and story number two. We don't actually have to output two things — we could output just one item instead — but because in n8n outputs are arrays of items, you have a choice there. Now, since we output two items, what we could do is add our little Gmail node, stick that down here, and connect it. Then I'm just going to pin this. And what I could do is email myself this story. I could say "story about..." — and here's where I've backed myself into a corner, because I need to write a slightly more difficult line of code. I don't have access to a single variable here that contains the value I'm looking for. This is person one, Sally; person two, Nick. So I could select person one, but then my second run would also say Sally, right? Both of these would say "story about Sally," even though they'd have different stories.
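The append behavior described above is simple to picture in code. A minimal sketch with made-up item shapes — in n8n this is just the Merge node in append mode, not something you'd write yourself:

```javascript
// Sketch of Merge in "append" mode: one item arrives from each branch,
// and the output is simply both items in one list.
const topBranch    = [{ story: 'A story about Sally…' }];
const bottomBranch = [{ story: 'A story about Nick…' }];

const merged = [...topBranch, ...bottomBranch]; // append: 1 + 1 = 2 items
console.log(merged.length); // 2
```

Combine mode and SQL mode do fancier matching between the two streams; append just concatenates, which is why two one-item inputs yield a two-item output.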
So what I'm going to do is look at the actual story. Sorry — I'm going to go to the merge and look at the actual story here, which, now that I think about it, is just in the JSON. Then I'll go message, then content. If it includes the term Nick, I'm going to return "Nick"; otherwise, I'm going to return "Sally." That's how that works. If this contains Nick, I return the term "Nick"; if not, I return the term "Sally." This is just a shorthand way to use if/else logic — same as what we had before. So, I'm going to pop this puppy open and take a look at the data. "Story about Nick. Nick was an ordinary guy with an extraordinary dream: he wanted to become the first person to ride a unicycle all the way across the United States." Then Sally: "Once upon a time, in the quaint town of Lavender Hill..." I think this was the one where I timed out or something because of the rate limit, unfortunately. It doesn't look like it generated anything more than "Once upon a time, in the quaint town of Lavender Hill." But I could just rerun this. Let me make sure the prompt is a little bit shorter — less than 100 words. Let's just do that. Good. Awesome. We should be good now to actually run this puppy. Let me just go over here and delete these examples that I don't need, in preparation for the next run. And cool — we queued up two Gmail drafts. Now we have a story about Sally, who's a curious hamster, and one about Nick, a curious inventor. Lovely. I wonder why it used the term "curious" both times. But anyway, I hope you guys see now that the Merge node basically connects two things together, while the If statement sort of does the opposite — it creates two routes, right? This is kind of neat when you compare and contrast them like that. So I believe we now have everything we need except for the Split in Batches run.
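The inline if/else above is a ternary expression. A minimal sketch of the subject-line logic, assuming the AI node's output lives at `message.content` (the exact path depends on the node's output shape):

```javascript
// Sketch of the ternary used in the subject line: check whether the
// generated story mentions Nick and label the email accordingly.
// In an n8n expression this would sit inside {{ … }}.
const item = { message: { content: 'Nick was an ordinary guy…' } };

const who = item.message.content.includes('Nick') ? 'Nick' : 'Sally';
console.log(`Story about ${who}`); // Story about Nick
```

`condition ? a : b` is exactly "if condition, return a, else return b" — handy when a whole If node would be overkill for one field.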
Split in Batches is a little trickier to conceptualize, so I'm going to show you a real example from a source that I used to extract a bunch of data. Let me take a quick peek here at — I think it was the deep personalization system. Yeah. I created a video on a deep personalization system a while ago, and as part of it, I'm waiting over here for data to come in through a webhook. I send data into this webhook and then use it to call an API that gets a bunch of dataset items. The dataset items are pretty big, right? As you can see over here, it's a bunch of data about specific leads. But notice how it says 128 items above. Anytime you output more than one item in n8n, you can loop over every item, perform something individually on just that item, and then, once you're done, go back to the loop, over and over again until you're completed. In my case, I had a lot of items — 128, for Christ's sake — and I wanted to run my five-column personalization flow, similar to what we saw earlier, and then add a row to my spreadsheet. Unfortunately, every time I did that, I consumed one API call, and a lot of these platforms have pretty intense rate limits. So one issue I found very frequently was that I just kept getting timed out. It would say 400 error or 403 error or whatever. The gist of that is: I'm over the rate limit, and they're not going to allow me to make any more requests for a certain amount of time. So instead of submitting all of those requests simultaneously, I added this to a Loop Over Items node, and then I added a dedicated Wait node. The Wait node is a simple node in n8n that allows you to wait for a certain number of seconds — in my case, five.
In this way, I was able to take one item, go from start to finish, wait 5 seconds, and then loop back and proceed with my next item. I basically just went one after the other, over and over again. So that's just to give you guys some context on what that might look like. If I go down here and set my items — I'm just going to use a feature in n8n that allows you to set your own test data. So I'm going to say there's "first item" and then "second item" as my data. Where is the edit fields? Right over here. If I zoom way in, you'll see that I'm now outputting two items, right? So what I can do is add Loop Over Items (Split in Batches) and set the batch size to one. What it'll do now is go through the first item and then the second item. When you add a Loop Over Items node, it immediately adds a "replace me" node, which is what you're supposed to replace with the thing you want to do. In my case, I just want to wait 5 seconds. So I'm going to go over here, add a Wait node, and wait exactly 5 seconds on the loop route — the route that's going to be looping over my items. Basically, for every item, you can think of this as: I want to wait 5 seconds. Very cool. Then the output of this needs to feed back into the input — this is the tricky part. So when I go through the loop, I'm going to click test workflow. I'm going to generate two items, pull one item out, wait 5 seconds, go to the second item, and wait 5 seconds. Is this going to do anything? No. But notice that there's both a loop route and a done route. In n8n, once you're done with the loop route, it just automatically goes to the done route. So I could do something like this — maybe I send myself an email.
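The loop-plus-wait pattern above can be sketched as an async loop. This is just an illustration of the control flow — `processInBatches` and the item strings are made up here; in n8n the loop, the wait, and the "done" route are separate nodes, not code:

```javascript
// Sketch of Loop Over Items + Wait: process one item at a time with a
// delay between iterations to stay under an API rate limit.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processInBatches(items, delayMs) {
  const results = [];
  for (const item of items) {        // batch size 1: one item per pass
    results.push(`processed ${item}`); // the "replace me" step
    await sleep(delayMs);              // the Wait node's job
  }
  return results;                      // the "done" route fires here
}

processInBatches(['first item', 'second item'], 10)
  .then((out) => console.log(out));
```

The important structural point survives the translation: work happens once per item, the delay sits inside the loop, and only after the last item does control fall through to the "done" path.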
Let's just draft. And let's say, you know, "done looping — you successfully waited 10 seconds." Awesome. So we're going to wait 5 seconds, then another 5 seconds, then go to Gmail. Okay, so that's that. I'm going to test this workflow. That's the first 5 seconds here, and that's the second. Once this is done, we can send over an email draft — or queue up an email draft, I should say — which is right over here. I should note that I may have run this twice. I feel like I just ran this twice; looks like it's carrying all of these here. Oh, yeah, sorry — it'll output all of the records that you feed it, basically. So I fed in two records, and then on the next run it fed both of those records in as input to my Gmail branch. What I could do is take these two items and convert them into just one item by combining them, and then I wouldn't have to deal with this, which is kind of neat. That's probably what I would do in practice — I wouldn't actually proceed here with two items; I'd just do one. There's a really cool built-in way to do this in n8n called Execute Once. If you go to the settings page of any node and click Execute Once, you stop the multiple executions regardless of the number of items that precede it. So I just clicked Execute Once, and instead of sending two emails, it's only going to send one. That's run number two. Then it'll feed one item as an output. So if I refresh this now, instead of two, I'm only going to have one. Quick and easy hack. And yeah, because we're building stuff live, hopefully you guys get to see the applications of this in real time as we put something together. All right, so now I think we're at the point where we can realistically build out substantially more complicated flows.
What I'm going to do now is basically run a test of sorts, where we take all the information I just tried to shove into your brain and use it to build out a flow that actually does something business-worthwhile — a flow I've sold many times before, and a flow that's made people a fair amount of money. This is what the flow looks like right now. I'm actually going to simplify it; I've decided to do it a little bit simpler over the course of the last 20 minutes of thinking about it. To make a long story short: there's a service out there called SOS — Source of Sources — and I mentioned it at the beginning of the video. They send out queries every day from journalists who are looking for people that match their criteria to answer questions. For instance, this one up here is from Jordan Rosenfeld, who's saying, "Seeking healthcare/Medicare specialists to weigh in on how RFK Jr. and Dr. Mehmet Oz might affect benefits or healthcare if appointed." Note that this is usually US-specific. I think they have a global arm sometimes too, but most of this is going to be US-specific. And then it says, "Hey, I'm looking to speak to people in a nonpartisan way about the possible changes — things like Medicare, healthcare, insurance. These are two stories. Specify what you're commenting on. Robust and longer answers are prioritized. You must have the relevant experience. We'll link back to your site. Please include pronouns." There's a lot going on here, right? Basically, we want to take this whole long email and extract all of these queries. So this would be one, this would be two, this would be three, and so on and so forth. And we want to feed each into AI and have the AI give us a very simple answer: one, is this relevant to me, based off some characteristics I'm going to give you?
And two, if it is, can you pre-draft an email for me? So, pretty simple, pretty straightforward stuff, right? Let me show you how straightforward this flow can be given what you now know. I want you to treat this as a test, basically. You've made it this far — let's see if you can build something that's business-worthwhile. If something I'm saying doesn't make sense, pause the video and look for the specific part where I covered the concept, because that's basically the purpose here: I want you to reaffirm your knowledge and see that you can now do something pretty cool. Okay, the first thing I'm going to do is look for a Gmail trigger. I'm going to select my credential — sorry, not create a new credential — select my existing credential, Gmail account 3. The way this node works is that it extracts emails from my inbox that match my specified filters, and it does so on the schedule I give it: every minute, hour, day, week, month, custom, whatever. I'm just going to say once a day for now — zeroth hour, zeroth minute, simple stuff. The event I'm looking for is "message received." Then I need to add a filter down here; there's one called "search," where I basically just look for emails from SOS. Luckily for me, they're all formatted in very similar ways: "SOS media queries." So if I want to get all the emails from SOS, this is just what I do. But I don't want just any email — I want one specific email, just to show you guys what I'm working with. Later on, we'll separate it out so it works with any "SOS media queries" email. For now, I just want to grab this one. Here's what you do: you just go "subject:" and feed this in. This is a Gmail search operator, so you can use the same filtering mechanisms you use for your own emails in Gmail.
Just feed it in over here and it'll work fine. Okay, now let's test this out and grab the data. We've received a ton of data — this is an object with 11 items inside — but this "simplify" option is sort of working against us here. The Gmail trigger natively always has simplify on, and we actually want to get rid of it. So I'm going to switch to expression, just like we know how, and type in false. This is the same, by the way, as just turning the toggle off — I just want to be clear that I always use the expression field. Now I'm going to get the actual data of the email, which is way more, as you can see here. Instead of whatever it was before, it's now 13 items, and the headers object has 27 items buried in it just in and of itself. The thing we're looking for is this text variable, which is the same as what we had before: "Tell friends to join Source of Sources. It's always free. Want to know how to strengthen your relationship with journalists," blah blah. We're then going to pin this, so we have access to all of this JSON in future nodes. And I think I got a little lost here with my Gmail trigger. Now let's go ahead and split this data — let's get it so that it's just a bunch of these individual queries. How are we going to do this? Just think about it: we have a bunch of text-processing features available to us, and we know a little bit about JSON and JavaScript and stuff like that. What are some ways we might realistically be able to do this? Well, the way I see it, the great news about Source of Sources — and the previous service, called HARO — is that they use the same characters everywhere. They have a bunch of stars here, and then between every story there's basically just these underscores. So, underscores there, underscores there.
When you see a repeating pattern like this, it becomes very easy to process it in a no-code tool using a function called split, where you feed in a whole big string and split it based off of something you choose. I'm probably going to need to split this twice. First, I'm going to split based off this star line up here, so there's the top section and then this whole bottom section. Then I'll grab the bottom section and split it based off of — probably this underscore divider — and then I'll be able to get the individual sections. If that sounds like rocket science to you right now, don't worry. We're going to go over here and press edit fields. For now I'm just going to go manual mapping and click add field. This allows me to create my own variable, basically, based off of the previous node. So I'm going to type "above" — actually, yeah, let's call this "below." Then I'm going to feed in — where is this text? Ah, it's right over here. We'll go to the expression field, and I'm just going to type $json. — and what I want is text. Okay, so now if we open up this big fat editor here, we've got all the text right here. Pretty sweet, right? But we don't want all the text — we only want the stuff that's below this line. So I'm going to copy the line, go over here, and press dot. Now we have a bunch of functions. I haven't covered all of these functions yet — I will in future videos — but one of the functions I use all the time is called split. Just press split. All we need to do now is feed in the thing we want to split by, so I'm going to feed in what I just copied a moment ago. And now, instead of just seeing the string, we actually see an array.
This is what arrays look like when they're output in n8n: it says "[Array]" and then a colon and a space, and then we have the whole array. This array is split based off of wherever that divider was — I think it's going to be split right over here. The last characters before it say "Information Week." Okay, so we go to "Information Week," I'll zoom way in, and yeah, that's what the array looks like. We have a comma — so this whole thing was one string — then quotes, a comma, a space, and another quote, and that's the beginning of everything underneath it, which is awesome for us. Okay, so this is basically what we get. The really cool thing this allows us to do is pull items out of an array. This is an array with two items inside of it: one string that's everything above the star line, and another string that's everything below it. We can just go .last(), and now we pull out the actual string itself — all of this. How cool. Now, what else could we do? We could split this again and extract everything based off of these underscore characters. So I go .split(), feed this in, and now we have another array. How many items are in it? I don't know — let's find out. So I'm going to test the step. Click test step, and now we have a bunch of different items. So yeah, just make sure you set the type to array here anytime you're screwing around with data. Otherwise — I believe they have a setting like "ignore type conversion errors" or something; I haven't used that one before. I just set the specific data type that I want. In this case, I'm creating an array — I'm splitting stuff to turn it into an array — so I'm going to do so with this array dropdown. Okay. But anyway, now we have a big array. Pretty cool, right?
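The two-stage split above can be sketched in plain JavaScript. The divider strings here (`*****` and `_____`) are stand-ins — the real SOS email uses its own specific runs of star and underscore characters — and `.at(-1)` is the plain-JS equivalent of the `.last()` helper n8n expressions provide:

```javascript
// Sketch of the two-stage split: cut the email body at the star line,
// take everything below it, then split that on the underscore divider
// to get one string per journalist query.
const body = [
  'Header stuff',
  '*****',
  'Query one',
  '_____',
  'Query two',
  '_____',
  'Query three',
].join('\n');

const below   = body.split('*****').at(-1);          // everything under the star line
const queries = below.split('_____').map((s) => s.trim());

console.log(queries.length); // 3
console.log(queries[0]);     // Query one
```

Each `split` turns one big string into an array of smaller strings, which is exactly what sets up the Split Out step later.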
And it looks like we have 19 items in total. What I want to do with this is, first of all, pin it. Second, go over here to extract titles. I'm just going to copy this because I don't want to have to rewrite the whole prompt — I think that would take 15-ish minutes or so when all is said and done. What I want to do is grab this data, which looks just like this, and feed it into AI. I just want AI to tell me: hey, am I a good fit? If I'm good, then go ahead and draft an email. If I'm not, then don't. And I know from experience that this is sort of split into two parts. So, I'm going to copy this over and use it to create my prompt. What does this prompt look like? "You're a helpful, intelligent administrative assistant." Very on-brand for me. That's the system prompt. Then: "Hey, I'm a business owner specializing in AI, automation, marketing, and software." Then: "Below is an email request from a journalist looking for information about their story." Then I'm just going to paste a bunch of data in. "Your task is to determine whether it is relevant to me, and if so, pre-draft an email that answers their questions using my tone of voice" — and I'll say casual, Spartan. "Some information about me: I own 1SecondCopy. Here are my links." Sorry, I've just got a lot going on here because I'm piecing this together from two different prompts. But anyway: "Some information about me: I own 1SecondCopy, a successful AI marketing company. My name is Nick Saraev. Here are my links — don't use unless asked." And then: "Below is a request by a journalist for outreach. Write a succinct, Spartan email responding to each query." Spartan means be concise. "Use the following format." Okay.
Then I'll say: if it is relevant, return a JSON object as follows — relevance: true, plus an email_body field where the email body goes. If it is not relevant, return false for relevance and leave the email body blank. Good. "Use the following email format (email template) when responding to relevant inquiries." Cool. "Make sure to respond in JSON." Very sweet. Okay, great. And now all I'm going to do is provide as input the specific item that I'm referencing. What you'll find is that when you reference an array like this, it grabs a specific item of the array: $json.below[0]. I don't actually want that. What I want to do is loop through this array, and for every item, feed it in as input. That selector grabs the first item with the zero — everything is zero-indexed, so this is 0, 1, 2, 3, 4, and so on and so forth. In order to do that, we're going to have to take our data and do a little bit of pre-processing first. We want to — sorry, "turn a list into separate items" — so we're using the Split Out node. I don't believe I've talked about Split Out, but rest assured, all it does is turn an array into a bunch of items so you can run them one by one. Okay, great. So: the field to split out is "below," no other fields, and now we have 20 items on the right-hand side. Now we can feed every one of those 20 items into AI. All I'm going to do now is feed in this below field. Instead of having to go $json.below[0], below[1], below[2] — because before it was an object with a "below" key holding an array of about 20 records — we've now split it into top-level items. It's just below, below, below, and it's going to run once for every item we've received. So yeah, that's that. We should be able to get some JSON here.
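The Split Out behavior described above is easy to picture in code. A minimal sketch with hypothetical field names — in n8n you just name the field to split out, and the node does this for you:

```javascript
// Sketch of what Split Out does: turn one item holding an array into
// many items, one per element, so downstream nodes run once per item.
const input = [{ below: ['Query one', 'Query two', 'Query three'] }];

const splitOut = input.flatMap((item) =>
  item.below.map((value) => ({ below: value }))
);

console.log(splitOut.length);   // 3 items instead of 1
console.log(splitOut[0].below); // Query one — arrays are zero-indexed
```

After this, a downstream node can reference `$json.below` directly instead of indexing `below[0]`, `below[1]`, and so on — the loop is implicit in n8n's one-run-per-item model.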
I don't want to run it on all 20 as a test, though, because all we're doing is testing. So what I'm going to do is go in between these nodes and type in "limit." Limit allows us to restrict the number of items. I only want to run this twice to start, and I want to see what happens. So I'm going to do this on two items, see what those two outputs are, and if they're good, we'll continue — we'll pin them and move on. If not, we'll be able to modify them before we actually waste 20 items' worth of tokens. I'm just going to pin all the data moving forward. Then I'm going to go over here to limit and press test step. Okay, I'm going to pin this now. So now we have the two items. I'm going to go over here and test this step. Looks like we're now producing output — and it looks like both of these were false. So I'm going to want to adjust this limit a little bit. Maybe we'll try the last three items instead. We'll overwrite the data that's pinned, pin it again, and go back to extract titles. Now we're feeding in the last three: enterprise genAI users — yeah, this is probably me — edge AI, and stuff like that. Looks like they kept their AI entries at the end, so that makes sense. We're now doing three API calls, and it looks like there were two trues. So: true up here, true up here, and this one looks like it was just junk data, so we could actually cut that out — I'll show you how to do that later. But now we actually have emails drafted. "Hey Pam, I own 1SecondCopy..." blah blah, here's a big answer to all of these questions. That's pretty cool. All we need to do now is go to Gmail and draft. Oh, you know what? We need to grab the email, don't we? I don't think I actually grabbed the email address of the person. Yes — I did not.
So let's change our prompt a little bit and edit it so that we actually output the email, too. I'm going to go back here and add — let's just call it "their email." Okay, there you go. That should probably be sufficient. Let's test this one more time. Now we should extract their email address as well, assuming the AI does what its silly human overlord tells it to do. And now that we have the email address, we'll be able to use it to feed into a draft — yeah, I kind of forgot about that. So we'll go back to Gmail. I'm going to go draft: create a draft, credential Gmail account 3, resource draft, operation create. Then I'll set the subject to "re: SOS inquiry." I'll go to message, and all I'm going to do is go back to my schema and drag my email body in here. Then I'm going to add an option for the To email, and we're just going to feed that puppy in there. Now we can test this out on three. So let's go one, two, three — we should have three items. All of them just wrapped up. We go back to my email inbox, down to drafts, and we'll see that I created one for each. It looks like I created an additional one, but I'll cover that in a second. The first was this one to this lovely Pam lady. Very nice. The second was this other one to this lovely John fella. Very cool. Everything followed my email template, no issues. And then, yeah, it looks like we just used one additional email. I think the reason we drafted that additional email is that we technically output an item even for the irrelevant query. So I wonder if we could just not output an item — that would be one way to do it. Or, you know, we could just add a filter, like I was showing you how to do before. So let's go down here. Let's go to item three, then down to $json.message.content.
relevance. So basically, if this is equal to true, then we'll continue, and if not, we won't. We should get two, right? Yeah, there you go. We kept two items, and now we're only going to send emails for the two items that passed our filter. Because I always like to do an end-to-end flow, I'm going to discard some old drafts here, delete everything, and run this one final time just to show you what all of this looks like. We'll use the Gmail trigger, edit the fields, split them out — I'm going to set the limit to three items for now, because I don't want to draft a bunch of emails — extract the titles, apply the filter, and create the drafts. Let's run this from start to finish. We're now extracting the titles — and by "extracting the titles" I mean we're filtering and creating an email. The end result is two drafts in our inbox, which is kind of neat. If we wanted to take this even further, we could go down to Gmail and add a label to the message. Just to make it abundantly clear: I select the message ID that we just created — it's right over here — and I could call the label "public relations" or something. Now it'll automatically apply a label to these drafts in my inbox, so I know these are public relations inquiries and not email drafts for some other purpose. That's a pretty cool flow, but I'm just going to stick that right over there. And because I'm already getting mixed up with "extract titles", I'm going to rename the node "filter and respond to email", or maybe "create email body". There you go — a little simpler. So yeah, we used a ton of new functions here. We used the split out, which I don't actually use super often, and we used the filter, which I talked about and covered.
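The relevance filter described above can be sketched in plain Python. The field names here are hypothetical, modeled on the `json.message.content.relevance` path from the build; in n8n the Filter node does this per item for you.

```python
# Sketch of the relevance filter: keep only items whose AI output flagged
# the inquiry as relevant. Field names are illustrative placeholders based
# on the json.message.content.relevance path used in the workflow.
items = [
    {"message": {"content": {"relevance": True,  "email_body": "Hey Pam..."}}},
    {"message": {"content": {"relevance": True,  "email_body": "Hey John..."}}},
    {"message": {"content": {"relevance": False, "email_body": ""}}},
]

# Like n8n's Filter node, this runs the condition once per item.
kept = [item for item in items if item["message"]["content"]["relevance"] is True]
print(len(kept))  # 2 — only the relevant items continue to the draft step
```

The junk-data item simply falls out of the flow, which is exactly why the extra draft stops being created once the filter is in place.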
And the limit is basically an internal tool that I like to use to make sure I'm not burning through token usage or spending a ton of executions. I think this system is a lot cleaner than that other system I was going to build with you before; it's also a little simpler. Here I have a ton of code, unfortunately, but we'll get rid of that and use this as the template, and I'll also make it nice and easy to see. I hope you appreciated this. Let's do a quick recap, because in my experience, recapping the material is how you remember it. We started off by talking about fields — specifically, two types of fields: fixed fields and expression-based fields. I made the case for why you should favor expression-based fields over fixed fields wherever possible: you can do the exact same thing anyway, and you also get a ton of extra options. So I always toggle on "expression". I then showed you how all of these different field inputs, even the little toggle buttons, are really just expressions at the end of the day: if it's a toggle button, it's true or false; if you're selecting a Google Sheet or something, there's actually an ID behind it. So now you know a lot more about the underlying data and the way n8n structures things. I then covered JSON — JavaScript Object Notation — in probably excruciating detail. Hopefully that wasn't too boring. We covered a bunch of different variable types. To recap: there was a string, a number, a bool (boolean — true or false), an array, and also nested objects — you can bury JavaScript objects inside JavaScript objects. There were key names and values.
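To recap those variable types in one made-up JSON object (every key and value here is invented purely for illustration):

```json
{
  "a_string": "hello",
  "a_number": 42,
  "a_boolean": true,
  "an_array": ["first item", "second item"],
  "a_nested_object": {
    "inner_key": "objects can contain other objects"
  }
}
```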
We learned a little about the formatting — the quotes, the commas, the brackets, and that sort of thing. But the reality is, if you stare at JSON long enough, give it a good squint or two, eventually it starts making sense. That's why I've changed all of the inputs and outputs inside our n8n course tutorial to favor JSON, just so you can see it and get used to it. From there, we covered how data is represented in n8n. Specifically, all data — all inputs and outputs in n8n — is structured as an array of objects: there's a top-level square bracket, and inside there's a bunch of JavaScript objects nestled in it. If you want to reference the immediately previous node, all you need is the `$json` syntax. If you want to reference a node from two, three, four, or n nodes back — n standing for whatever number — you still use the dollar sign, but you have to specifically reference the name of the node. No big deal: n8n actually does a lot of that selection for you, and there's usually a little dropdown or toggle button you can click to get there. We talked about how to reference earlier fields and how to do some backtracking with dot notation and square-bracket notation. Then I also covered some common gotchas. The most common gotcha in n8n, just to be clear, is that people don't understand it's an array of objects they're referring to, and n8n will run once per item in the array. If you're trying to reference one item but you're really referencing all of them, you're obviously going to get an error message; it's not going to work. Likewise, if you're trying to reference a number of items but you only reference some subset of them, you'll get an error message — that dreadful red text I think we all hate so much.
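The array-of-objects shape and per-item execution can be illustrated in plain Python. The data here is made up; inside n8n you'd write expressions like `{{ $json.name }}` for the previous node or `{{ $('Node Name').item.json.name }}` for an earlier one instead.

```python
import json

# n8n passes data between nodes as an array of objects: a top-level square
# bracket wrapping one object per item, each with its data under "json".
raw = '[{"json": {"name": "Pam", "tags": ["press", "ai"]}}, ' \
      '{"json": {"name": "John", "tags": ["press"]}}]'
items = json.loads(raw)

# A node effectively runs once per item in that array.
for item in items:
    data = item["json"]
    # Dot notation in n8n ($json.name) corresponds to key access here;
    # square-bracket notation handles array indices ($json.tags[0]).
    print(data["name"], data["tags"][0])
```

Keeping this "list of items, run once each" model in mind is what prevents the single-item-versus-all-items referencing errors described above.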
So, understanding that everything is just an array of objects goes a long way towards insulating you against that. Finally, we covered some foundational nodes. We started off with the HTTP request node, which allows us to do the same thing your browser does when you access a website: pull all of the code. I also showed you how to do cool stuff with AI, where you take the extracted or parsed components of that node — like my LeftClick website — and turn them into some sort of AI-structured data: a summary, some interesting tidbits about the website, maybe the contact details if you can find them. I then covered webhooks. If you remember, webhooks are the glue that holds so much of the internet together. I showed you how to send a request from one workflow and receive it in another workflow, and I also showed you how to use a third-party platform — in this case ClickUp, but you can really use whatever the hell you want — to send a request upon some trigger. So now I'm connecting n8n and ClickUp with my own native integration without necessarily having to know much code. At the tail end there, I covered the OpenAI and AI Agent nodes. The real value of n8n in comparison to most other no-code tools, in addition to its ability to self-host, is its AI agent functionality that works fresh out of the box. Rest assured, we're going to be covering a lot more of that moving forward. But we built a very simple example where I asked my AI agent what was going on for the day, and it pulled data from my calendar intelligently while still being able to talk back and forth in natural language. We covered some nodes that modify flows as well, including the If, the Filter, the Merge, and the Split in Batches.
We covered the Split Out, and finally — while I was building that last example — the Limit. From there I showed you how to build a super simple, easy, essentially email-autoresponder-type flow, but one that serves a very high-leverage, high-value business purpose: parsing a journalist inquiry, determining whether it's relevant to us or not, and if it is, actually writing out a response based off a template. Awesome. You guys now
Asset-based AI Lead Generation
have a solid understanding of n8n fundamentals, including how nodes connect, how data flows through workflows, and the core concepts that power more or less every automation I'm going to build. Hopefully you've moved from complete beginner to somebody who now understands the n8n platform at at least a somewhat technical level. So it's time to put those foundations to work and build our very first high-value system. We're now going to build an asset-based AI lead gen system. An asset-based system just means we're going to scrape prospect data, use it to create customized lead magnets — like a personalized newsletter or report — and deliver them via an automated email campaign. The reason this approach is so valuable is that it generates 5 to 15% reply rates, because you're giving value before you ask for anything. Let's dive in. So today I want to build an automated asset generator. I don't entirely know exactly where I'm going to start or how I'm going to do it, but this is a roadmap of my tentative thoughts. I've built enough of these systems at this point to have a pretty good understanding of what I think I'm going to do, but as I've mentioned, I'm going to leave in the detours and any stumbling blocks along the way so you can see what an actual build process looks like. So here's what I'm thinking the roadmap will be. In case you don't know, when I say "asset" here, I really mean anything: you could generate PDF slide decks, onboarding documents, newsletters (that's what we're generating today), or cold email sequences. The value of an automated asset generator is that it gives a ton of value to somebody right off the bat, and you can also use it in your own business to generate things you want. Here's the mindset. We're going to start by getting LinkedIn data.
Now, LinkedIn, for those of you that don't know, is what's called UGC: user-generated content. This is my own LinkedIn profile, and all of the data on this page — most of it, anyway — is data that I've written myself. The value here is that I've written most of this in my own tone of voice. So if somebody does something with it — if they use it to create an asset or something — I'm a lot more likely to find it valuable. Okay, that's what all of this rests on. We're going to start by scraping LinkedIn data. The idea after that — let me just skip ahead — is that I'm also going to scrape their company website, and we're going to get the URL of that website from the LinkedIn data. Once we have the LinkedIn data and the website, we're going to feed them into AI and generate a summary. Then with this summary, we're going to generate a customized newsletter using that data, convert it into HTML, and create a Google Doc with it. Ultimately, you can do whatever you want with it: you could send it to your prospects through email, fax it to their address, print it out and literally mail it to their place of business. You can do whatever you want with what I'm about to show you — that's what's really cool about it. And because cold outreach — reaching out to people that haven't opted into marketing communications with you, for those that aren't aware — is getting more and more saturated, this is a very high-value way of raising the bar and making it seem as if you spent a ton of time and effort creating something valuable for your prospect before you've even talked to them, which they tend to really like. All of this is in pursuit of really high reply rates on an email. Let me walk you through how all of that works.
So yeah, we're going to use LinkedIn as the primary data source, along with the prospect's website. This is my own website. Then I want to combine these and pump them into Apollo. Apollo is a way to scrape mass lists of LinkedIn profiles. And instead of paying for Apollo, I'm probably going to use this Apify scraper, which allows me to scrape Apollo, which scrapes LinkedIn. This just lets me get the data a lot cheaper — $120 instead of whatever the heck Apollo is charging me. The end result is going to be something like this: a newsletter I generated using a similar approach a while ago. "Design meets real estate: the Evermore approach to bespoke living." The idea is that we created a newsletter for them called the Evermore Edit. We created topics, wrote said topics, talked about the founder of the business and the realtors — anyway, it's just super customized, super perfect for them. In our case, maybe the pitch is: "Hey, I want to write you customized newsletters, super high quality — so I actually went ahead and wrote one for you. Here it is." Okay, hopefully everybody's on the same page. Let's now get into actually building the system in n8n. The very first thing we need to do is grab LinkedIn data and feed that search into a scraper; I'm going to use Apify for it. What I've done is already assemble a giant list of people I'm interested in working with through Apollo. The way I did this was by adding some filters up at the very top: I wrote "creative agency", and under job titles I wrote founder, partner, and co-founder. You can also add additional filters to constrain the location; in my case, I just wanted to get this up and running quick and dirty, so I didn't constrain the location. What we get now is a URL up at the top of the page.
What I'm going to do is go to my Apify scraper by Code Pioneer, paste in the URL, and click "save and start". The reason why is that I want to verify this works on Apify first, before I try putting it into n8n. Okay, sweet. We're now done with this. If I go to the output, you'll see a bunch of information from our search: the person's location, some email addresses, names, and so on. So we basically have all of that LinkedIn data, including the title, the industry, and so forth. What I want to do now is get that inside of n8n. So I went over to their API and found this endpoint: run actor synchronously with input and return output. What's really cool is that in n8n you can copy the curl of any API request and just paste it in. So that's what I'm going to do: HTTP request node, import curl, paste. What this does is map all of the fields for me, so I don't have to do a ton of work. There are a few fields I obviously still have to fill in: the bearer token over here, and I see some body content-type stuff, so let's sort that out. Now, the actor ID — that's just the ID in the URL up here. So that is my actor ID; I'll copy and paste that in. Underneath that I see "Accept: application/json" and "Authorization: Bearer token". To get that token on Apify, I'll open a new Apify tab, go to API integrations, and create a new token — for example, "n8n asset generator" — and click create. Now all I do is copy it and come back here. Then I need to feed in one more thing: if I go back to my actor and click "JSON" here, I now have access to the input I fed in, in JSON notation.
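As a sketch, the pieces of that request look roughly like this. Nothing is sent here — the actor ID, token, and input are placeholders, and the endpoint path for "run synchronously and return dataset items" is reconstructed from the description above, so treat the exact URL as an assumption to confirm against the Apify API docs.

```python
import json

# Hypothetical values -- substitute your own actor ID and API token.
ACTOR_ID = "code_pioneer~apollo-scraper"   # placeholder actor ID
API_TOKEN = "apify_api_XXXX"               # placeholder token

# Endpoint shape for running an actor synchronously and returning its
# dataset items (assumption based on the transcript; verify in Apify docs).
url = f"https://api.apify.com/v2/acts/{ACTOR_ID}/run-sync-get-dataset-items"

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}

# The JSON input copied from the actor's input tab becomes the POST body.
body = json.dumps({"url": "https://app.apollo.io/#/people?...", "totalRecords": 500})

print(url)
```

When you import a curl command into n8n's HTTP request node, these are exactly the pieces it maps into fields for you: the URL, the headers, and the JSON body.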
And it looks like — I'm not entirely sure, but I think — in order to feed this in, I just need to paste that whole JSON string somewhere here. I don't actually know; we're going to find out. I think it's body content type JSON, "specify body using JSON", and I just paste this in. I don't know for sure; we're doing this live. Then I click execute step. So, what we did a moment ago was verify that we can get the data we want from LinkedIn on Apify, right? Now we want to verify that we can execute that Apify scraper inside of n8n and retrieve the data. The reason that's important to me is that I always like to verify, at every step, that what I'm doing is directionally correct. So I'll test every node, one at a time, until the end; assuming I can get this data, I can check off the first step in our roadmap. A quick and easy way to verify whether this is actually running — because it looks like it's running; I haven't received an error code — is to go back to Apify and look at "runs". You can see there's a run currently occurring. How cool is that? This is actually outputting email addresses and so on, which is crazy. This is the input I fed it, so that looks fine to me. And yeah, more or less, this is now executing; once it's done, we'll hopefully receive all of the data here. Okay, so I just pinned this and executed it, then renamed it as well. Taking a peek at the output, you see we get JSON with the first name, last name, LinkedIn — all the data that we need. To pin it, I selected the node and pressed P.
You can also right-click and choose pin. The reason I'm pinning is that it allows me to execute future runs much faster: I don't have to wait through that whole long process — which I think was four or five minutes for 500 items — ever again. Anyway, moving on: we've verified the first section that we wanted. Where's my roadmap? Right over here. We can get the LinkedIn data and we can feed it into a scraper (Apify). The next step is to scrape the company website, right? How am I going to do that? Let me move this over here so it's right by my notes and I can jump back and forth easily. Well, a quick and easy way is obviously the HTTP request node in n8n, just feeding in the website. So if I go to "website", you can see there's a website URL — and it looks like this is the same lead that I scraped when I did this before, so the Apollo search must just return the exact same results each time. Anyway, let me paste that in. What I want to do now is call the website with an HTTP request. If you're unfamiliar with what an HTTP request to a website is: take this website here, leftclick.ai — I can run this independently by hard-coding the URL. If I delete all of these and execute this one manually, what happens when I click "execute workflow" is that it outputs all of the HTML of my site. Now, this may not seem like it means anything to us, but check this out: "growth systems for B2B companies". Let me Command-F "growth systems". Do you see how the exact same text appears both on my website and inside the code of my website? All of this HTML is just my whole website — it's just written for machines, not people.
So all we need to do, if you think about it logically, is convert this from something machine-readable into something more human-readable — or I guess AI-readable, which is a machine; the lines are blurring every day, if you couldn't tell. The issue is: notice how I'm outputting 500 items here. Do you see where it says 500? If I were to just run this, I'd basically have to run 500 HTTP requests before I saw the final result. I don't want to do that. I want to run it one at a time, check the output, and verify that things are okay. I think a lot of beginners make this mistake: they try to run off of full data sets. Don't do that. Instead, let's think it through to start: some of these leads aren't going to have a website, right? So why don't we filter these and make sure we only operate on leads that have websites. Let's go down to "website" and say the website URL has to exist. And let's also say — I don't know — maybe I only want to do this for people that have email addresses, so the email address also has to exist. Now I'm going to execute this. Hold on, just going to delete this here, because I only want to run this last filter. Let me execute this and see how many of the 500 actually have both websites and emails. 247. Okay, I'm going to pin this. Now what I want to do is test, so I'm going to use a Limit node. I always use Limit nodes, and I'll do two. The reason I always do two is that if you do one, you sometimes can't fully figure out array logic — you need more than one item. But if I do many more than one, I'm just making a bunch of HTTP requests unnecessarily. Okay, I'm going to feed this in. And what's the output of this limit node?
It's the first two leads, basically. Okay — actually, maybe we should do the last two, so I'm not just regenerating the same thing I showed you earlier. Let's do that: unpin this, execute the workflow, and it'll output the last two. Okay, cool. Then I'll feed it into my HTTP request, pinning every node as I run it, one at a time. This is really valuable for me because, again, it lets me test really quickly. Now I want to feed in the website URL — just going to drag that in. I don't think I'm going to make any other changes; that's probably fine. Now I'll click execute workflow and let's see what we get. Give this a click. Yeah, let's display the data — this is a fair amount of data, so it doesn't want to display it unless it has to. But it looks like we're now scraping this website: showandtell.co.za, which is cool. Let's see what it looks like. This is really clean: they're a premium content partner based in Cape Town, South Africa. Wow, very cool — this is a fantastic example, so I'm glad we're scraping them. Cool. Now that we have the data, think about it: right now this is in HTML format, and it's really long. Do I want to load 2.2 megabytes of data every time I do this? Probably not. So now that we've scraped the company website, we have to feed it into AI, but it's just so big, and I don't really want to spend all that money on tokens. What I'm thinking I'll do is go to Markdown — there's an "HTML to Markdown" mode here — and just feed in the HTML. What this does — you see all the tags: the less-than symbol, exclamation point, doctype html, the slashes — is remove all of these with this Markdown node.
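Conceptually, the tag-stripping that conversion performs looks something like this crude sketch. This is not what n8n's Markdown node actually does internally (it produces real markdown, preserving headings and links); it just shows why the output shrinks from megabytes of HTML to readable text.

```python
import re

def html_to_text(html: str) -> str:
    """Crude illustration of HTML-to-text: drop tags, keep readable content.
    n8n's Markdown node does a proper conversion; this only shows the idea."""
    # Remove script/style blocks entirely, since their contents aren't prose.
    html = re.sub(r"<(script|style)[^>]*>.*?</\1>", "", html, flags=re.S | re.I)
    # Strip the remaining tags, leaving the text between them.
    text = re.sub(r"<[^>]+>", " ", html)
    # Collapse runs of whitespace so the output reads like plain text.
    return re.sub(r"\s+", " ", text).strip()

page = ("<html><body><h1>Growth systems for B2B companies</h1>"
        "<p>We build automations.</p></body></html>")
print(html_to_text(page))  # Growth systems for B2B companies We build automations.
```

The markup overhead is gone, and only the human-readable (and AI-readable) content remains to be sent to the model.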
And then it only outputs text in this format called markdown, which is a lot easier and simpler. This is no longer 2.2 megabytes; this is actually pretty simple. What's cool is I think there are email addresses on this page — yeah, so we could actually scrape the hell out of this ourselves if we wanted to. Okay, but anyway: what is the output of this? It's a bunch of data about the specific website, and now we have it in a format where we can feed it into AI and have it tell us something about the website, so we can customize the hell out of our outreach. Let me double-check: how far down the roadmap are we? Yeah, okay — it's time to feed into AI to summarize the leads. How are we going to do that? It's pretty simple. Now that we're done with all these damn HTTP requests, we just go to OpenAI and do "message a model". First we have to do our authorization, so give this a click. What you have to do is head over to the OpenAI API site and grab your API key. Now, I'm not going to share my API key with you, but I want you to know it's really easy, and n8n has awesome documentation for this — it actually shows you how to do it. You just go to the API keys page; you can see at the bottom left-hand corner my URL says platform.openai.com/api-keys. That's what you feed in here. You don't need the organization ID, at least not at the time of this recording. Once you make your connection, you have access to the OpenAI API. So: "Hello, how's it going?" Let's ask. I'm going to click execute step. Now, this is going to ask the model "Hello, how's it going?" — what's the issue here? I don't actually know what the issue is. Oh, sorry — we haven't picked a model. My bad; we've got to pick the model first. Let's do GPT-4.1; that's the current model I'm going to use. Now I'll execute the step
and we should get something telling us hello: "I'm doing well"; "Hello, I'm here and ready to help." So we ran this twice, for two items, and it gave us two different outputs, right? Now, if you think about it logically, if we want this output to be something relevant to us, we have to take all of that website data and give it to AI along with some instructions. So what I'm going to do next is write a quick and simple set of instructions you could use to have AI convert things into the format you want. Make sure to output content as JSON. There are a million ways to do this. What I'm going to do — at least I think — is basically say: "Hey, can you summarize what the website is about? Give me some unique information." From there, maybe I'll combine it with the profile history as well. Now, I always start with a system prompt that says "You are a helpful, intelligent assistant." I just do this because I think the model ends up smarter as a result. Next, I write a user prompt. User prompts are where you tell the model exactly what you want it to do. So let's say: "Your task is to take as input a bunch of unstructured information about the client's website and LinkedIn profile and convert it into a JavaScript Object Notation (JSON) output" — I may adjust this wording — "of the following format." Let's do one field called website context, one called person context, and one called — I don't know — unique angles. I'll just go with unique angles for now. What's the idea here? I want to take this massive string of both the LinkedIn profile data, which I'll show you in a second, and the website data, and turn it into something simple that I can feed into the newsletter generator, which comes after this. Okay.
So, I've also just written a bunch more instructions: you receive all the data you need in an unstructured string; you're asked to parse it out and turn it into the above object; go deep into detail for website context; write at least two paragraphs for person context, using all the data available; and for unique angles, use all the website and person information provided to create three interesting points that we could write about in a later article. Return any new lines as \n. Okay, that seems pretty simple. What I have to do now is obviously feed in the data. So I'm going to go to my user prompt and start adding variables. Let's start with the website scrape — that's just going to be this markdown data. Then let's add a personal data scrape, and here I'll start feeding in a bunch of information from the LinkedIn profile. Where am I going to get that? Probably from the limit node. So I'll add name here, and let's work our way down: title — that seems pretty valuable for the AI to have — headline, so maybe we know how we're addressing the person. Okay, and I just added a bunch more: email, state, city, country, whatever. That should now be everything we need to actually run this. So let's execute this and see what happens. Just double-checking my data is the same. Let's also set temperature, and let's go 0.6 — I just like a lower temperature in general. And let's execute this and see what happens. Okay, we're now feeding it the data from the markdown node. It's running, which is a great sign. Let's open OpenAI. Nice. Okay, great — let's see what we get. Website context: "Show and Tell Creative is a premium content partner based in Cape Town, South Africa, specializing in stills. The company positions itself as broader, impactful..." Wow.
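For reference, the JSON shape that prompt asks the model to return might look like this — the key names come from the build above, while the content is invented for illustration:

```json
{
  "website_context": "Show and Tell Creative is a premium content partner based in Cape Town, South Africa, specializing in stills...\n\nSecond paragraph of detail about their positioning and services...",
  "person_context": "The founder runs the agency solo and handles every aspect of production...",
  "unique_angles": [
    "Running a full-service content production agency as a solo founder",
    "How a boutique Cape Town agency competes with larger production houses",
    "Turning stills expertise into broader brand storytelling"
  ]
}
```

Note the literal `\n` sequences inside the strings — that's the "return any new lines as \n" instruction, which keeps the output valid JSON.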
This is really cool. It gives us all of the data — a ton of data, actually. It gives us some context about the specific person, and then it gives us unique angles. This is what I was most interested in, because I want to use these to generate the newsletter: the impact of running a full-service content production agency as a solo founder; how Kevin Sawyer manages every aspect; how he does this, how he does that. Okay, cool — this seems pretty valuable. You can see we did the same for another agency called Craft and Slate. These always have such interesting names, but then again, that's creative agencies for you. Now let's take this data and use it to generate something. I'm just going to duplicate this and paste it in. You'll have to bear with me here — my prompt engineering is a little rusty; I haven't designed a system in a couple of weeks. But I think what I'm going to do is have it generate a newsletter. So: "Your task is to take as input information about a website and a person and then return a customized newsletter" — maybe we'll say "a customized newsletter that will act as a lead magnet to get them to want to purchase my stuff." Let's just go with that. Okay, this looks pretty good to me. Now I want to give it an example of a newsletter. The main reason I picked this example is that I knew it'd be easy for me: I used to write a newsletter back in the day called The Cusp, where I basically had AI help me write this stuff way back in 2022 — a whole newsletter all about AI and automation and how AI is changing the economy. So what I'm going to do is copy my own newsletter format verbatim and paste it in. Let's go back here, paste it in — that looks pretty good — and I'm just going to use this as the format.
I'm going to say: return the entire newsletter in markdown, using this JSON. Okay, cool — we should now return it in title, subheading, and newsletter body format. I did some minor adjusting to this, but anyway, this is the prompt we're now giving it. So what do we want to do now? We just want to feed in that object we generated a moment ago: website context, person context, unique angles. So let's delete that placeholder and write website context, person context, and unique angles. Then I'll switch to expression mode and drag each one in: website context here, person context here, unique angles here. Awesome — now I'm feeding in all the data. Outputting content as JSON, temperature 0.6 again — that carried over because I duplicated the node, so all the settings are the same. I'm thinking we should rename these: this one will be "generate context", and this one will be "generate newsletter". That seems smart to me. And I have everything I need to test this again, so let's click execute workflow and see how it works. Cool, okay. Now let's take a look at the output. "Content that gets remembered: the Show and Tell Creative approach — inside the solo journey of Kevin Sawyer, and how Cape Town's boutique agency is reshaping media production." Then we have the newsletter body. Wow, this is great — well, I don't actually know how great it is, but we'll see in a second. Now, the rest of what I want to do here hinges on Google Docs, because Google Docs in n8n is kind of hard. To make a long story short, what I'm thinking is: we take this output and convert it into HTML, because HTML is the format Google Docs natively consumes, and then we try generating a Google Doc with it.
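Conceptually, the markdown-to-HTML step that comes next does something like this tiny hand-rolled converter. It covers only `#` headings and plain paragraphs — n8n's Markdown node handles far more (lists, links, emphasis, the generated IDs mentioned below) — so treat this purely as an illustration of the transformation, not its implementation.

```python
import re

def md_to_html(md: str) -> str:
    """Convert a small subset of markdown (# headings, paragraphs) to HTML."""
    html_lines = []
    # Markdown separates blocks with blank lines; handle each block in turn.
    for block in md.strip().split("\n\n"):
        heading = re.match(r"^(#{1,6})\s+(.*)$", block)
        if heading:
            level = len(heading.group(1))  # number of '#' = heading level
            html_lines.append(f"<h{level}>{heading.group(2)}</h{level}>")
        else:
            html_lines.append(f"<p>{block}</p>")
    return "\n".join(html_lines)

newsletter_md = "# Content That Gets Remembered\n\nInside the journey of a Cape Town boutique agency."
print(md_to_html(newsletter_md))
```

The output is plain HTML — `<h1>`, `<p>`, and so on — which is exactly the form Google Docs can ingest in the upload step that follows.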
This may require a little bit of finagling with the Google Docs API spec, but I'm confident I can make it work; I've done it before. Basically, the markdown stuff is cool, but I want it in a sexy format like this, right? And you can't get a sexy format like this through markdown; unfortunately, you have to do HTML. So what I'm going to do is add another Markdown node. It defaults to HTML to markdown; we're going to flip it to markdown to HTML and feed that in. This should give me a bunch of HTML I can use, which is what we want. Come on, little Markdown node, I believe in you. Okay, where is this data? As you guys can see, I'm using the JSON view. Okay, so here it is; it's in a data object. So H1 tags, and they even add some IDs, which is really interesting. Very cool. And now I want to do Google Docs. So: Google Docs, create a document, right over here. Now, I already have a Google Docs credential. If you guys don't, you just click create new credential, OAuth2. Then there's one additional step you have to take, which you might not have done already: you have to go to your Google Cloud Console and pump in the client ID and the client secret. If you've never done this before, n8n has awesome guides and breakdowns that will help you do it, and there are also a bunch of videos other people have posted going through this whole process. I've already created a Google Cloud Console project, so unless I make an entirely new one for a new Workspace account, it's not going to be the cleanest; I don't really want to make a new one every time. But what you do is create a Google Cloud Console project by logging in, selecting a new project, and adding a location. Once you've created it, you enable Google Docs as an API, request API access, and then get your OAuth token.
The really cool thing about n8n is they just walk you through how all this stuff works, which makes it significantly simpler for beginners. What I've done, obviously, is I've already created one. So now all I have to do is add some location. Then I want the title to be something really simple; let's add the person's name. This one's for Kevin, so maybe "newsletter for Kevin". That sounds pretty cool. Okay, so I now want to create these. I'm just going to pin this previous output and create it. And the really annoying thing you can't do natively in n8n's Google Docs node is, I don't believe, create documents with HTML. So we have to split this into two steps. We create the document first; now we have a document called "newsletter for Malcolm" and "newsletter for Kevin", and I'm just going to pin these. Then we actually have to update that document with HTML. If you've never done this before: add an HTTP Request node, and under authentication go predefined credential type. The credential type is the one you just created a moment ago, Google Docs OAuth2 API, and it'll automatically populate. Then we want to send headers; I think it's Content-Type: text/html, but I'm not entirely sure. And we need a very specific API endpoint, which I think is this one: https://www.googleapis.com/upload/drive/v3/files/{fileId}?uploadType=media. So I think this is the endpoint; we'll give it a try in a second. The last thing I have to do is set the method to PATCH. Then we've got to send the body: we'll go raw, and text/html, probably. Under body we feed in the HTML we just generated, which will be... oh, okay, right over here. And I think this is probably it.
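The HTTP Request node settings described above boil down to one request shape. Here's a hedged sketch of it as plain data; in n8n the OAuth token is injected by the predefined Google Docs credential, so there's no auth header to build yourself, and the exact document ID comes from the previous "create document" node.

```python
def build_drive_update_request(document_id: str, html_body: str) -> dict:
    """Sketch of the HTTP Request node that overwrites the freshly created
    Google Doc with HTML via the Drive media-upload endpoint."""
    return {
        "method": "PATCH",
        "url": (
            "https://www.googleapis.com/upload/drive/v3/files/"
            f"{document_id}?uploadType=media"
        ),
        # telling Drive the uploaded bytes are HTML is what makes it
        # convert them into native Doc content
        "headers": {"Content-Type": "text/html"},
        "body": html_body,
    }

req = build_drive_update_request("abc123", "<h1>Newsletter for Kevin</h1>")
print(req["method"], req["url"])
```

The design point: the Docs API itself has no "create from HTML" call, but the Drive upload endpoint will convert HTML into Doc content, which is why the workflow splits create-then-update into two steps.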
I'm not entirely sure, but let's just give it a go. Screw it. Okay, so I got data back; don't know if it's right. So what am I going to do? I'll just go to Google Docs and see if there's a newsletter. Oh, there is. So let's watch the one for Kevin. Ah, nice. And now we have our newsletter. Beautiful. Okay, let's actually read some of this. "Welcome to The Insider's Lens", that's what we're calling the newsletter, "a fresh look at how impactful content is made and how the people behind the camera are redefining what it means to be seen and remembered. In this issue: Running a Full-Service Content Agency as a Solo Founder; Kevin's Playbook: The Art and Science of Unforgettable Content; and Navigating the South African Media Scene. Meet Kevin, the founder and driving force behind Show and Tell Creative, based in Cape Town. Kevin wears every hat: creative director, producer, client liaison, and post-production specialist." Do you guys notice how I knew none of this when generating the newsletter? None of this data was known to me. We just pumped it all into the system, way too much data, I should say, and let AI figure it all out. And now AI actually has all this information: the fact that Kevin is a creative director, producer, client liaison, post-production specialist, end-to-end ownership, right? You could see this being pretty valuable even as an internal newsletter. We even have the email address; we have everything. So, obviously, there are some ways you could make this sexier. We could add some spacing, add images, add links; as you can see, it already hyperlinked the email. You can take this in a million different directions. My goal was just to show you guys how simple it is to get up and running with an actual asset generator.
Okay, so what I want to do now is this: if you wanted this to run completely autonomously, it'd probably be difficult without adding some waits. You see this limit node here; we're just doing two at a time. What you can do instead, to make it run a lot more autonomously, is use a Loop Over Items node and set the batch size, like this. (Ignore the "replace me" placeholder loop; that's not the finished thing.) What the loop-over-batches node does is, instead of running all of these simultaneously like we are right now, it runs them one at a time, and then you can add a wait. Let's say for every person you wait 5 seconds and then loop back. This is a pretty simple and easy way I've seen people get around rate limits and whatnot. So maybe I'm going to add that in. The thing is, once the loop is done, you'd normally attach something to the top route; I'm just not going to attach anything there and just use the loop-over-batches node to give me some peace of mind. Then I'm going to increase the limit here to, let's say, 10 items, unpin this, and then I think we can just execute the workflow. Yeah, it looks good. And now we're waiting, and we're going to do the same thing over and over and over again. Oh, I'm realizing now I've pinned all of these, so it's just generating the same thing one more time. Let's undo this; unpin. We're going to have a bunch of data issues because I don't think it's actually been filled in. Okay, there we go. Now it's actually generating the context. And notice it's going to do a couple of API calls: two API calls to OpenAI, this one here and this one here. Then we do an API call here to the website.
Well, I guess that one's just an HTTP request. So it's going to be two calls to OpenAI, two to Google Docs, and then we wait 5 seconds. I think the 5-second wait gives us enough time to basically never have to worry about hitting rate limits, though obviously it depends on the frequency you hit the APIs and also your tier. The website one you should be totally fine on, because it's just an HTTP request. And then we just cycle over and over again: wait 5 seconds and do it until the end of time. You can increase the batch size however you want; I've just done one here, but you could do two. And then, yeah, let's see what this next one was: "Newsletter for Adam". Let's go back to my other account and see this one. "The Bright Age Advantage: Creative Campaigns, Measurable Results. Inside the agency where data meets design and clients come first." Now, if I were to actually make one of these, as in actually send it to clients, I would probably ask the model to write in a slightly less sycophantic tone. The way it's written right now is sort of, "hey, look how great our agency is." Obviously it depends on the agency; there are a lot of big PR companies that actually want to write that way, like, "here's what's going on at the Bright Age dispatch", a weekly newsletter talking about what the company's up to. In my case, I think it's better to write a newsletter that's more like, "here's a bunch of informational value you can actually use." But if you wanted to adjust that, you'd just adjust the prompt we used. And if you wanted to change the output format, say, instead of a Google Doc newsletter you did some sort of slide deck, well, instead of doing the Google Doc generation, what do you do instead?
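The batch-and-wait pattern described above can be sketched in a few lines. This is only an illustration of what the Loop Over Items node plus a Wait node accomplish inside n8n; the function name and parameters are my own.

```python
import time

def run_in_batches(items, process, batch_size=1, wait_seconds=5):
    """Process items batch by batch, pausing between batches, the way a
    Loop Over Items node feeding into a Wait node loops through leads."""
    results = []
    for i in range(0, len(items), batch_size):
        for item in items[i:i + batch_size]:
            results.append(process(item))
        if i + batch_size < len(items):
            # breathing room so the OpenAI / Google API calls in each pass
            # stay comfortably under rate limits
            time.sleep(wait_seconds)
    return results

# e.g. generating one newsletter per lead, one at a time
out = run_in_batches(["Kevin", "Malcolm", "Adam"],
                     lambda name: f"newsletter for {name}",
                     batch_size=1, wait_seconds=0)
print(out)
```

The trade-off is straightforward: a bigger batch size finishes faster but burns through your per-minute quota sooner; batch size 1 with a 5-second wait is the conservative default.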
You just do Google Slides generation instead: create a template with variables in it and have it automatically fill that in. I basically tried to take as simple an approach as humanly possible here with just website context and person context, but I want you guys to know you can scale this up however you want: website context, company context, lead or deal context, person context, boss context, subordinate context. You can get so much information about anybody you want inside a company using this sort of approach, and then weave it all together into one huge asset. Hell, maybe you're generating org charts; I don't know. Okay, so hopefully everything I've said makes sense, and hopefully you guys see how this works. I'm obviously going to be including the blueprint, or I guess the n8n template, down below so you guys have that as well. I really had fun putting this system together for you, and I'm looking forward to seeing all the cool things you guys generate with it. What
AI Custom Proposal Generator
we're going to be doing next is building an AI proposal generation system that creates professional proposals on demand during sales calls. The system takes basic client information from a form, then generates fully customized proposals with problem statements, solutions, timelines, and pricing, all in real time, and they come across as very personalized. Automation agencies typically charge $1,500 to $5,000 for this system because it dramatically improves close rates and makes them look incredibly professional. Let's dive in. So, from a bird's-eye view, the system is going to look like this. We start by filling out a form; this is going to be called our trigger. I'm doing this on a whiteboard here because I want to be able to express my thoughts a little better and play around with some ideas. I'm going to build this with you guys as if I were a builder, not necessarily a teacher, so I'll show you the various detours I might go down and my own thought process as I put a system like this together. The emphasis of this channel is learning by doing, so that's what I want to do here. We start with this form-fill trigger. From there, we use AI to generate JSON; if you remember, JSON stands for JavaScript Object Notation. There are going to be a lot of these JSON fields, but just to give you a quick example of what one might look like: proposalTitle, which we'd obviously use to fill in the proposal title segment of our proposal template. We might have, I don't know, problemStatement. (I'm using camelCase here, which is why every word after the first in a variable name is capitalized; feel free to call them whatever the hell you want.) We'll do stuff like cost, where we might need to do some really quick formatting.
AI is just a quick and easy way to add commas in the right place, dollar signs, that sort of stuff. And then we might do things like timeline. I'll show you how all this works in a moment, but the important thing I'm putting across to you is: we're going to grab data from a form, and that's going to be a simple, dumb form. It's AI that converts this simple, dumb data into the hyper-personalized stuff that makes it seem as if you wrote it yourself, you know, and then sent it within a few seconds. That's really where the value is, and that's what I'm going to show you guys how to do. After that, we do API calls or built-in nodes, whatever it ends up being. I'll show you how to do this with Google Slides to start, and then afterwards I'll show you how to do it using a platform called PandaDoc, which I like to use for basically every business I work with. It's the business proposal platform I recommend any time I start doing consulting or automation for a new company. The value of showing you both is that I want to give everybody here a free option for this system, but I also want to show you what it can look like if you pour gasoline on it. PandaDoc's great because you can send an invoice alongside your document, which is just super valuable: you cut the proposal, agreement, invoice sequence, all that bureaucratic jumble, down to one step where you send a proposal that includes a built-in agreement and a built-in invoice. And then finally, we're going to weave it all together in n8n; I'll be doing all the building in n8n here. Let me just show you guys where I'm at right now; I was just verifying that some of these API endpoints connected and worked.
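The JSON fields sketched on the whiteboard (proposalTitle, problemStatement, cost, timeline) would come back from the AI step as one object, roughly like this. The exact field list and values are illustrative; they depend entirely on your proposal template.

```python
import json

# Illustrative shape of the JSON object the AI step is asked to return;
# one camelCase key per placeholder in the proposal template.
proposal_fields = {
    "proposalTitle": "Lead Gen System for 1SecondCopy",
    "problemStatement": "Referral-only growth is not scalable or reliable.",
    "cost": "$3,690",           # AI handles dollar signs and comma placement
    "timeline": "February 8 to April 1, 2025",
}
print(json.dumps(proposal_fields, indent=2))
```

Keeping everything in one flat object is deliberate: the downstream replace-text step can then map each key straight onto a template placeholder without any restructuring.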
But yeah, let's get started. I'm just going to call this thing "AI proposal generator system". I've done this build multiple times across various no-code platforms; I did this same build in Make.com for people who have been with me since the Make.com days. This is a super high-ROI system, but I want to impress upon you right now that it's not necessarily a complicated system. One big trend I see a lot of people do on YouTube nowadays is putting together these extraordinarily complex-looking things. Their system will kind of look like this: there'll be some start node here, then maybe some AI agent node, and then there'll be, like, 5 million sub-agents, and every one of those sub-agents will call another 5 million sub-agents, and so on and so forth until a meteor comes and obliterates us for the second time. These systems look really pretty, but I'll be real: most of the time they don't actually drive a business outcome, because they're a little too flexible. Most of the time, if you want to drive money using no-code platforms, you have to be a bit more rigid, and this, at least in my experience, is the perfect mixture of the two. Okay. For now I'm just going to add a manual trigger node, because obviously we want to be able to test this; I'll call it "test". We have a couple of options here. I'm going to start with the Google Slides approach, but we'll quickly segue into PandaDoc. Let me give you a quick example of what a good Google Slides template might be. There's this one over here called "your big idea made to stick", so I'm going to take a quick peek at that. Basically, I'm going to touch up a template like this, probably this exact one to be honest, and make a few changes to it.
And then what I'm going to do is replace this stuff with variables. If we jump into Google Slides in n8n, there's this one operation called "replace text in a presentation", and we're going to use that to replace the various text fields. We want the placeholders to be very unique, so I'm just going to wrap them in double brackets, for convenience purposes, and those are the variables I'll be replacing with text. So yeah, I have the proposal template over here. Let me touch it up really quickly and then I'll show you guys my idea and what this is going to look like. Okay, great, just touched it up. Let me give you an example of what a proposal like this might look like. You can see they're pretty high quality; they look pretty sexy. I wrote one for my own content writing company, 1SecondCopy. So this is a hypothetical proposal for a lead gen system for 1SecondCopy. As you can see, we're going to customize the hell out of it; all of this is going to be AI-generated. There's going to be nothing here that's human-written aside from some templated bits. I'll show you in a second, but: "Lead gen system for 1SecondCopy: a simple, scalable lead generation system built to help grow your content efforts and connect you with the right people. The problem: right now, 1SecondCopy is struggling with an inability to generate qualified leads. The majority of our new clients are referral-based, which, while always nice to have, is not scalable, nor is it reliable. Building an alternative strategy, one that allows you to take leads from cold to close, is vital to the health and longevity of the company, and it's what we're going to help you with. The solution: after thinking deeply on things, here's what we've come up with. Number one, cold outbound lead gen: we'll put in place a robust cold-email-based system for you based on best practices."
Let me just change that: "using best practices". "Client reactivation system: we'll build a simple but high-ROI reactivation system to let you extract value from pre-existing clients. Best-in-class sales training: we'll train your team with world-class setting and closing mechanisms." This is pretty similar, honestly, to a proposal I would send when I'm selling cold email; obviously I do it in PandaDoc because I can collect payment. But I'm not going to touch on everything for you. Just note that there's a scope section where we discuss specifically what the client's going to get, and a timeline section over here where they can actually see the progress of the project. These are all going to be AI-generated. There's even a little cost section over here, as well as a thank-you page and, you know, next steps, instructions, kickoffs, and so on. So, what if I told you that by the end of this video, we'll be able to generate this whole thing in like 5 seconds? We'll fill out a quick form, maybe 30 seconds of questions and a couple of bullet points, and at the end of it, we'll have this whole thing basically good to go. The value here is you could literally whip up a proposal while you're on the phone with somebody, and before you're even done with the call, you can send it to them. It'll look extraordinarily smooth, sleek, and really high-end. It's a great closing mechanism, but moreover, it's just a great way to learn n8n, I'd say. So this is what it looks like actually instantiated. As you see, I've used a bunch of variables here: proposal title, description name, one-paragraph problem summary, solution heading one, solution heading two, solution heading three, short scope title one, short scope title two, short scope title three. Basically, what we're going to have to do is replace all of these.
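The replace-all-the-variables step can be pictured as a simple substitution pass. Google Slides' replace-text operation does this for you inside the presentation; this local sketch (my own function, not an n8n or Google API) just shows the mechanic and why unique double-bracket placeholders matter.

```python
def fill_template(text: str, variables: dict) -> str:
    """Substitute {{name}} placeholders, the way the Google Slides
    'replace text in a presentation' operation does across every slide.
    Double brackets make accidental matches against normal prose unlikely."""
    for name, value in variables.items():
        text = text.replace("{{" + name + "}}", value)
    return text

slide = "{{proposalTitle}} | prepared for {{descriptionName}}"
print(fill_template(slide, {
    "proposalTitle": "Lead Gen System",
    "descriptionName": "1SecondCopy",
}))
# → Lead Gen System | prepared for 1SecondCopy
```

In the actual workflow you'd issue one replace-text call per variable rather than editing a string, but the collision logic is the same: a bare word like "cost" could match real sentence text, while "{{cost}}" essentially never will.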
And I just realized this says 2024, uh, I don't know, 2014 here; I'm just going to go 2025 for simplicity. But, you know, we're basically just going to have to replace everything within these little double brackets. And I'm using double brackets here because they're pretty unique, right? This is that Google Slides node from a moment ago. If we wanted to replace text in one of these, what's the likelihood we'd run into what's called a collision, where we're replacing text we didn't want to? Pretty dang low, right? Okay, so here's the example proposal; this is what I'm going to be using. Let me change the name to make it even more immediately obvious: "example proposal template". And let me zoom out here and make sure we're on the same page. I'm going to call this node "replace text". You can see it's already called that, but I personally like to rename every module whenever I'm doing a live build, because it reminds me what the flow and the sequence are. Otherwise I end up with 30 or 40 nodes, and it's "Google Slides 1", "Google Slides 2", "Google Slides 3", and it just gets really annoying and complicated. So the first thing we've got to do is connect our Google Slides account. I'm going to head over to create new credential. It shows an OAuth redirect URL in my case, because I'm using the cloud-hosted offering of n8n, but if you're unsure how to do this, just open the docs; they'll open a page over here, and then go down to Slides (just Command-F find it). Basically, you have to create a Google Cloud Console account, go to APIs and Services, then Library, and copy and paste a scope code or something like that. So, I'm just going to go over to my other email address over here.
And basically, in order to do this, I need to go and create a new project. I'm going to call this "n8n"; actually, let's just call it "YouTube". Create. As you can see, I already have some OAuth2 client IDs, but for my new project I'll have to create new credentials. I believe it's a web application, at least as of the time of this recording. Call it "n8n for YouTube". Now, you see where it says authorized redirect URIs? We're going to fill that in with information from n8n. In n8n it says "OAuth redirect URL", so we copy that, go down to authorized redirect URIs, paste it in, and click the button. Okay, great. So now we have the OAuth client, "n8n for YouTube", I should say. What do we have here? A client ID and a client secret. I'm going to copy the client ID, go back, and paste it where it says client ID; then copy the client secret and paste it where it says client secret. Now it'll ask me to sign in with Google, so I'll open up a new little tab here, allow n8n to do all this fun stuff, and the window can now be closed. Beautiful, we are now connected. I'm just going to rename this credential to "YouTube" so that I know later, when I'm building, that this is a YouTube credential. And voilà, we are now connected; we're basically good to go. The next step is to feed in what's called the presentation ID. The presentation ID in Google Slides is always the long string after /d/ and before /edit in the URL. So I'm just going to double-click on it, copy it, then paste it in here. Okay. Now, any time I'm building a new flow, just thinking out loud here, I always want to test and make sure that the node I'm operating on does what I expect it to do. I expect this node to replace some text, but why don't I actually make sure?
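On the presentation ID: the "long string after /d/ and before /edit" rule can be captured in one regex, which is handy if you ever want a Code node to pull the ID out of a pasted URL automatically instead of copying it by hand. (The URL below is a made-up example.)

```python
import re

def presentation_id_from_url(url: str) -> str:
    """Pull the presentation ID out of a Google Slides URL:
    the segment after /d/ and before the next slash (usually /edit)."""
    m = re.search(r"/d/([^/]+)/", url)
    if not m:
        raise ValueError("no /d/<id>/ segment found in URL")
    return m.group(1)

url = "https://docs.google.com/presentation/d/1AbCdEfGhIjK/edit#slide=id.p"
print(presentation_id_from_url(url))  # → 1AbCdEfGhIjK
```
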
There's a match case option, a page-names-or-IDs option, a replace text field. I don't actually know how any of this works, hypothetically. So why don't I just try feeding in one of these variables, clicking this button, and replacing it with something? Let's click test step. And it looks like something happened. If I go back to my example proposal... did I change anything? No, it doesn't look like it. So, what happened here? Well, clearly there's some sort of gap between what I wanted to do and what ultimately ended up occurring. Let me refresh this puppy. I don't actually know why that didn't happen, so I'm going to do a little bit of debugging right now. This should just say "hey"... oh, actually, my bad: I've mixed these two fields up. The text to replace should go over here, and then the replacement should be "hey". Okay, great. Let's test the step. Let's head back over here. Well, voilà, looks like I figured out the problem. As you guys can see, anybody who tries to build things on a no-code tool will inevitably run into issues; what's important is that you maintain a good attitude. So it seems pretty simple; we have a pretty good pattern here. All I'm going to be doing is pasting in the variables and then replacing the text, right? Cool. So I've now verified that the main function of my flow, the text replacement, works. In my head I'm like, okay, let's actually work backward. Now that I've verified I can do the thing at the end, which is the important thing, I can create a template. Logically, the next thing to do, working backwards, is to generate all of the AI stuff, right? That's what I'm going to do in the middle section.
So, what I'm going to do first is look up OpenAI, and the specific node I want is just "Message a Model" under text actions. Now, under credential, if you haven't connected a credential before, you have to click create new credential. Then you have to go over to OpenAI's API platform: create an account if you haven't already, open your API keys page, and create a new secret key. You'll need to add a name; I already have "YouTube", but I'm going to go "YouTube n8n" and add "Feb 4". It's important for me to show you guys how to actually create these keys, right? Then we just copy the key, head back over to the credential, and paste it in under API key. You no longer need to paste in the organization ID; you used to have to, but not anymore. I'll call this "YouTube Feb 4" and save. And voilà, we now have our second connection. Open the node: the resource we're asking for is text, the operation is message a model, and then we have to choose the model. My recommendation for you, at the time of this recording (February the 4th, 2025), would be GPT-4o. It just happens to be the best combination of cost-effectiveness and quality; if you use something dumber than this, I find the quality of the writing won't be anywhere near as good. Okay, now let's take a couple of steps back. What I wanted to do is basically this, but obviously we're going to have to feed it some information on how to do it, right? Keep in mind I haven't actually created the form I'll be filling out yet, but I want the model to be able to generate a bunch of this text, and I want it to do so in my tone of voice, and so on and so forth.
You know, logically, the simplest way I can get it to produce the stuff I want is by giving it an example of me producing exactly what I want it to produce. And because I had the foresight to actually write out a full proposal, I already have all of the data I need to train this model, or "in-context train" it, as it's called, which is where you just provide a bunch of examples in the prompt. So, basically, I'm going to have it output stuff in JSON: I'll use this as the variable name and this as the value in a training example, and then I'll just ask it to do the same again. We're going to build the prompt that way, pretty intelligently. And at the end, instead of it being super variable and having its own opinions on stuff, answering us with "I'd love to help you", it's not going to be an agent per se. What this model is going to be is almost like an API endpoint that we call, some service that we request, and it just sends us back the data for a beautifully formatted proposal. So let me show you guys how to actually do that in practice. The first thing we want to do is create a system prompt, so I'll say: "You are a helpful, intelligent writing assistant." This is just how the model identifies, for the most part. I'm going to want to output the content as JSON, and then click add message. Next up, we add a user prompt. The user prompt is basically where we say, "Hey, here's what I want you to do," and that's all the first user prompt does. After that, we add a second user prompt where we give it an example of one input; then we have an assistant prompt come and give us an example of one output.
And then finally, we feed in our real live data and ask it to actually create something for us. So let me show you guys what this looks like in practice. I haven't pre-written any of this; I'm going to show you exactly how I'd write it if I were in this situation. The first thing I want to do is give it the instructions: "Your task is to generate a proposal using input data from a form. This proposal should be highly customized, specific, and high quality, considering we're going to be sending it immediately after you are done. The proposal template we're using has many fields. You must return these fields in one JSON object. Use this format:" Okay. Now I'm going to give it a big list of all of the fields I want. The first one is proposal title, so let's go back here: proposalTitle. The next one is going to be description name, so descriptionName. And again, this is exactly what I would do if I were actually building this out. Next would be oneParagraphProblemSummary. Solution heading two goes like this; then solutionDescription; and we'll also do milestoneDescriptionOne. Okay, so we now have all of the fields in our JSON template. Beautiful. We've given it quite a lot. Just to make my life a little simpler, I'm going to go over to jsonformatter.org, paste this in, and format it; this way it's a lot easier for me to keep track of. Then I paste it back with this nice new-line format. The value in this is that it's a lot simpler for me to see at a glance, and it's a lot more maintainable. And the next thing I always do, or started always doing about six months ago, is provide the model a list of rules. So: "Use a Spartan, casual tone of voice."
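Put together, the message sequence described above (system identity, task instructions, one example input/output pair, then the live form data) looks roughly like this. The prompt strings are placeholders; the real ones are the instructions and proposal data being written in the node.

```python
def build_few_shot_messages(instructions, example_input, example_output, live_input):
    """Sketch of the in-context-training message layout: the example
    user/assistant pair shows the model exactly what one finished
    proposal JSON looks like before it sees the real submission."""
    return [
        {"role": "system",
         "content": "You are a helpful, intelligent writing assistant. "
                    "Output the content as JSON."},
        {"role": "user", "content": instructions},
        {"role": "user", "content": example_input},        # example form data
        {"role": "assistant", "content": example_output},  # example proposal JSON
        {"role": "user", "content": live_input},           # the real form submission
    ]

msgs = build_few_shot_messages(
    "Generate a proposal using input data from a form...",
    "Company: 1SecondCopy\nProblem: referral-only growth...",
    '{"proposalTitle": "Lead Gen System for 1SecondCopy", "...": "..."}',
    "Company: Acme Agency\nProblem: no outbound pipeline...",
)
print([m["role"] for m in msgs])
# → ['system', 'user', 'user', 'assistant', 'user']
```

This is why the node behaves "almost like an API endpoint": with a fixed role structure and one worked example, the model returns the same JSON shape every time instead of chatting.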
Use a Spartan, casual tone of voice. Be to the point and professional, but not stuffy. Assume you're writing to a sophisticated audience. There we go. Okay, great. So these are going to be my instructions: your task is to generate a proposal using input data from a form. This proposal should be highly customized, specific, and high quality, considering we're going to be sending it immediately after you're done. The proposal template we're using has many fields. You must return these fields in one JSON object. Use this format. This looks very clean. Finally, I'm going to say: ensure that all fields are filled out. Do not miss a field or leave any variables empty. Cool. So that's our main user prompt. What we're going to do now is feed in an example of the form data and then the output. I'm actually lucky: I already have an example of the output, it's right over here. I just need to go back in and copy and paste all the output from the real proposal, so I'm going to do that really quickly. Okay, I just gave it a quick example of all of the data from that finished product. Now we have to think a little and figure out what fields we want on our form, because the way this is going to work is we're going to trigger this off of a form input. I'm just going to use an n8n form for now, but you can use really any form that has a webhook. And notice that I'm always starting with the end result here: I always start with the proposal and figure out what information I need, moving backwards from that, because I care more about what the customer sees than anything else. That's what I think a lot of people mess up.
They start with the data they think would be nice to have and ask, "Okay, what can I do with this data?" No, don't do that. Start with the end product, the thing you know is going to make a customer want to buy from you, work your way backward from that, and then ask yourself: okay, what do I need to ask the customer? So here are some things I need to ask the customer. Obviously I need the company name, right? Duh. So I'm going to go: company name. What other information do we need? We need something like a project description, but probably the simplest way to frame it is as a problem and then a solution. So I'm going to have the form include a problem statement, basically "Hey, what's the problem they're suffering from?", and then "What are the solutions you're going to pitch?", which we'll do as bullet points: cold email lead gen, client reactivation system, best-in-class sales training. Easy. Then: what sort of line-item scope are we talking about? And then how soon, basically, which will just take the date and work things out from there. We'll also have a little cost field. I'll use cost as a string for now, but feel free to make it a number if you have formatting requirements. So this is the data we're going to be feeding in with the form input: the company name, problem, solution, scope, how soon, and the cost. For the example, let's just use a deposit cost and multiply that by two to get the total cost. We'll go 1,845; multiplied by two, that's what, 3,690? Something like that. How soon? Well, let's see what I put in my training example: February 8th to April 1st, 2025. So that's two months. Okay, we'll go two months.
And I'm going to write in lowercase, without formatting, very dumb and simple, because I want to mirror what I think I, and the sales team that's going to be using this form, would actually type. What's the scope? I'll show you exactly how I do this: 1k-per-day cold email infra, 30k leads, and four weekly Zoom sessions for sales training. That's what they're going to get. As you can imagine, this is totally something you could really quickly scribble as notes during a call, right? The prospect says something and you go, "Okay, yeah, we're going to get this done." How long did this take to type? It's like 50 characters or something; you could realistically type that in ten seconds while you're talking to the customer. From there, let's think about the solution: cold email lead gen system, client reactivation system, and best-in-class sales training for closing. And the problem they suffer from: they can't generate leads; everything is referral-based right now. Cool, done. So this is the input I'm going to feed the model, and then I'm going to say: hey, if I feed you an input like this, I want you to give me an output that looks like this. Now that we have that relationship in place, I can feed in an actual, real piece of data. I'll fill all these variables in right now with an example that's a little bit different. So instead of "they can't generate leads," let's say they're struggling to make YouTube videos, and everything is really time-intensive right now, mostly because they don't have scripts. For the solution, let's do an AI script system.
An AI script-writing system that scrapes competitor YouTube vids for ideas, rephrases the best-performing titles, and then writes outlines. The scope: you get a form you can fill out with competitors that adds them to a DB; once per day the DB is scraped, and you get however many people are in the sheet, times however many of them posted videos, worth of outlines, max 200 per month. How soon? Let's say two weeks. And let's make the deposit cost $3,525. Okay, so now I have everything I need to actually test this on my little example here. I'm just using this as a training example, but let's run it through and see what happens. This is pretty intensive, we've got a lot of variables here and we want to make sure it doesn't screw up, so it's going to take its sweet time for sure. You could also add things like frequency penalties, presence penalties, and so on if you wanted to be a little more particular about the output. But basically, I'm going to take all the variables from here and feed them into our Google Slide. There's one more thing I believe we're going to have to do. Think about it: when I ran this Google Slide through the first time, I replaced the proposal title placeholder with the word "hey," right? We can't keep doing that, because this is our one proposal template. So realistically, before we do this, we're going to have to generate a new copy of the proposal template every time. That's probably pretty easy to do, and we're going to walk through it together. Okay, great, we just got the execution, so let's jump in and see what's going on. I'm going to move over to JSON view because it's a lot easier to make sure we have everything formatted correctly. Looks good; we do have all the variables: "YouTube content efficiency boost for LeftClick."
One thing I'm seeing here is that we're going to need to provide it some context about who we are as a business: "We are an automation/no-code agency that develops systems revolving around growth, revenue ops, etc." There you go. That'll make it a little simpler and probably a little more accurate. But okay, cool, this looks... oh, one more thing: we need to feed in the current date, right? Because the model's own idea of the date, say November 20th, 2023, doesn't really help us. So why don't we do that one more time? We'll go over here and add the current date, because "how soon: 2 months" doesn't provide any context on its own. I'm going to go Feb 4, 2025, then "2 months." Then over here, same thing: current date Feb 4, 2025, how soon: 2 weeks, deposit cost $3,525. This should fix the milestone stuff. Let's test it out. I hope you guys see the value in me doing this live; this is much more similar to what your actual build process will look like. You're not going to get all of this in one shot. It's not like you'll know exactly which fields you need, map it out perfectly, send it, and see perfect results. Realistically, you'll have back and forth where you test the output, notice it's missing some context, fix it, and go again. Okay, great. Yeah, this looks pretty solid to me; I'm not seeing any major issues. So I could just feed this forward: we could pin this output and feed it into the replace node, right? But think about it: if I do that, I'm going to be replacing text in the main template, and I don't want that. What I want is to replace text in a copy of the template.
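Feeding in the current date is what lets the model turn a relative answer like "2 months" into concrete milestone dates. A rough sketch of that arithmetic, using an assumed 30-day month as a stand-in (the model's own date math won't be this exact):

```python
from datetime import date, timedelta

def milestone_window(current: date, months: int) -> tuple[date, date]:
    """Approximate a project window from the current date and a
    'how soon' duration, treating a month as 30 days."""
    return current, current + timedelta(days=30 * months)

start, end = milestone_window(date(2025, 2, 4), 2)  # "2 months" from Feb 4, 2025
```

Without the `current` anchor, "2 months" is meaningless to the model, which is exactly why the milestone fields came out wrong on the first run.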
So, logically, we should copy the template. I didn't see anything in the Google Slides node where we could just generate a new copy, so I imagine we can do this with Google Drive instead. Yep: a "copy a file" operation. So I'm going to use the copy-a-file operation to duplicate the proposal template, and then I'm going to update the new copy instead of the original. In order to connect to this, you have to create a new credential. You just go through the same flow as before: go into your Google Cloud console account, create credentials, add a credential for n8n, grab the client ID and client secret, and put in the redirect URL. I already have one, so I'm just going to exit out of that and use my own Google Drive account. What I want is: resource, file; operation, copy. And if you think about it, what I want is that proposal template, right? I could select it manually, or I could just paste in the ID, which is what I prefer; it's a little easier. Okay, great. For the file name, I'm going to feed in the proposal title, and for now we'll copy it into the same folder. Let me see if there are any cool options. "Copy requires writer permission", that's pretty interesting, but I don't think I'm going to use it; I just want them to have all of it. Then I'm going to test this. So we've just created a new proposal called "Automated YouTube Script System for LeftClick." Nice. What I'm going to do here is pin this, then replace the text, but using the ID from that copy. So that's my presentation ID, and now I should be replacing the new one. And if you think about it, what we just did means we now have a new ID.
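Under the hood, n8n's Copy a File operation maps onto the Drive v3 `files.copy` call. A minimal sketch of the request shape it sends (the IDs and the helper function are hypothetical; credentials and the actual HTTP call are omitted):

```python
def build_copy_request(template_id: str, new_name: str) -> dict:
    """Shape of a Drive v3 files.copy request: duplicate the template
    file and give the copy a new name (here, the proposal title)."""
    return {
        "fileId": template_id,       # ID of the proposal template
        "body": {"name": new_name},  # name for the fresh copy
    }

req = build_copy_request("TEMPLATE_ID_123",
                         "Automated YouTube Script System for LeftClick")
```

The response to this call includes the new file's ID, which is what the replace-text step needs so edits land on the copy rather than the master template.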
So I can go and check it against my main example, the proposal template, right? Let me paste in the ID of the new one we just generated. As you can see, it's a duplicate of the same deck; the only difference is it has "Automated YouTube Script System for LeftClick" written up here. Okay. Now I'm going to go back and basically enumerate across the variable names and replace each one with the text from the OpenAI node. A quick example of what I'll be doing: I'll go "proposal title" and then feed in the proposal title value here, right? Not exactly rocket science, but it's going to be a little annoying because we have to go through and do this a bunch of times. For now, I'm going to feed in the cost as 1850, hypothetically, because we hardcoded some variables in there earlier. Okay, I just finished mapping all of them, so we should be good to go. I'm going to click test step and see what happens. We have a bunch of occurrences changed, one each; the only difference is the last one, cost, which says two. That seems reasonable to me. And where would it be? Right over here. Okay: "Automated YouTube Script System for LeftClick. Streamline YouTube Content Production With AI-Powered Script Writing." That looks reasonable, but I don't really like that it's all in title case, to be honest, so I'm probably going to tell the model not to do that. That looks good. These descriptions are looking a little too long, so I'm going to go in and tell it that descriptions should be shorter, aiming for two lines or so. Actually, hold on a second. Did I change the size of these? I feel like I changed the size of these.
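That enumerate-and-replace step corresponds to the Slides API's `batchUpdate` with one `replaceAllText` request per variable. A sketch of how those requests could be generated from the model's JSON output; the `{{name}}` placeholder syntax is an assumption (use whatever marker your template actually contains):

```python
def build_replace_requests(fields: dict) -> list[dict]:
    """Build Slides batchUpdate replaceAllText requests, one per
    template variable, assuming {{name}} placeholders in the deck."""
    return [
        {"replaceAllText": {
            "containsText": {"text": "{{" + name + "}}", "matchCase": True},
            "replaceText": str(value),
        }}
        for name, value in fields.items()
    ]

reqs = build_replace_requests({"proposal_title": "AI Script System",
                               "cost": "1850"})
```

Generating the list programmatically like this is also why "cost says two occurrences" is unremarkable: `replaceAllText` swaps every match in the deck, so a variable used twice reports two.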
You know, I might have just used a slightly larger font size for the template, to be honest. I don't know, I'll have to double-check. Okay, let's go back and see what's going on. This looks good. This one looks a little long, so I'll just have to make sure it writes shorter; that's fine. "It was about 2 weeks. Total turnaround time is 10 days." Right, that's fine. 1850 today, 1850 when it's finished. Got the thank-you. Okay, awesome. So of this whole thing, now that I'm testing it and seeing the output, the only two things I didn't really like are: down here in the bottom right-hand corner, it used a bunch of title-case words, "Streamline YouTube Content Production", and I don't want that to read like a title, just a description, right? And over here, it's a little bit too long. There are probably ways we could change this dynamically, maybe a way to automatically resize the text box instead of doing it manually, but that's fine, I don't care too much about that. We could also reduce the size of all these elements and reduce the line height, just for safety. This is more proposal-template stuff, but I'm going to do it just in case: we'll go down to size 10 here, and 10 here. I'm not a designer or anything, so I'm sure a designer would have yelled at me by now: "Why would you change that to size 10? How dare you." I think it still looks pretty good. Then I'm going to go back over to our model and add a rule to make it a little shorter. I'll say: if a field is a description field (contains the term "description"), it should be no more than two lines. Cool. Well, how is it going to conceptualize a "line," right? Let's see how many words this was. I'm going to go to a word counter. This was 91 characters.
Sorry, 14 words. So maybe we'll go "no more than 10 words." Actually, if I'm making the text smaller anyway, maybe we'll go "no more than 14 words." That's quite the constraint. Okay, cool. Now, because we've already run this, the copied file is a different file now, so I'm going to want to copy it again. I'll unpin this data and test the step; it's now going to create a new duplicate. Oh, you know what, there's one more thing I have to fix, right? The title-case issue, so I'll add: if it is a description field, do not use title case. Okay, cool. So now that we have that, assuming it's fixed, we've copied the template again and we can pin this as an output. We're still using the old OpenAI output, but that's okay; we're mapping it now. So I'm going to copy this ID over and use it to open up my second example, which is going to be right over here. Then I'm going to test this, and it's going to go in and replace everything. Let me see if it looks a little better with the smaller text. It does. Beautiful. So actually, this is fine; we didn't need to shorten the copy at all now that the text is smaller, which is good. Yeah, we're basically good to go on that front, so I'll leave it there for now. The question is, where do we go from here? Well, what we've built so far is the free option. Next, I'm going to show you how to do basically everything I just did, except in PandaDoc instead, including a payment module.
But my recommendation at this point is: if you want to stick with the free option, just send them an email with a link over to the proposal, and in your email ask them if they want to pay. Or maybe even be proactive and send them an invoice along with the proposal: if they've given the thumbs up and they're ready to move forward, they can just pay it. Maybe you add some light legalese: "By paying this, you accept our terms and conditions," where the terms and conditions link goes to a page on your website with some very basic stuff. I've never really been super worried about agreements and so on, so personally I probably wouldn't sweat it. And the best news is we can build the email step together. I'm going to show you what that would look like, or how I would build it if I were working for my own company or a client. I just head over to the Gmail node. You could send directly or you could draft; feel free to do either. I'm going to send, for simplicity. You'll have to connect a credential, same idea as before; I think Gmail is a little easier because you can just sign in. You just click "sign in with Google" if you're on a cloud console account, and that'll connect automatically. Sorry, I'm a little out of breath because I had to run downstairs between cuts and grab my groceries. I'm just going to use Gmail account 3. And if you think about it, you also need an email address in the form, right? So I'm going to assume the form input we put together has an email address, and we can fix it up later. For now, I'm going to go nicholas@gmail.com. For the subject, I'll just use "Proposal for" plus the company name, which I think was just going to be LeftClick. Yes.
Then I'll say "Hey" plus whatever the first name is, so maybe we'll go Nick. I don't like that this field isn't multi-line. Can I make it multi-line? Yes. "Hey Nick, thanks for the great call earlier. I had a moment after our chat to put together a detailed proposal for you. Please take a look at your earliest convenience and let me know your thoughts. You'll find it here," and then I can just put in the link. Now, if you think about it, this link is always going to be formatted the same way; it's just this URL right here, with the presentation ID fed in right over here. So what can we do? We can source the variable: I'll just go JSON presentation ID, and we'll fill it in like this. And voila, we have the link in the email. There are better ways to do this, of course; we could do HTML. "If you have any questions, let me know. I've also sent over an invoice for the amount, just to keep things convenient. Thanks, Nick." Okay, I think I'm going to leave it at that. This is plain text, right? You can do HTML as well, and if you do, you'll be able to add it as a proper hyperlink in the email. I'm not going to, just for simplicity. Okay, let me turn the append-attribution option off, because you already know they're trying to sneak their marketing in here. Now I'm going to go over to my personal email, and I see a link right there. "Hey Nick, thanks for the great call earlier... please take a look at your earliest convenience and let me know your thoughts. You'll find it here. I've also sent over..." I guess I've said "let me know" twice, so let me change that. "I've also sent over an invoice for the project just to keep things convenient. We can get started anytime that's sorted."
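Since the link is always the same shape with only the presentation ID swapped in, building it is a one-liner. A sketch (the `/edit` suffix is an assumption; Google Slides accepts several URL forms):

```python
def slides_link(presentation_id: str) -> str:
    """Build the Google Slides URL from the presentation ID
    returned by the Drive copy step."""
    return f"https://docs.google.com/presentation/d/{presentation_id}/edit"

link = slides_link("abc123")
```

In n8n this is the same idea written as an expression inside the email body, with the presentation ID sourced from the previous node's output.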
"Anytime that's sorted," just to make it abundantly clear: you've got to pay. Cool. So you give it a click, and what do you get? Voila: your customized proposal. Very clear, very clean, not at all complicated. And despite the fact that this doesn't really have a way to sign, like you usually sign proposals, it's 100% free, doesn't cost you a cent, and you still make a very high-quality impression on the client end, which is valuable. So, let me run this whole thing from start to finish and eliminate all of these pins just to show you how it would work. Imagine we just had a conversation or something: I click test workflow, and I'll fill out the form in a sec. Now the OpenAI model is generating a bunch of text, then Google Drive is going to copy the template, then we're going to replace the text, and then it's going to send over Gmail. If I just back it up a bit and refresh my inbox (and actually just check these Walmart deliveries), voila: we have the same sort of email. Seems reasonably customized. And as you see, we have the title, we have the nicely fitting sections here, same over here, we've got the sexy timeline, we've got the cost. And I did this in one button, right? Pretty simple, pretty straightforward. And this is good verification that it doesn't just work on old data; it works on new data, too. Okay, great. So let's do one thing before we move it over to PandaDoc: let's add a form, and let's replace all the data here with actual live data from the form. We're also going to want a couple more pieces of information, like an email address to send it to, obviously. If anything else comes up, I'll deal with it.
But yeah, we're going to delete this trigger. Then we're going to add an n8n Form Trigger, on new n8n form event, to kick off the flow. Let me connect it. What I want to do is actually go and create this whole form. I usually call this a discovery call logging form, or let's just go with "sales call logging form": this form logs a sales call and automatically generates a proposal. Now we can go through and add a bunch of questions. So: prospect first name, last name (just useful information to have), company name, website. And let's get into what we were generating, right? If you think about it, we generated a problem and a solution, and these are both questions. So: problem, solution, cost. And was there anything else? Let me check the OpenAI node really quickly and look at the prompt: company name, problem, solution, scope. There was a scope question, and then how soon. Okay, so back here: we'll add scope, and finally how soon. I'm going to make them all required, for simplicity, because I don't want anybody on my team, or somebody else's team, to have the option of not filling this information out. You should get the first name, the last name, the company name, and the website at minimum. I don't see a URL field type here, which is unfortunate, but that's okay. Then we're going to respond when the form is submitted. Let me just check if there are any options we want: I want to turn off the animated attribution, obviously, and then we should be good. Okay, great. So now if I click test step, what happens? I get a form that I can fill all this data out with.
For the purposes of this, I'm going to say Peter Sarif at LeftClick. For the problem, I'm just going to paste in the problem statement I hardcoded over here. So let's do this one: paste in the problem (this should probably be a textarea, now that I think about it, but it's okay), paste in the solution, paste in the cost of 3525, and we should also paste in the scope. I should readjust where the scope sits so it reads a little simpler. Then how soon? I think it was two weeks, right? Okay, cool. So now I fill this out, and we get all these fields back. Now we have access to this data, and we can go down the list and replace the hardcoded JSON with the variables. The company name was right over here. The problem statement was, you know, dollar sign JSON, and it looks like we're now using bracket notation, but you don't have to; you can use whatever you want. The problem statement here was just "Problem," and oh, this one is "Company name", bracketed because it has a space in it. That makes sense. Then we have a solution, and a scope. Current date is just going to be filled in automatically with now. We should also format that; let's use one of the common formats, toLocaleString may be easiest. I think we might have an error flag in here. Is that why? No, I have no idea why. But anyway, now we get the current date. Same idea for "how soon," which is JSON how soon. Wonderful. And then the cost is just going to be JSON cost. Perfect. Cool. So we've now mapped all the variables in, right? We have actual live variables coming in with data. So now I can run a test step, and it's going to go through and generate all the data for me.
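The variable mapping above boils down to substituting each form answer into the prompt template. A sketch of that substitution with hypothetical field names and an assumed date format (n8n does the same thing with `{{ $json[...] }}` expressions inside the prompt):

```python
from datetime import date

# Hypothetical prompt template; field names mirror the form questions.
PROMPT_TEMPLATE = (
    "company name: {company}\nproblem: {problem}\nsolution: {solution}\n"
    "scope: {scope}\ncurrent date: {today}\nhow soon: {how_soon}\ncost: {cost}"
)

def fill_prompt(form: dict, today: date) -> str:
    """Drop each form answer into the prompt, formatting the
    current date as a human-readable string."""
    return PROMPT_TEMPLATE.format(
        company=form["Company name"], problem=form["Problem"],
        solution=form["Solution"], scope=form["Scope"],
        today=today.strftime("%b %d, %Y"),
        how_soon=form["How soon"], cost=form["Cost"],
    )

out = fill_prompt({"Company name": "leftclick",
                   "Problem": "can't generate leads",
                   "Solution": "cold email lead gen",
                   "Scope": "1k/day infra", "How soon": "2 weeks",
                   "Cost": "3525"}, date(2025, 2, 4))
```

Note that "Company name" contains a space, which is exactly why the n8n expression has to use bracket notation (`$json["Company name"]`) rather than dot notation.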
Similar to how it was working before, but now it's being triggered off of a form input as opposed to my own whims and desires. And as amazing as my whims and desires are, ladies and gentlemen, form inputs are way better. Okay, cool. This node is already mapping a variable, so that'll be fixed, and that makes sense. Replacing this text looks good to me. It says there's an error fetching options from Google Slides; that's just because the presentation ID isn't hardcoded. Then I'll have the Gmail step. Let's make sure this Gmail node actually uses everything I want it to use, so let me test everything up to and including this step. It'll take the OpenAI text and map it in here. I'm going to pin this to make my life a little easier. And then over here in Gmail, I can go down to "on form submission" and grab the... I did not ask for the email address, did I? That is so funny. Okay, so you're going to want to ask for the email address. Despite the fact that the field doesn't exist yet, I can still technically map it: I'm just going to go $json.Email, capitalized. And even though it doesn't exist yet, I know it's going to work once I go over to the form and add a new field called "Email," because it's just code we're mapping at the end of the day. It's not necessarily going to work until we fill out the form again, but for now we can modify the pinned data with an email: we'll go nicholas@gmail.com. There you go. I'm going to save this and make sure the JSON's good. We'll pin that as well; I just like pinning everything when I pin stuff, sue me. Okay. And we're not going to have access to this right now, I don't believe. If I test this, what happens? Yeah, we don't have access to it right now, unfortunately, even with the email on the pinned data. Well, maybe we do, actually. Maybe we do. No, it doesn't look like we do; I think we have to rerun the whole flow if we want it. Unfortunate, but it is what it is. Cool. I'm going to rerun this one more time on my end to make sure everything checks out. From here on out, we're going to swap this over to PandaDoc for anybody interested in leveling this up even more, and then we're going to call it a day. And as you can see, this is not a complicated flow. There are one, two, three, four, five, realistically maybe six elements if you count the invoice step, from start to finish. And this one node is just duplicating the template (which should honestly be built-in functionality, but isn't), so call it five if you want. It's something that's really high ROI, something you can slot into a real business, as opposed to just looking cool and not actually doing anything for you. So yeah, let's test out this puppy one more time. Okay, looks good. We're going to submit. Nice, we got the data. Let me see if the website field is a textarea. It is. Okay, well, we should have just gotten the form. Cool, we did; we got the email address and everything. Awesome. If I go over here now, do we still have access to this? No. I think I need to rerun all this stuff, right? Oh, we do, apparently: "unpin Replace Text to execute." We're going to have to unpin the old data, unfortunately. So I'm going to unpin some of the old data here, and I should unpin this too because we're just replacing the replacement. Then we'll pin this, and go. Now we'll test it. Cool, looks good. We automatically got the email, and then we have our proposal and so on. You might want to decrease the line height if you copy my template verbatim, but feel free to do whatever makes you happy.
From here on out, we're going to shift gears: instead of replacing text in a Google Slide, we're going to do it in PandaDoc. For people who are unfamiliar, PandaDoc is a really cool platform that takes care of a lot of stuff you'd otherwise have to do manually. I have PandaDoc open here, and as you can see, it's a little more professional-looking than Slides. There's a lot of stuff we can add: text blocks, video blocks, images, tables, quotes, page breaks, tables of contents, stuff like that. This is just an example proposal for a fictional company that I put together during my Maker School training. If you want to see how I built it, you can find it all under Maker School in the classroom; I do it all in month one. Basically, there's a big proposal template, and I run through the entire building process for people who might be interested. Anyway, this is my proposal template. As you can see, it's a very similar idea: anything in yellow is a variable. What I'm doing is weaving my own procedural logic and templated text together with AI-generated text. The reason is that I want the parts that are very valuable to be ones I wrote myself; I didn't have AI write those. The parts that are really valuable, I want to be clean and powerful. Anyway, top to bottom, everything works basically the same until we get to this section that says "Your investment: intelligent lead management system for LeftClick," and then we have a price. The way this works in PandaDoc is you hook this up to a payment button, and after the proposal is sent, sorry, after you sign the proposal, you have the option to pay immediately. So it's great for collecting.
I used this to scale my agency to 72K a month, and I used it with 1SecondCopy, where I scaled to 92K a month as well. This is just a much cleaner way of going about things than how most people handle agreements. I have a video on providing logic around my proposals and stuff like that in Maker School as well, if you want to check it out. I'll stop soft-pitching that; I think we're all adults here. Join my program if you want to get better at this sort of stuff. But yeah, let me actually run you through what this looks like. In order to do this, we basically need to make a request to the API. It's not enough to reach for a PandaDoc node, because there is no PandaDoc node; if you search n8n for PandaDoc, you won't find anything. But you can do it with the HTTP Request node, and anyone who tells you that is 100% right. Now, to get this done, we need to feed in this giant, scary-looking block of text. It's basically just a ton of JSON that we formatted to be tailored to this particular template. This template is a little different: we have tokens with values called client email, sender email, client scope 1, client scope 2, client scope 3. As you can see, we're actually generating multiple client scopes as opposed to just one, and the reason is that we're doing it like this: one, two, three, four, five. So I believe I'm feeding in five in total. Yeah, looks like five different scope items. How crazy is that? Then we have client company, sender company, client last name, sender last name, client timeline 1, timeline 2, timeline 3. These are all just takes on the same idea. So really, to modify the system we previously had into a system that's capable of operating with this, all we need to do is output slightly different objects.
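For the curious, the request body that HTTP Request node sends follows PandaDoc's "create document from template" API: a `template_uuid`, a list of `recipients`, and a `tokens` array that maps onto those yellow variables. Here's a sketch; the template UUID, token names, and scope values are stand-ins, not the exact ones from my template.

```python
# Sketch: assemble the PandaDoc create-document request body.
# Token names (client.email, client_scope_1..5, price) are assumed
# placeholders mirroring the kind of tokens described above.
def build_pandadoc_body(template_uuid: str, client_email: str,
                        scopes: list, price: float) -> dict:
    tokens = [{"name": "client.email", "value": client_email}]
    # Five scope tokens, one per line item in the proposal's scope section
    for i, scope in enumerate(scopes, start=1):
        tokens.append({"name": f"client_scope_{i}", "value": scope})
    tokens.append({"name": "price", "value": f"${price:,.0f}"})
    return {
        "name": "Proposal",
        "template_uuid": template_uuid,
        "recipients": [{"email": client_email, "role": "Client"}],
        "tokens": tokens,
    }

body = build_pandadoc_body(
    "TEMPLATE-UUID", "jane@example.com",
    ["Scrape channels", "Analyze titles", "Generate outlines",
     "Draft scripts", "Deliver weekly"], 4500)
# Sent as JSON to POST https://api.pandadoc.com/public/v1/documents
# with an "Authorization: API-Key <key>" header.
```

In n8n, you'd paste this shape into the HTTP Request node's JSON body and swap the hard-coded values for expressions pulling from the previous node.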
And that will require us to, one, adjust our AI-generated copy so that the objects look like what this template is expecting, and two, update those tokens; there's a place for us to put the price and everything like that. So I'm just going to go ahead and do most of the grunt work, but I'll show you while I do it; I cut at several points just so you can see exactly what that looks like. Okay, first things first: I'm going to jump into the expression editor here. If you think about it, I actually have a title variable already, so I'm just going to put proposal title right here. Then there's client scope 1, client scope 2, client scope 3, client scope 4, client scope 5. There are five different client scopes, so I need to make sure that the object I'm outputting actually has five scopes instead of just one. I'll just delete all of the scope descriptions and change them all so that it's just the titles, making sure they're inside of the string. Okay, and I just ran it using this API format. This is a very long and kind of scary object for most people, so don't sweat the specifics too much. If you want to learn how to make something like this for yourself, I'll be covering how to do API connections in the next video. But the end result is we end up with a proposal that looks something like this: AI-powered script writing system for LeftClick. Here's some information about what the core problems are; here are some pieces about the solution. As you can see: my proposed solution to the problem above is as follows. To tackle these challenges, we propose an advanced AI script writing system that automates your content creation workflow. The system will scrape comparable YouTube channels for content ideas, analyze the best performing titles, use AI to rephrase these titles, and generate detailed outlines.
It'll leave you with ready-to-use scripts to streamline your video production process. I consider this reasonably straightforward, and I'm confident I can do an outstanding job here for you; if I wasn't, I wouldn't have put together this proposal. Right? We've got all the scope stuff here, we've got the timeline, and then over here is probably the most important part: the price. The way the price works, we actually have 50% due up front at signing, and we check our little payment note. Essentially, when we send this, they're going to receive an invoice for that amount of money the moment they sign. And I'm seeing here that I think I used an extra capital L, but we all can't be perfect. Yeah, that's more or less it in a nutshell. You can take the same approach I showed you today to virtually any proposal platform, or not even just a proposal platform, but virtually any asset that you create. Because creating a Google Slide, if you think about it, is creating an asset. It's basically creating a lead magnet; it's creating a PDF. You could export that in a number of different formats, give it to somebody, print freaking books with it. If you take this core idea and extend it, you could do a number of things. But I hope at this point I've at least given you the knowledge to build a simple AI-powered flow without necessarily overwhelming yourself with talk about AI agents and stuff
Website AI Agent
like that. Nice job. You now have an AI proposal system that creates professional, customized proposals in real time during sales calls. We built that all out live, and hopefully you understand what an actual proposal generation build process looks like. The whole idea is to give you a massive edge when closing automation deals and also give you another product in your toolkit that you can sell. We're now going to build a website AI agent that handles visitor conversations, answers questions about your services, and lets you book meetings directly into your calendar. This is not just a chatbot; it's a lead qualification and booking system that works around the clock. Automation agencies regularly charge anywhere from $1,000 to $2,000 to implement these systems, because they automate the entire lead qualification process. It's also a great introduction to agents and how they work more generally. Let's dive in. So here's the AI agent right here. As you can see, it's very simple: a chat message goes into this decision maker, which calls the OpenAI chat model, stores context in the window buffer memory, and then we have a few tools we're calling: Google Calendar create event and Google Calendar get all events. The exact arrangement isn't that important; there's a million and one ways to set up agents. The thing I want to impress upon you is that this looks simple, but in reality, when you use AI agents in business, they tend to be very simple, because businesses in practice don't really use the massive waterfall AI agents you're probably seeing, with a million and one nodes where an AI agent calls another AI agent, which calls another agent. The reason is that the output tends to be a lot less predictable and a lot less consistent. And if you're a business, your revenue is driven by consistency; you want to constrain the total realm of outputs down to something manageable.
So this, in practice, is typically what automations that make money look like, at least in AI agent form. So set your timer; let me show you how to build an actual AI agent just like you saw in the intro in just a few seconds. First things first, open up a new n8n workflow. Click "add first step," type the term "agent," then open it. You're good to go. Next up, select a chat model. In our case, we're going to be using the OpenAI chat model, and I'm just going to use the default functionality to get you up and running as quickly as possible; going from zero to one really is the most important thing. Then, under memory, select window buffer memory and set the context window length to 10. You can actually get away with substantially longer because of a trick I'm about to show you. Now you can chat with this just like you chat with any model. Click the chat button: "Hey, how's it going?" And you'll see that GPT-4o mini gives you a response. But the really cool thing is, if you go to "when chat message received" over here and you toggle "make chat publicly available" on, what you get is a chat URL that you can link to directly. Now you have a hosted instance where you can talk to the model. The question is: how do we take this hosted instance and put it on our website? I have a website available over here. This is my content writing company, 1SecondCopy. Before ChatGPT came out and rose to prominence, this was my primary income source; this is how I paid the bills. We had a team of very high-quality journalists, and we used GPT-2 and GPT-3, much older models, to help pre-draft a lot of the content. It was a pretty solid business. What I want is a little chat widget that pops up in the bottom right-hand corner, like you saw. So, in order to do this, if we go back to our n8n chat agent, you'll see there are a couple of settings on your node.
Just change hosted chat to embedded chat, okay? Then head over to this link. Scroll down a bit and you'll see there's an installation option called CDN embed; that stands for content delivery network. Basically, it gives you a snippet of code that you can add to any website, whether it's a custom-coded site, a WordPress site, a Webflow site, a Wix site, a Squarespace site, whatever the hell you have. All you need to do is copy and paste this in and replace the webhook URL, and you'll have a little chat widget built to spec that runs the rest of your flows. If we go back over to my actual website config, which I'm doing in something called Netlify: if I go to site configuration and scroll down to post-processing, there's a setting for me to add a snippet of text over here. So I'll call this "n8n chat agent," paste in the HTML, and all I need to do is go back here, get this webhook URL, and paste it where it says "your production webhook URL." Okay. Now, the way you do this is going to depend on the service you're running. I'm running this custom website on Netlify, so it's pretty easy for me: all I do is refresh the site. And now, in the bottom right-hand corner, you'll see your little chat widget. "Hi there, my name is Nathan. How can I assist you today?" What's really cool is that this is now live. It's asking me the same thing because it doesn't have access to this conversation, but it's pinging the same AI agent flow that we just set up. We actually have two-way communication, so you can test this, you can run it, it's live, and the changes you make over here are reflected back over there. Now, there are a couple of other minor changes we want in order to actually make this thing useful for us.
The first thing I want to do is adjust the prompt a little, because the way AI agent prompts work in n8n is kind of unfortunate. If you use the default prompting behavior, you're always taking the prompt from the previous node and recycling it over and over again. There's also a sort of hidden prompt that you don't really see, which can muck up the quality of what you're trying to do. So what I recommend is to use the template I'm about to show you to define what's called a system message instead, which is a static, fixed prompt the model always has access to, while still having it take the input from the previous node automatically. This is the best of both worlds. I've seen a lot of people try to use the "define below" section; my understanding of how these technologies work under the hood makes me feel like that's not the ideal way to do it. So I've saved a prompt over here, which I'll run you through in a moment. If you want to set this up for yourself, I highly recommend using some variant of this prompt. Now, it's dynamic, so you're going to have to switch to expression mode, and we'll need to reopen this. Basically, the way it works is we say: you're a helpful, intelligent website chatbot for 1SecondCopy, a content writing company. The current date is {{ $now.format('yyyy-MM-dd') }}. This is just an n8n JavaScript expression that converts the current date into ISO 8601 format, a very simple and easily interpretable current date. You are in the Edmonton, Mountain Time zone. You're male and your name is Nick. (So we're going to have to change the chatbot so it doesn't say "my name is Nathan"; I'll show you how to do that in a second.) Then, here's a bunch of context about the business: we offer extremely fast turnaround times, 4 to 6 hours, at affordable rates, 10 cents a word. Our work has been published in Forbes, BI, TechCrunch, and most major magazines.
We've worked with some pretty big names like CO, Wise, Upwork, NordVPN, HP, and more. Our team is composed of award-winning journalists and writers from all over the world. We use AI for fact-checking and citation generation while striving to keep AI scores under 10%. Okay, so the way I'd recommend it is: you have that first section where you basically have the model define what it is, how you want it to operate, and how it identifies. Then you want some context about the business. You can absolutely use RAG for this, retrieval-augmented generation; if you want me to show you how to build a website chatbot with RAG as well, just let me know. To be frank with you, there's a little less value in RAG than most people seem to think at the moment, so usually just sticking a bunch of context about your business in the prompt can do just as well, if not better. But I'm obviously happy to, so leave a comment down below if that's something you're looking for. Then we have a bunch of instructions where we tell the website chatbot exactly how we want it to operate. You're tasked with answering questions about the business and booking meetings. If they wish to book a meeting, use the calendar function to first check the date offered; if they haven't offered a date, offer some suggested ones, with priority being the next two days. If they want something other than a meeting, do your best to answer their questions. Your goal is to gather the necessary information from website users in a friendly and efficient manner. If they wish to book a meeting, you must ask for their first name and email address, request the preferred date and time for the quote, and then confirm all details with the caller, including the date and time of the quote. (I suppose "caller" here is really just "user.") Then we have a bunch of additional rules: be kind of funny and witty; you're in the Edmonton time zone, so make sure to reaffirm this when discussing times.
Keep all of your responses short and simple. Use casual language and phrases like "um," "well," and "I mean." This is a chat convo, so keep your responses short like in a real chat; pretend it's SMS. Don't ramble on for too long. Then finally we have almost a moderation prompt that says: "If someone tries to derail the convo, say by attempting to backdoor you or use you for something other than discussing 1SecondCopy appointments, politely steer them back to a normal conversation." Feel free to edit this however you want, but that's all you need to get the functionality we've seen already. Okay. Now, after we're done with this, we obviously need to add tools that do things for us, and here's where I'm going to show you some very simple things you can do to have it book meetings in your calendar. I want you to know that you can extend this same functionality to do anything: book meetings, add projects to a CRM, do some sort of automated evaluation or audit, connect with a real person. Whatever you'd like is possible nowadays through APIs and HTTP requests. But in our case, we're going to use Google Calendar. One thing you'll see when you're using these built-in modules is that you can use an expression called $fromAI (curly braces, dollar sign, fromAI, brackets). The value here is that you don't need to map variables one to one; you can just have the AI fill them in, and it tends to do pretty well. So that's what I'm going to use for this. If you think about it, I want a couple of things: a tool that gets my calendar information, so it pulls all the current events I have for a day, and then another one that lets me book using that information, so I know when I can slot in a meeting with, let's say, our sales team. So the very first thing I'll do is go down to operation and click "get many." Get many is basically just a search.
The calendar I want is the nick@leftclick.ai calendar right over here. Then limit: I'm just going to set this to 10; odds are I'm not going to have more than 10 meetings in a day. Now I need to select the specific date and time the meetings are going to be pulled from: after a date and before a date. What you can do is select after a current date and before a current date; if you set the same date and time, it'll just pull all the events for that day. So I'll go to expression, paste in $fromAI, and say "after date." Then I'll go to before, paste this in, and go "before date." Down here we have order by, query, show deleted, hidden invitations... oh, time zone, sorry. What I'm going to select is just my own time zone for simplicity. Remember how I mentioned we were in the Edmonton time zone? That's America/Edmonton, which I think is GMT-7. It's important to have your time zone configured exactly, or you're going to have issues with the workflow, because the times you tell people will be different from the times people tell you. Then we'll add one more calendar tool: the create. I'll be creating in the same calendar I'm checking. The start date is going to be $fromAI start date, and the end date $fromAI end date. There's one more thing I want to do: let's move down to summary and add an expression, meeting summary. That way, when the AI creates the event, we'll have some sort of meeting summary associated with it. Oh, and there's actually one more: we'll do attendees, which is right over here. I'm always going to be an attendee, and we'll always have some other attendee, which is going to be the user's email. The AI is coming up with all this stuff based off the context of our conversation, which is pretty cool. Okay, sweet.
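Those two calendar tools boil down to: fetch the day's events, then find a gap big enough for the meeting. Here's a minimal sketch of that availability check, assuming events arrive as (start, end) datetime pairs and we only search within working hours; the function name and hours are illustrative, not part of the actual workflow.

```python
from datetime import datetime, timedelta

def find_free_slot(events, day_start, day_end, minutes=30):
    """Return the first gap of `minutes` between day_start and day_end, or None."""
    cursor = day_start
    for start, end in sorted(events):
        # A gap before this event that's long enough? Take it.
        if start - cursor >= timedelta(minutes=minutes):
            return cursor
        cursor = max(cursor, end)
    # Check the tail end of the day after the last event.
    return cursor if day_end - cursor >= timedelta(minutes=minutes) else None

day = datetime(2025, 3, 2)
busy = [(day.replace(hour=9), day.replace(hour=12)),
        (day.replace(hour=13), day.replace(hour=15))]
slot = find_free_slot(busy, day.replace(hour=9), day.replace(hour=17))
# → 12:00, the first 30-minute gap between the two meetings
```

In the n8n agent, the model does this reasoning itself from the "get many" tool's output; the sketch just makes the logic explicit.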
So, given everything we know now and how it's set up, first I'm going to test this right over here, and then after making some changes I'll move over and test it on the 1SecondCopy website. If anybody's on my website right now, they're currently seeing it, which is pretty neat. Let me just delete all this and pretend I'm interacting with the agent as if I want to do something. "Hello, thanks for this. I'm interested in 1SecondCopy. Tell me a bit about it." It's now going to be pulling in data from the context, and because it's context, a.k.a. the material we provided directly in the prompt, it operates very quickly. In my case, it's pulling data directly from there and listing it line item by line item. You can have it respond to these sorts of queries in a specific way if you want; the context window is big enough that you can write whatever the hell you want in that system message. Let's see how it works with the calendar functionality. "Do you want to set up a meeting to discuss details? If so, I can help you book that." "Sure thing. You got 30 minutes sometime in the next few days?" "What's your time zone?" "I'm in Edmonton." "I can check our calendar for the next couple of days to find a 30-minute slot. Just give me your first name and email, and let me know your preferred date and time." "Sure thing, Nick. Let's do my main email here. And sometime after 3:00 PM tomorrow or the next day." When it calls this tool, what is it doing? It's obviously checking my calendar availability and responding through the model. "Looks like tomorrow after 3 p.m. is booked up for our team. However, I do have a spot available the day after tomorrow, March 2nd at 3:30 p.m. Does that work for you?" It's checking my current calendar, and I do indeed have a spot there. Does that work for you? Sure thing, could you book it? And now it'll go through and actually complete the booking.
Well, before it does that, it's going to confirm with me: "Great. Just to confirm: Nick, nicksaraev@gmail.com, date and time. Shall I go ahead and book that?" Absolutely, thank you. Now, once we send that, it's going to hit the Google Calendar create tool and actually create the event in my calendar. What's really cool is it offers me a meeting link I can take a look at. In order for me to do this, I do need to open up the specific account I'm in. And voilà, we now have it right over here: meeting with 1SecondCopy. In my case, it's showing Nick Saraev twice, because my other email address was the Gmail one; I fed those details in, and then it got my picture from Google. But yeah, as you can see, it's that easy to set up an AI agent that actually does something. Realistically, all this takes is a few minutes. Now, there are a couple of changes we'll probably want to make to the AI agent on our live website. Obviously, we'll want to brand it a little differently, and the agent says his name is Nathan. If you want to change the default messages and all of that, you can add this directly in that code snippet I showed you a moment ago. You can do things like set some initial messages in an array, add some metadata, the mode, and so on and so forth. If I go back in my case to this, you can see the createChat function right over here; this is basically what I'd be editing. Now, unfortunately, I don't believe I can edit this snippet directly; I think I have to create a new one. Okay, so I just went back in and pasted in an initialMessages array saying, "Hi, Nick here, let me know if you have any Qs." And then if we go back here, you can see we've now changed the text in the chatbot. Very simple and straightforward. Super easy to do.
If you want to adjust more configuration settings, just check it out over here. You can do things like change the text and add an input placeholder, and I believe you should also be able to change the color. Yeah, right over here: with some customization using CSS variables, you can make this window look however the hell you want and brand it entirely on your own. I think you can also remove that little pesky n8n message, although I don't know exactly where that would be. Anywho, I hope in this video I've at least shown you how to get started with the simplest version of an n8n AI agent website chatbot. It's nowhere near as hard as most people make it out to be. If you're smart about how you put this stuff together, you can take the approach I just showed you, make a couple of minor adjustments, and within 13 minutes or so have something on a website that you could charge money for, that runs custom functionality, that can access a calendar or adjust a CRM or actually do something with tools. Cool. You've now built a
Social Media Content Repurposing Engine
website AI agent that qualifies prospects and books meetings automatically. Hopefully your lead gen, or the lead gen of your clients, is now working on autopilot. Next up, we're going to build an AI content repurposing engine that takes any YouTube video or podcast and automatically generates Instagram posts, LinkedIn content, and Facebook posts, all formatted, beautiful, and ready to publish. This system is pretty valuable for coaches and consultants as well as content creators, basically anybody who needs to maintain an active social media presence, and agencies typically charge anywhere from one to two thousand bucks for this automation because it solves a major time sink problem: repurposing long-form content. Okay, so now let's build that system to turn one hour of content into days or weeks of social media posts, completely autonomously. The way this works is we start with a form submission where I put in the URL of a podcast. I then get the transcript via a third-party service, which costs 1 cent per transcript, something like a hundred podcasts per dollar. Then we use OpenAI to get a bunch of data, splitting the transcript into different sections and doing different things with them, which I'll run through in a moment. We'll then split that out, loop over each item, and generate Instagram posts, LinkedIn posts, and Facebook posts before finally generating the accompanying images as well. We'll do some data processing, add it all to a database, do some merges, and then update a form. What happens on the back end is, once we've done all this posting, we're essentially updating a database that looks something like this: a very simple, easy-to-manage four-column database with date added, post body, post image, and posted on, with the platforms we want to post to down below.
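The daily pass over that four-column database reduces to: find the rows that haven't been posted yet and publish a batch. Here's a sketch with rows as dicts; the column keys mirror the four columns described above, though the exact sheet headers are an assumption.

```python
# Sketch: pick the next unposted rows from the content database.
def next_to_post(rows, limit=1):
    """Return up to `limit` rows whose 'posted_on' field is still empty."""
    pending = [r for r in rows if not r.get("posted_on")]
    return pending[:limit]

rows = [
    {"date_added": "2025-03-01", "post_body": "Post A",
     "post_image": "a.png", "posted_on": "done"},
    {"date_added": "2025-03-02", "post_body": "Post B",
     "post_image": "b.png", "posted_on": ""},
    {"date_added": "2025-03-02", "post_body": "Post C",
     "post_image": "c.png", "posted_on": ""},
]
batch = next_to_post(rows)
# → the "Post B" row; after publishing, the workflow writes "done"
#   back into its posted_on cell so it's skipped on the next run
```

This is what makes the system dynamic: you can stuff 50 posts into the sheet at once and the scheduled run drips them out one at a time.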
And our system, which we'll set to run every morning at 7, is going to check through this database once a day to see what new additions we've made. In this way, our system is entirely dynamic, and it never overwhelms the services we're posting on. We can generate 10 or 20 or 50 new posts across all these platforms and then drip them out according to some schedule that we predefine. After we've checked the Instagram posts, we upload to Instagram using their Graph API, which I'll run you through, before updating the Google Sheets database. We do the exact same thing with the LinkedIn posts, except we need an HTTP request for that, and then the Facebook posts as well. In terms of what this looks like live, let me actually test this puppy out. Let's test this workflow. You'll see a new form just opened. I'm going to feed a podcast right over here into this "insert a podcast, get content" endpoint. A second later, that's done. As we see in the background, we're now getting the transcript via a third-party web service, one of my favorite web services, Apify, which I'll cover over the course of the video. This transcript comes to us nice and perfectly manicured. After that, we feed it to this OpenAI module. This OpenAI module's job is basically to output a very big JSON that contains, for each section, an index number, the paragraph of transcript, some context and feedback, a deep explanation of what the section is about, and an image description we can use to generate some JPEGs; I have some rules down here. Because we're feeding a relatively long transcript into a model with a context of 128,000 tokens, it takes a fair amount of time to do this run, usually about 30 seconds or so. But after that, we split it out and continue. And then, as you can see, we're now generating the posts and adding them.
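To make that OpenAI step concrete, the big JSON is just an array of sections, and the split-out step turns it into one item per section. Here's a sketch of that shape and the split; the field names are assumptions based on the description above, not the exact schema from my prompt.

```python
import json

# A stand-in for what the OpenAI node returns: one entry per transcript section.
raw = json.dumps({
    "sections": [
        {"index": 1, "paragraph": "Intro to the topic...",
         "context": "Hook for social", "image_description": "Sunrise over a desk"},
        {"index": 2, "paragraph": "Main argument...",
         "context": "Key takeaway", "image_description": "Whiteboard sketch"},
    ]
})

def split_sections(payload: str) -> list:
    """Parse the model output and return one item per transcript section."""
    return json.loads(payload)["sections"]

items = split_sections(raw)
# Each item then fans out to the Instagram, LinkedIn, and Facebook generators.
```

In n8n, the Split Out node does this parse-and-fan-out for you; the point is just that the model's one big response becomes N independent items to loop over.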
So we just did Instagram, now we're doing LinkedIn, and finally we'll do Facebook as well. If we go back to our database, you can see that we're actually adding these as we speak, and this is populating that middle-ground database, which I like. Now, on the back end, now that we've added these, we can test this workflow pretty easily. We're going to upload to Instagram first: we post on Instagram, and we also update the Google Sheets database to tell us that it's posted. We do the same thing with LinkedIn, which we need an HTTP request for, and we add that there as well. And we can actually see these live. We'll just wait for this to finish posting, but if I refresh my LinkedIn company page, you can see the post has actually been made. Actually, I've made two, because I just did one other test. I chose a pretty friendly kind of style here; I figured I'd be, I don't know, some company that did watercolor styles. Obviously my actual brand, LeftClick, is not like that at all, but I just wanted to give you some freedom here: you can generate these images however you want. Alex Ramos is doing a lot of this stuff recently, which I find interesting; he's applying a specific style to a specific type of content. And then, yeah, we've now posted across Instagram, Facebook, and LinkedIn, which is pretty cool. And if we go back to our database, you'll see that the "posted on" fields are now set to done, which means we've essentially just run through our database and dripped these out over time as opposed to all at once. Okay, so I've yet to actually build the system at this point in the video, but the first thing I always like to cover before I build a system is: why am I doing it? Is the system important? Does it solve a problem?
Ideally, you start with the problem and then build a system that solves that problem, not the other way around. I think a lot of people are putting the cart before the horse and building the system before actually having a need for it. I know for sure that this system is worthwhile and solves problems, because I talk with people all the time that have these exact customer problems, and you can sell the system, or you could build it yourself to solve those problems in your own business. What are some of those issues? Well, an AI podcast repurposing engine solves the content need: it allows us to generate a large amount of content from just one long-form episode and reach a much larger audience with the same marginal amount of effort. Zero additional recording time, which is cool; we get to maximize the content investment. And if you wanted to sell this (which is why we put this in a different color here), you could do so as a $1,000 to maybe $2,000 service, I would say, just because it's very simple and it runs in the background. I'll show you a simple input method to make all this stuff work and look hunky-dory. Very straightforward, not at all difficult. So that gets us to the more important question, which is how, and the how is what we're going to be dealing with in this video. What I'm thinking of doing (and I got two or three nodes in before I thought, you should probably record a video on this) is we're going to start with some YouTube podcast URL input. So I'm thinking we'll have some form or something, probably a form where I can fill in the URL of the podcast I want to generate content for. This is the simplest way I can think of doing this. Sure, you could do it automatically; you could track podcast posts on a YouTube channel, whatever. But I'm just going to do a form, so we'll trigger it manually.
Then, from there, we're going to grab the transcript somehow. There are a variety of ways to grab transcripts of videos. The simplest is Apify, but you could also do something like OpenAI's Whisper. To be honest, there are like 500 of these, so I'm not going to go super in-depth there. What I'm going to do is grab the transcript of the YouTube video and then feed it into a big content router, and this is where the rest of my system is going to come into play. So what I'm thinking is we're basically going to need some sort of GPT call, some AI call (let's just call it a large language model call, probably GPT-4 or maybe 4.5), and this is going to generate me some specific Instagram content. I'm going to do the same thing with another GPT-4 call to generate some Facebook content. As of the time of this recording, the Twitter API (the X API, I should say) is like 200 bucks a month or something, so I'm not going to pay for that for this video, and I don't think a lot of people will either. We're just going to skip Twitter/X for now. But then we're going to do some LinkedIn content. And what I think would be really cool is if I give you guys everything you need to actually clip. There are a couple of platforms out there (one's called Opus Clip, and there are a few others) where basically you feed in a longer video and then generate clips from that video using AI timestamping and stuff. Now, unfortunately, these guys don't have an open API, so you can't just make your API call and use that to generate, but I'm going to give you everything you need in order to do so, and I'll most likely walk through the API.
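The fan-out idea described above (one transcript, several platform-specific LLM calls) can be sketched in a few lines. This is a rough illustration, not the actual n8n nodes; the platform briefs and function names are made up for the example:

```python
# Sketch of the content-router fan-out: one transcript feeds
# several platform-specific prompt templates, one per LLM call.
PLATFORM_BRIEFS = {
    "instagram": "Write a short, punchy Instagram caption.",
    "facebook": "Write a conversational Facebook post.",
    "linkedin": "Write a professional LinkedIn post.",
}

def build_prompts(transcript: str) -> dict:
    """Return one LLM prompt per platform for the same transcript."""
    return {
        platform: f"{brief}\n\nTranscript:\n{transcript}"
        for platform, brief in PLATFORM_BRIEFS.items()
    }

prompts = build_prompts("We talked about prompt engineering hacks...")
```

Each value in `prompts` would then be sent as its own model call, one route per platform.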
So, what I'm thinking is we're going to use a GPT call and then maybe generate timestamps. For now, we're just going to have all those timestamps be generated along with all the rest of the content you need, maybe some hashtags and everything. And then you can either feed this into some sort of flow for an editor or whatever, and have it generate a bunch of stuff. So there's nothing really magic here; I'm just recombining components of different things that I've built before, but I wanted to run you guys through my thought process at this point. This is what I think it's going to look like, and everything sounds nice before you actually get into the building, but yeah, let's start there. Okay, so I'm just going to use this as our road map, and for now we're going to jump back over to n8n. I have a little n8n workflow set up called AI Podcast Repurposing Engine. Really, if you think about it, what is the first step? A lot of people like to start at the beginning and work their way forward, but I actually like to start at the end and work my way backward. Now, the end is relative in this case, but I actually want to go scrape the thing with Apify first. I want to scrape the YouTube video and verify, or guarantee, that I can actually generate the transcript. That's the first thing that comes to mind. Maybe it's intuition, or just because I've dealt with a lot of these projects, but that usually is the rate-limiting step. It's like: hey, can we get the data that we're planning on doing all this fun stuff with? Because if you can't get the data, you can't really do anything else, right? So let's first verify we can actually get the data. In order to do so, I'm on this platform called Apify.
Basically, this is just a big marketplace for scrapers that other people purpose-built, which allow you to do things like get YouTube transcripts. They build out all the logic for you, so you don't have to do any of the math yourself. What I'm going to do is just type "YouTube transcript," and a variety of scrapers come up here that say they can do the job we're looking for. I'm going to go to pricing models and pick pay-per-result, just because you can rent scrapers or pay for usage, but in my case I like to pay for the end result. I care most about the deliverable: how much money am I going to spend per transcript? Usually what I do at this point is open up two or three of these and very quickly compare them, so that's what we're doing now. Let's see. This one allows us to extract one or thousands of YouTube transcripts fast, save time and effort. Okay, JSON, XML, HTML. The reviews are pretty low, but this is $7.50 per thousand. That seems okay. Let's check out this one. Same idea, $10 per thousand. All right, this one, $7. To be honest, it seems like kind of a wash; they're all about the same, and they all have one or two reviews. So let's scroll down a bit and see if I can get some information on what I get. Looks like they'll return me a big list of all the captions, so that's cool. Is there one that just gives me the whole thing in one big block? That would be nice. This would be pretty nice: include timestamps, no, and then I just get a giant list. Let's do that. Clean transcript, okay, I like this one more now. And do I just get one big transcript here? No, I get the timestamps and stuff. Listen, I think the timestamps are valuable, but for the first run I'm not going to use them; I'm just going to go without the timestamps. So let's give these guys a go. $10 per thousand results.
I don't know if this is going to work, so I'm actually just going to try it out on a YouTube video. Why don't I do it on one of mine? Let's go to Nick Saraev. Yeah, let's do the prompt engineering video. That's 53 minutes; this one's 40. The longer the video, the longer the transcript is probably going to take, but whatever; for testing purposes, this is probably fine. So I'm going to paste in my own here, no timestamps, so I'll hopefully just get the whole thing in one big block, and then I'm going to click save and start. The way Apify works is it'll actually spin up a server in the background. So this is now a server somewhere on the internet that has been spun up and is running this scraping script that this other person put together. And I'm basically going to be charged what I think is 1 cent, if my math is correct, per video that I get the transcript for. So, obviously very economical for testing purposes. And then you just pay either a monthly amount or something else and they bill you. In my case, I use a lot, which is why it's at 100 bucks so far. But yeah, let's see if this one works. And of course, sometimes it doesn't work; these are scrapers other people build, right? I mean, this looks pretty good. All right, yeah, this looks pretty good to me. So now that I have this, let's just export this result. Let me see what this looks like with all fields in a Google Sheet first. Again, my whole goal is to verify: can I get the data that I'm looking for? If I can, everything else is really easy. So now I'm going to upload and just drag and drop this. I'm doing it manually first, and then we'll worry about the automating part later. We're probably going to have to call some APIs, right? Okay. So it looks like it returns the URL and the video title. Okay, that's cool.
And then boom, we have the whole transcript. How many words is this? Really? Now I'm starting to think: okay, that's a lot of words. 8,000 words. So let me think about this. Usually people speak at about 150 to 200 words a minute, approximately. So if I were to feed in an hour-long podcast, which is pretty standard (my content's kind of like that), I'd probably have 10,000 or so words. That's a lot of words. Is AI actually going to be able to deal with this? So I'm starting to think there are probably some edge cases here where I might feed in a 2-hour-long podcast and there are going to be too many words, too many tokens, for the context window. I'm keeping that in mind, but I'll shelve it for now and we'll cross that bridge if and when we get to it. Obviously, I've shown that this works, so what do we actually do now? Well, the way Apify works is you can actually get a webhook call when the actor is completed; you'll get a notification. There's also an API. And I don't think n8n has a built-in Apify node yet, right? Okay, so I'm just going to go to the Apify API. API stands for application programming interface; if you're unfamiliar with how to use APIs and stuff like that, I've got a bunch of videos where I walk you through what that looks like. Essentially, what I'm looking for, I think, is "run task synchronously and get dataset items." I'm not 100% sure, but this looks good to me. There are so many dang endpoints on the left-hand side that it's honestly pretty difficult for me to say for sure what's what. I see a couple that look similar: actor tasks, run task synchronously, and then there's run actor synchronously. Huh, not really sure what the difference is here. Return output, or get dataset items?
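That back-of-the-envelope context-window check is easy to make concrete. A rough sketch, assuming the common rule of thumb of about 1.3 tokens per English word (actual tokenization varies by model):

```python
def estimate_tokens(word_count: int) -> int:
    # Rule of thumb: ~1.3 tokens per English word (approximate).
    return int(word_count * 1.3)

def fits_context(word_count: int, context_window: int = 128_000,
                 reserve_for_output: int = 4_000) -> bool:
    # Leave some headroom for the model's own output tokens.
    return estimate_tokens(word_count) + reserve_for_output <= context_window

# An 8,000-word transcript is roughly 10,400 tokens, well within a
# 128k window; even a 2-hour podcast (~20,000 words) fits easily.
```

So the edge case only bites on extremely long recordings, which matches the conclusion reached a bit later in the video.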
I feel like it's probably going to be get dataset items, right? All right, anyway, I think I'm going to do this. Now that I have the API call here, what's really cool is that in n8n you can just copy all of this. I see there's this little copy button; I'm going to click that, go back here, add an HTTP Request node, go Import cURL, and feed this in. And it'll actually map the whole API request for me, so it's already done all the work. All I need to do is swap in my authorization token, and then I think I need to do one more thing: feed in an input right over here. So first things first, I'm going to get my authorization token. How do I do that? Well, Apify probably has an API key section somewhere, right? I'll go to Settings. Yep, API and Integrations, right over here. Let's create a new token. Let's just call this "YouTube temporary" because I'm going to delete it afterwards. Do I want to limit the permissions? No, I don't think so. I'm just going to click create and see what happens. Okay, YouTube temporary, right over here. Let's copy this. Let me also delete a couple of these, because odds are I've totally forgotten to delete them in previous videos; I have so many. Anyway, I'll paste the token right over here. That looks good. And then, if you think about it, what do I need? Looks like I need an actor ID here. Where is that going to be? Most of the time, actors put the ID up here. Yeah, I think so. That's probably the ID; that's usually where it is for most of these services. So I'm just going to grab this, paste it in, and then I need to feed in the actual website that I'm going to use, right? I don't know how that looks, but usually on Apify, if you go to JSON, they'll show you what the data looks like. Okay, so check this out: include timestamps, no; start URLs; website.
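For reference, the same request the Import cURL step builds can be sketched outside n8n. The endpoint below is Apify's documented run-sync-get-dataset-items route; the input fields (`startUrls`, `includeTimestamps`) are specific to this particular actor, so copy yours from the actor's JSON input tab as shown in the video:

```python
import json

APIFY_BASE = "https://api.apify.com/v2"

def build_run_request(actor_id: str, token: str, video_url: str):
    """Build the URL and JSON body for Apify's
    run-sync-get-dataset-items endpoint (runs the actor and
    returns the resulting dataset items in one call)."""
    url = (f"{APIFY_BASE}/acts/{actor_id}/run-sync-get-dataset-items"
           f"?token={token}")
    # Actor-specific input schema; field names vary per actor.
    body = json.dumps({"startUrls": [video_url],
                       "includeTimestamps": "No"})
    return url, body

url, body = build_run_request("actor123", "apify_api_token",
                              "https://www.youtube.com/watch?v=example")
```

You would POST `body` to `url` with a `Content-Type: application/json` header; n8n's HTTP Request node does exactly this once the cURL is imported.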
So, what I'm actually going to do is just copy this. Then I'll go back to my n8n flow (sorry, I've been jumping around a lot), and under body content type I'll go "using JSON" and just paste this in. So this is fixed right now, right? I'm just feeding in one URL, but I'm okay with that; I just want to test and see if this works. Let's see if there's any issue with my syntax or something. And if there are any bugs, I keep all of them in the video so you guys can see what my thought process is. It's taking quite a while to run, which I think is positive. If I go back to Apify and go to runs: okay, looks like it's starting the crawler. So I've actually initiated the crawler using my API call. Looks like it is now done. If I go back here: oh, nice, looks like I got the data. Awesome. So I have the transcript done. That was really easy, right? Super easy, very straightforward. Why don't I rename this; I'll call it "Get transcript via Apify." There you go. And now I can go back here, and if you think about it, I can check this first box. Or check both of these boxes? Actually, I haven't done that one yet, just this one. Let's make this really thick. There you go. So that step is done. Now for the YouTube podcast URL input step. If you think about it, what do I need to do now? I need to verify that I can actually get input in, right? In n8n, as you guys know, there are a bunch of different triggers I could use; this one's just a test workflow trigger. What I'm going to do is go back here, and what I want is just a form. So: n8n Form, and I'll go "on new form event." What I'll say is "Insert a podcast, get content. Hey, this is an AI podcast repurposing engine. If you insert a YouTube link to a podcast, we'll generate a bunch of formatted content for you and post it to relevant social media platforms."
Okay, here I will say YouTube, maybe podcast... let's just go "YouTube URL," right? Field type will be, what do we got, I guess we'll just do text. I'll say it's required, and I think that should be good. Let's now test this. Where was that URL a moment ago? Oh, here we go. Let's copy this link; I've got it right over here. So I'm going to paste this in now. "Insert a podcast, get content." Very cool. Going to submit it. Okay, cool. So I can get the content, which is nice. Now I'm just going to feed this in as my variable. I should probably keep the ability to click "test workflow," actually, because that'll allow me to test the flow really easily. But anyway, as you can see, I got the form submission. So what do I have to do now? Well, now I'm going to make this dynamic. I'll go expression, open up this big thing in an editor, and right over here where it says start URLs, I'm just going to feed in one start URL: this YouTube URL. So this is the result; this is what it's going to look like. That looks good to me. Cool, the automation is mapped. We are good to go, baby. Everything should be fine. Awesome. And I think what I'll do here is pin the output as well, just so I can always run this on the exact same video. Okay, cool. So now what we've done is submit our form and get the transcript. The next question is: how are we going to generate content for Instagram, Facebook, and LinkedIn? And also maybe some timestamps or hashtags or something; we just need some way to generate video ideas. Maybe we could even use HeyGen. That might be pretty cool, actually; that'd be pretty interesting. Maybe I'll screw around with that. If you guys have seen the demo already, you're like, "Well, obviously he's going to use HeyGen, right?" But I'm not at that point yet. Okay. Instagram content here.
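After the expression edit, the HTTP Request node's JSON body would look roughly like this. This is a reconstruction, not copied from the video; the `{{ $json[...] }}` expression must reference the form field's exact label, and `includeTimestamps` is specific to this actor's input schema:

```json
{
  "startUrls": ["{{ $json['YouTube URL'] }}"],
  "includeTimestamps": "No"
}
```

With the output pinned, the node re-runs against the same video without re-submitting the form each time.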
Let's think about this. I need some way to spin up three or four different model calls, and for each route I need to produce something: some Instagram content, some Facebook content, some LinkedIn content, some timestamps, hashtags, whatever. Do they have a router here? No, they don't really have a router, so I think I have to use a merge node. Yeah, I think I'm going to have to do this. I don't know for sure, but whatever. Let's do OpenAI. So, go to OpenAI, and then I'm going to "message a model" right over here. I have all my credentials already connected, so I'll just use the YouTube February 4th one. But if you don't know how to do this, it's pretty easy: you just go to your OpenAI dashboard, grab the API key (you don't need the organization ID anymore, which is nice), and make the connection. Once I have this, let's think: resource text, message model. Okay, I'm just going to select a model right now. Let me check model context windows; I just want the one with the biggest context window, to be honest. Okay, we got a couple. Let's just compare all of these. There you go, I can see it says context window: 128,000, 200,000, 128,000, 128,000. Well, they're basically all 128,000. How many tokens is 10,000 words? Roughly 13,000. So we should actually be good; maybe I was getting a little ahead of myself earlier. I'm going to use GPT-4o for now and figure the rest out later. So let's go to that. I'm going to use GPT-4o. This is the one. Let's go back to my n8n flow and pick 4o. Zoom in a bit for all y'all.
What I think makes the most sense at this point is we should probably have one model generate a bunch of things to talk about first, based off the transcript. Then we feed those things to other models, and they'll take those items and use them to generate stuff. I think that makes the most sense. So: "You are a helpful, intelligent content writing assistant that works with transcripts." What I always do is start with a system prompt. The system prompt is just how the model identifies, and I find that when you make the model identify as really good at something (you're helpful and intelligent and you work with transcripts), it's just more likely to do a slightly better job working with those things. Next up, I add a user prompt; this is where I actually define the task. "You take as input a long, meandering transcript and you identify the 10 most interesting, engaging points. You then generate a JSON containing those interesting, engaging points in this structure." Let's do this. So now we're going to write JavaScript Object Notation, and I want to give it a good structure. The first thing I'll do is say "sections" and make that an array. We're going to start with this array over here, and I know this isn't actually proper formatting yet, but that's okay. Now what I want is another object inside of that. Sorry, I was wrong; basically we're going to have "number," and I'll just put 1. Then over here I'm going to say "paragraph transcript": "paragraph of the relevant part of the transcript goes here." Okay, this is actually getting really annoying. I thought I could make this look nice, but I can't, so I'm just going to go to a JSON formatter; it's a lot easier. Just format it and it'll automatically take care of this for you. Okay, cool.
Let's just copy this and paste it back in. So: number, 1; paragraph transcript, "paragraph of the relevant part of the transcript goes here." Basically, I want to clip a part of the transcript. Then I also want to generate something else: "description of section," a description of why this point is interesting and some ways to make it even better. I love having AI do this sort of meta stuff, where you give it a piece of content and it actually does something with it, like providing a critique or coming up with some new way to do it better. And then I want one other thing: "deep explanation," a one-paragraph write-up based on the transcript section that expands upon its points, clarifies any ambiguities, and generally fills in the blanks. Okay, let's just run with that. I think this is going to work pretty well. So this is the JSON structure it's going to generate, something like this. Is this an optimal or ideal prompt? No, not really; it's pretty lengthy, to be honest, but that's okay. Generate 10 points. The transcript is below. I'm going to add another message, and this one's going to be the user, and here I'm actually just going to feed in the transcript. We can't get it out yet, so we have to run it one more time. Let me just test this, and while it's testing: basically, I'm going to put the actual transcript right over here, then have the assistant return the message afterwards. Let's go. Output content is JSON. Let me see if there's anything else I need. Temperature I always like to set a little bit lower; I just find it gets kind of too interesting otherwise. And then let's actually put some rules down here: "Write in a Spartan, laconic tone of voice. Copy the transcript sections exactly as they are."
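Putting the narrated pieces together, the target structure the prompt asks the model to return looks roughly like this. The key names here are approximations reconstructed from the narration, not copied from the actual workflow:

```json
{
  "sections": [
    {
      "number": 1,
      "paragraph_transcript": "Paragraph of the relevant part of the transcript goes here",
      "context_and_feedback": "A description of why this point is interesting and some ways to make it even better",
      "deep_explanation": "A one-paragraph write-up based on the transcript section that expands upon its points, clarifies any ambiguities, and generally fills in the blanks"
    }
  ]
}
```

The model is asked to produce ten such objects in the `sections` array, one per interesting point.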
"Look for unorthodox or interesting ways to make..." let's change this to context and feedback: "ways to make the content better, in the context and feedback object." Cool. Now I'm just going to feed in the transcript right over here. Let's actually feed in the video title too; that'll provide even more context. Cool. And then let's just run this and see what happens. This is a very long transcript, right? It's a long-ass transcript, so we want to make sure that the content it generates is good. Think about it from my perspective: I need to make sure that I understand what the video is about if I'm using it as a test, so that I can meaningfully evaluate the output and see that it's good and not just total make-believe stuff. Now, because this is very long, it's obviously going to take a while, and it's also going to cost a fair number of input tokens. So let's actually figure out how much this would cost realistically. Input is $2.50 per, what is this, per million tokens. All right, well, that's really not that big of a deal. I just fed in like 12,000 tokens or something. 10,000 tokens is 1/100th of a million, so 1/100th of $2.50 is 2.5 cents. It cost me around 2.5 to 3 cents; that's not a big deal at all. Okay, we got the output. Very interesting, very cool. I want it way longer, though. I don't like how short the section is right now; I think we could do a lot better if it were longer. I mean, these are just like five words. Well, it's very interesting, because I'm the one who created this content, so I understand what I was talking about, and it basically went through top to bottom and just extracted the various points I was making. It's like: okay, point one this, point two that, point three that.
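The cost arithmetic above is worth writing down once, since it recurs every time you size up an LLM step:

```python
def prompt_cost_usd(input_tokens: int, usd_per_million: float = 2.50) -> float:
    """Input-token cost at a given per-million-token rate."""
    return input_tokens / 1_000_000 * usd_per_million

# 10,000 tokens -> $0.025 (2.5 cents); 12,000 tokens -> $0.03.
```

Even a 2-hour podcast transcript (~26,000 tokens) stays well under a dime of input cost per run at this rate.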
"Make sure your paragraph transcript string is longer than just one sentence. Try and capture at least one whole paragraph of the transcript." Okay, so I'm just going to test this again. While it's running, which is going to take a little bit of time, let's go to the next step. Now that I have this, I think what we can do is just have another three or four of these, depending on the content, and then I'll paste a bunch in. This might be Facebook, this might be Instagram, this might be LinkedIn, this might be another one. Then I'll combine them all with a merge node into one big object. Or, hold on a second. Actually, we should probably add these to a Google Sheet or something instead of just posting them, right? It'd be silly to post all of these immediately. What are you going to do, post 10 pieces of content immediately on all platforms? You'd need some serious nuts to do that, so it's probably not the best move. Okay, well, let's cross that bridge when we get to it. For now, I'm just going to generate a bunch of content. So yeah, you can actually have multiple routes like this pretty easily that just stretch out, and as long as you have a merge node at the end that combines the outputs, it'll just run them all. So I guess we could do this: we could post, or we could just add all of these to a Google Sheet or something afterwards. Anyway, this looks good. Yeah, this is a lot longer. Cool. "Nick introduced the first major hack." Okay, "ChatGPT to write a story about peanuts." Cool, cool. All right, so where are we at right now? We've generated the transcript, and we've just generated something we can use to route the content later. So that's good. Now we just need to go through our routes.
So I'm just going to do an Instagram content route first, then a Facebook content route, then a LinkedIn content route, and then finally... yeah, we'll figure that out afterwards. Why don't I go over here and rename this "Instagram post generator." The first thing I want to do is figure out what the guidelines are for this. So: what are Instagram post length restrictions? Looks like we can write 2,200 characters, captions get truncated in the feed at 125 characters, and we get up to 30 hashtags. That seems pretty reasonable. So basically, I just need to keep it short and say "write under 2,200 characters." How many words is that? About 300 words. So I'll just have it generate a short snippet, a paragraph basically, or two paragraphs or something. That sounds good. I'm back over here. So, Instagram post generator. I'm going to write a new prompt: "You're a helpful, intelligent content writing assistant that generates Instagram posts. You take as input information about a point" (I just realized I'm going to have to change the structure here, because we can't just feed in all of this, right?) "a section of a transcript, along with some observations about that section and some points of feedback, and use it to generate a clean, beautifully formatted Instagram post in this format." Since it's just one post, I think we can go "Instagram post copy," then "copy goes here." Then what I'll do is take this, and maybe we'll generate an image with it as well and feed that back. "Write in a Spartan, laconic tone of voice." "Copy any transcript sections..." no, let's not do that. Instagram posts truncate after a paragraph, so: "write an engaging first paragraph, and then context around the rest of the point underneath that paragraph."
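Those limits are easy to enforce mechanically before anything gets posted. A small checker sketch, using the limits just quoted (character limit, feed-preview cutoff, and hashtag cap; function and constant names are made up here):

```python
IG_MAX_CAPTION = 2200   # hard character limit per post
IG_TRUNCATE_AT = 125    # feed preview cutoff ("... more")
IG_MAX_HASHTAGS = 30

def check_caption(caption: str) -> list:
    """Return a list of problems with an Instagram caption draft."""
    problems = []
    if len(caption) > IG_MAX_CAPTION:
        problems.append(f"caption is {len(caption)} chars, "
                        f"limit is {IG_MAX_CAPTION}")
    first_line = caption.split("\n")[0]
    if len(first_line) > IG_TRUNCATE_AT:
        problems.append("opening line will be cut off in the feed preview")
    hashtags = [w for w in caption.split() if w.startswith("#")]
    if len(hashtags) > IG_MAX_HASHTAGS:
        problems.append(f"{len(hashtags)} hashtags, "
                        f"limit is {IG_MAX_HASHTAGS}")
    return problems
```

In the workflow this kind of guard would sit between the generator and the posting step, so a too-long draft gets flagged instead of silently truncated.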
"At the end of the post, add hashtags." Let's say five relevant hashtags. That should be pretty good; just leave that there. Then I'll say, right over here... oh yeah, I can't actually map this until I figure out the structure, right? So basically, if you think about it, we need to loop over all of these. There are a variety of ways you could do the looping. We're going to have to aggregate this, I think, because it outputs one item, as you see up here, and we need it to output more than one item if we want to do the loop automatically. Also, historically, if we hit all of these up immediately and try to do 10 API calls simultaneously, it usually just breaks the n8n flow, because we hit rate limits and stuff; n8n doesn't have very good built-in rate limiting. So I'll probably do the Loop Over Items (Split In Batches) node. If you've never used this before, the way it works is you feed in the items, and it'll loop over all of that data over and over again until you reach the last item, and then it'll go down the route. So, yeah. I think this "replace me" placeholder is about to be replaced. I just want to make sure I can actually feed multiple routes into this. Can I? Does it work? Yeah, okay, I should be able to do this. Cool. This is going to be a very complicated-looking system; I'm sure it's going to sell well on YouTube anyway. So, I'm going to loop over now. What I need to feed into the Loop Over Items node is just this array. How do I feed in just the array? Well, as input to this Loop Over Items node, I'm going to use Split Out. Yes, we need this. The field we're going to split out is this sections array. We feed in the sections array, and if we test this now, we should get 10 items. 1, 2, 3, 4... perfect.
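The Split Out plus Loop Over Items pattern boils down to this: pull the array out of the single JSON item, then process it one element at a time with a small pause between API calls. A rough sketch of the same idea outside n8n (the delay stands in for the pacing the batching gives you):

```python
import time

def loop_over_sections(payload: dict, generate, delay_s: float = 1.0):
    """Split the 'sections' array out of the model's JSON output
    and process it one item at a time, like n8n's Split Out +
    Loop Over Items (Split In Batches) nodes. The delay paces the
    downstream API calls so rate limits aren't tripped."""
    results = []
    for section in payload["sections"]:
        results.append(generate(section))
        time.sleep(delay_s)
    return results

output = loop_over_sections(
    {"sections": [{"number": 1}, {"number": 2}]},
    generate=lambda s: f"post for point {s['number']}",
    delay_s=0,  # no delay needed for a local test
)
```

Firing all ten generator calls at once is exactly what breaks; serializing them is the whole point of the loop node.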
Now that we have these 10 items, we can feed that output into the Loop Over Items node. We're going to split out these 10 items and then go one at a time, basically calling all of these APIs. Okay, now that we've figured all that stuff out, awesome, we can actually get going with the Instagram post generator. Let's click on this again. Well, we need to execute the previous nodes if we want to get all the data. So we have those 10 items. Now what am I going to do? I'm just going to feed in the specific item. I'll say transcript, and feed this in. Oh, we need to index the item now. The reason we have to index the item is that this just doesn't know which item we're specifically referring to. We're going to have to grab this first. It's not able to get the specific one, is it? I don't know; I guess we'll find out. The way n8n does their items is always sort of interesting. I did execute the previous node, so I'm not getting this preview, which is annoying. Let me go back over here. Yeah, okay. So when you use Split In Batches, sometimes there's a problem with the way it's rendered. Anyway, cool. So we're just feeding in these variables directly, one at a time, right? Because we only receive one item as output. Cool, cool. Awesome. Well, let's give this a try and see what happens. All right, I don't want to feed in all 10 items, so I'm just going to test this on one item. I think if I just click test step, we're only going to run this once, not 10 times, which is nice. Okay: "Unlock the full potential of GPT models. My top three prompt engineering hacks from my journey since 2019 with GPT-2. Leveraging these tools in every business, I've gathered insights that will transform your approach. Ready? From your tool to autonomous team player. Let's dive in." No. I do not like this.
I think this is written pretty poorly. So: no leading questions, no emojis. "Write like a business professor talking bluntly to his students." Let's try this one more time. "Except simpler. Favor words with fewer syllables." Cool, let's try that one more time. Okay, this looks substantially better already, which is nice. Now we've generated an Instagram post, and we can do a couple of things with it if you think about it. We could also generate an image from this. I don't believe the image generator available in the API is the new GPT-4o image generator yet; I think we're still using DALL·E. What I'm going to do, though, is see if I can feed in the previous description: "an image that represents the concept." Let's see if we generate an image; what's going to happen? See how trash this is. There are a variety of other things we could do as well, or we could generate some branded stuff, cute little kawaii anime cartoon characters or something. That'd be sweet. Unfortunately, I can't use the new OpenAI image API, the awesome one. Yeah, I'm not a fan of this stuff; it's kind of trashy. "An image, hand-drawn cartoon style, should have one character in the middle. That's all." Let's just try that. My prompt engineering has gotten substantially simpler over the course of the last few months, let's put it that way. Unfortunately, you get spoiled talking to these extraordinarily smart models, so when you talk to a dumber one, it takes a little bit of time to get up and running. Okay, let's view this puppy. What are we looking at here? Huh. Okay: hand-drawn cartoon style. Let's just say "hand-drawn cartoon," and let's go over here and have this generate one additional object: "short image description," a one-sentence description of an image that illustrates the concept. "The description must (a) have one simple character, like a bunny or an animal,
and (b) be catered to a younger audience. Let's do that. Looks good to me. So now I just have to test this, and I'm actually going to have to produce the outputs here, because I'm then going to need to split them out, loop over the items, and do my post generation. Give that a try. And this "done" route I'll probably end up putting underneath, to be honest, because this is going to be pretty chunky. Maybe I should just do all of this here. Yeah, you know what, I'll probably do it all over here actually. We'll have an Instagram post generator, then an OpenAI image. Or maybe we should generate the OpenAI image before the Instagram post generator, now that I'm thinking about it, because then we can just use the Instagram post and all the other stuff we need. Anyway, let's see how that goes, testing these one by one. And then let's test this. Oh, sorry, I used the wrong one here. What we want is Loop Over Items. Oh yeah, sorry, I need one more piece of instruction: no text; the description should never talk about text. Okay. So what are we going to do here? We'll go "hand-drawn cartoon style," then feed this in. Maybe we'll go colorful watercolor: "colorful soft watercolor of a bunny stacking colorful blocks." This isn't going to be ideal, because it's going to draw a bunny with text in it, and that's not really what we want the image to do. Let's turn on "respond with image URLs." By the way, can we go style hyper-real and dramatic? No, we want natural. I just made some changes. So, that looks pretty cute. Yeah, I think we can probably do that. And for quality, standard or HD? We'll probably go with standard. And then resolution; we've got a couple of options here. Instagram post resolution is 1080 × 1080, so 1024 × 1024 is reasonable.
It's not going to be as pretty, but I think it'll be pretty good. And yeah, this one has text in it, but imagine we just get rid of most of that text in future ones; that should be fine. Maybe you have some branded channel that does something like this. If you think about it, there are like three or four major styles you could have AI generate, right? You could do some hand-drawn stuff if you want to be serious. Or, if you check out Alex Hormozi's feed, hold on, let me pull this up. Yeah, stuff like this. Right now he's using GPT-4o image generation to do that. But as I'm sure you can imagine, if you have some standardized style like this, you can generate an almost infinite amount of content from a podcast clip. There's another one in Simpsons style. I think he's publishing a ridiculous amount of content; I mean, this is from like a week ago and there are like 50 of them, right? So that's probably how I'd do it. Since we're using DALL·E, it's not going to be as clean, but I imagine we'll have access to that newer API pretty soon. Okay, so: Instagram post generator. And here, we'll call this "generate image." Now, this will also return an image link. So if I generate this new one now, we should have an image URL, right? That image URL we can feed directly into the Instagram post node. We're feeding in some additional parameters here, so it's going to change how long generation takes. Looks like it did some revisions. Oh, that's cute, I like this. Nice. Okay, we have everything. I think what we need next is the Facebook Graph API; that could be where we make a post. Yeah, most likely. I'm probably going to have to muck around with this for a bit before I figure it out exactly.
And I think in order to do this, for the credential we set up, we have to get an access token, which we generate from something else. So it's going to take me a minute to figure that out, and I'm going to let that be the last thing I do. From here, though, as you can see, we have a pipeline we can use to generate everything else. So now I just duplicate these. Instagram post generator, very cool. Let's go over here and make a LinkedIn post generator, and then down here, a Facebook post generator. Now we just have to lightly change the parameters, the prompt basically. So instead of "Instagram post" we'll say "Facebook post copy," "Facebook post," then no hashtags, "Facebook post." Let's say "Facebook post copy guidelines." Okay, Facebook looks like it takes a landscape photo, so we're going to have to generate a slightly different image size for that. And I don't think there are really any text restrictions; you can probably go pretty long. This one is not wired up right now, which is why we're getting that error. So let's go here. Oh, sorry, was this the LinkedIn one? Was I just editing that on the LinkedIn one? Probably. Oh yeah, my bad. Well, let's go "LinkedIn post copy," "LinkedIn post." And this one here is LinkedIn. Does anything else here say Instagram? I don't think so; we're probably good. LinkedIn post. Okay, what are the LinkedIn post guidelines? First, let's check the dimensions. So it's widescreen as well. Let's see where it truncates. All right, honestly, this is very similar to Instagram; realistically, I'm just not going to make any adjustments. This is a good nugget anybody could use to build more nuanced or higher-quality systems, I'd say, by mucking around with the prompt and making it a little better. And then here, we're going to generate a LinkedIn image.
Just because n8n doesn't let you have multiple nodes with the same title, I'm going to add some labels here, like "Facebook image" and so on, so they all have different titles. This LinkedIn image isn't necessarily going to be 1024 × 1024; we might have to make it widescreen, right? Sorry, I've already forgotten this one: the LinkedIn post resolution. Well, actually, we can do square, 1080 × 1080 pixels. So I'm just going to do square; I think Facebook was the one that was widescreen, right? Yeah. So I'll go 1024 × 1024 here, and that looks fine to me. It was the Facebook one that was different: 1792 wide. That's about as wide as we can get. It's not the best, but we'll deal with it for now. Okay, I'll do that here. Cool. So now we're basically generating three posts. And then we need to change this: go LinkedIn, create a post. Very cool. We've got to add credentials and so on; I'll deal with all that afterwards. I think that's basically good, honestly. And if you think about it, what we're going to do is one, two, three; after this, we have to merge all the outputs together. By merging the outputs, we get something we could put in, say, a Google Sheet. Actually, maybe instead of doing the posting directly in here, we should add the posts to a queue. Because if you think about it, what are we going to do, post all 10 posts immediately? No, we should add them to a queue. So maybe we should do the posting in a different scenario, or module, or workflow, I should say. For now, let's just merge all these outputs. We'll do three inputs. Thank you kindly. And this is number three.
And now that we're merging these, basically what I'm thinking is we make a database of posts for all these different platforms, and then every day or so we go through and post. That way they'll stay relevant to the previous podcast. That logic is pretty simple to put in place and probably makes the system a lot more valuable, because if you just have a fragile system where you submit a form and it forces you to post 10 times immediately, I think that'd be kind of dumb, with no way to verify the posts will always be different. Yeah, I think this queue is what people want. Okay. Anyway, there are a variety of ways you could do things here. We could just append. Oh, is it going to have to execute all the previous nodes? Right, it will, because we haven't generated the images yet. So let's generate image three. That one's going to take longer because it's bigger, if you think about it: the other ones were 1024 × 1024 and this one is 1792 × 1024. Mathematically that's 1.75 times the total pixels, since the height is the same and only the width grew. Okay, that's the LinkedIn post generator. Let's see what an example of this appending looks like. We should get an object with all of the LinkedIn, Instagram, and Facebook content, right? Okay, so the output is three items. I don't like the three separate items, because what am I going to do with them? One item here, one item here; I don't want three items. I want one item as an output, with keys like Facebook post, Instagram post, and so on, so I can map them a lot more easily, right? So I'm pretty sure we have to do the combine; I want to combine all of them. Oh, can I only do two? I don't know what this last option is. Yeah, it doesn't look like I can actually combine three, unfortunately. Sorry.
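For the curious, the size difference is easy to sanity-check with a little arithmetic:

```javascript
// Comparing total pixel counts of the two image sizes used here.
const square = 1024 * 1024; // 1,048,576 pixels
const wide = 1792 * 1024;   // 1,835,008 pixels
const ratio = wide / square; // 1.75x the pixels, since only the width grew
console.log(ratio);
```

More pixels means more work per image, which is roughly why the wide one takes longer to generate.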
We could just use a Set node; that'd be way easier. Let's add a Set here. We're going to take in the previous image. Okay, I'm just going to do it all in JSON. So I'll say image URL; the image URL is right over here. Then right over here in the middle we'll have the post body. It looks like I'm still outputting an object called "Instagram post copy." Trash, that's not very good. I should go back and adjust that, eh? Oh, you know what, I just left it as Instagram everywhere. My bad. You guys probably all saw that and were like, man, Nick is such a... it's true, I am. But the best part is you can make mistakes; just a little happy accident. You go back and you fix it. I think I need to change the LinkedIn object as well, right? Okay, no, I already did. All right, anyway, that's my happy accident. Cool. So now that we're editing the fields, what are we going to do? We're going to have this actually be the Instagram post copy, so I'll feed that in. If you aren't sure why I'm doing this: basically, I need a way to reference this later on. I'll just call it "Facebook post copy." It's not going to do anything right now, but that's okay. And then here we'll add "platform," and here we'll put "Facebook." So I'm basically remapping things here so that I have the copy, the image URL, the post body, and then the platform. Now I'm going to copy this; well, I guess I can't copy just yet, I have to copy the whole node. Then right over here I'll delete this and connect it to my Edit Fields node. I just deactivated that, but it's okay. We're going to go platform: LinkedIn. That should say LinkedIn post generator, right? LinkedIn post copy. Doesn't look like I can get that path back to the node because it's under... oh, right, there you go. Okay, so we'll have the image URL, the post body, and the platform. That's good. Let's reactivate that.
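Conceptually, each of these Set nodes is doing a remap like this; the field names below are my own shorthand for whatever your generator nodes actually output:

```javascript
// Normalize one platform's generator output into a common row shape.
function toRow(generatorOutput, platform) {
  return {
    image_url: generatorOutput.image_url, // from the image generator
    post_body: generatorOutput.post_copy, // from the post generator
    platform, // "Facebook" | "Instagram" | "LinkedIn"
  };
}

const row = toRow(
  { image_url: "https://example.com/img.png", post_copy: "Hello world" },
  "Facebook"
);
console.log(row);
```

Once every branch emits this same shape, downstream nodes can treat the three platforms uniformly and switch on `platform`.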
Then up top, let's copy this, paste it over here, and feed this in. And then, if you think about it, what do we get? In subsequent nodes we can automatically determine which platform the data is coming from. So I'm going to call this one "Set Facebook JSON," this one "Set LinkedIn JSON," and this one "Set IG JSON." Okay, so what happens if we append these together? Oh, I think I need to do one thing. "Invalid JSON." Odd. Well, let's do a little debugging. Okay, we're getting invalid JSON because of the newlines. So basically we have to remove the newlines. We could replace all instances, maybe with a regex matching newline characters. Or we could have the model not generate any real newlines in the initial data. Yeah, that probably makes more sense: one, it'll be easier, and two, it'll keep our source data as clean as possible. So why don't I go over here, and under the rules we'll say: generate your new lines as backslash-n (\n) characters instead of full line breaks; there should be no actual line breaks, only \n sequences. Cool. This will work ninety-something percent of the time; it's not going to be perfect. Sometimes the model will misinterpret it, maybe one out of 100, or maybe one out of a thousand; realistically, these models are getting pretty smart. Okay, so now I'm going to retest this step because I want it to output the text with the escaped newlines. Let's see what it looks like. Cool, we do have them. Awesome. We should be good to test this now. So it's going to run three of these, then these; not simultaneously, iteratively, which is nice, because we minimize the likelihood of calling one of the APIs and screwing it all up. Then from there, we should be able to do our merge node.
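If you want to see why the raw line breaks blow up the parse, here's the failure and the fix in isolation:

```javascript
// A raw newline inside a JSON string literal is invalid JSON.
const broken = '{"post": "line one\nline two"}'; // contains a real newline
let parseFailed = false;
try {
  JSON.parse(broken);
} catch (e) {
  parseFailed = true; // SyntaxError: unescaped control character in string
}

// Escaping real newlines into the two-character sequence \n makes it valid.
const fixed = broken.replace(/\n/g, "\\n");
const obj = JSON.parse(fixed);
console.log(parseFailed, JSON.stringify(obj.post));
```

Asking the model to emit `\n` sequences up front just moves this escaping step into the prompt instead of into post-processing.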
Looks like "we were not able to service the request." Why would that be? Could it be an API issue? Maybe it got rate limited. Probably rate limited; images take way more of your rate limit than anything else. So it's generally good, the second you have a working call, to pin the response so you never have to re-run it. Also, if you think about it, how much more time does regenerating take? A lot more. So it looks like it "had an error while processing my request." Not entirely sure where that error is coming from; it probably is a rate limit. Let's look up OpenAI DALL·E 3 rate limits and see how many of these puppies I can generate. Do I have credits or tokens? Let's go to DALL·E, maybe images, and let's go per model. DALL·E 3 here. No, I should be good across the board; that's way more images than I need. I can do 10,000 images a minute, right? That's a lot. So I don't know. Maybe I'm mis-formatting the data; maybe I can't feed newlines in or something. Let's see. What is this? "A bunny stacking colorful blocks labeled markdown, CSV, XML, and JSON." No, that looks good to me. Too big? Could be too big. Or it could just be a service outage; that's how I typically do my debugging. Yeah, looks like there have been some issues recently with Sora. I don't know if those issues extend to me. Okay, well, let's just try another node then, and let's see if it's DALL·E or just my current approach with the Facebook branch, because the Facebook branch is the only one that's had the issue so far. So I'm starting to think: is it the Facebook branch, or is it DALL·E in general? The fact that I haven't gotten another error yet is a pretty good sign it's just the Facebook branch. If it is, we've got to ask ourselves why. Okay, no, it's not; it's actually all of these DALL·E calls. Interesting. So I'm not really sure what's going on with the image there.
We saw it working a moment ago. Unfortunately, when you're working in the microservices economy, there are going to be situations like this with pretty inexplicable errors. Let me think: how do we proceed with this build regardless of some issue with DALL·E? Well, hold on, let me rotate my API key before I proceed, since I just flashed it on screen; really, just throw that one away. What we probably want to do is go through the execution history and pin the outputs, and that'll let us continue regardless of the fact that one of the APIs we're using might not be working. That's typically what I do. So, I've changed my credential. Haven't gotten an issue yet... spoke too soon. So what I'm going to do is go to my execution history and find the last good execution. Looks like the last good one was not here; let's use this one. What was its output? Looks like we have JSON like that, so I'm just going to copy it. Okay, I just want everything; can I copy everything? Yeah, that looks good. Why am I doing this? Because I can go back here and... oh jeez, I don't believe I can pin the output of a broken node. Okay, so realistically, what I have to do is delete this, go back here, pin this like this, there you go, then go over here and pin this like that. And now I believe I can run everything else off those pinned outputs. Since I deleted the whole node, I have to go through the whole generation again, unfortunately, but that's just part and parcel. And now that we've dealt with that, we can actually merge the outputs and continue.
I think when debugging it's important to keep a level head and note that, you know, most of the time it's your fault and you've done something wrong, but there are some situations that are just pretty inexplicable, and I wouldn't let those slow down the rest of your build. In my head I'm thinking there's a probability that this is some inexplicable issue I have no control over. I could, if you think about it, just stop developing and be like, "Well, I'm done with the system, this sucks, I'm not going to work on it," and get really frustrated. But I'd rather continue developing a different part of the system and circle back to that at the end. I think that's an important principle for systems in general: if something isn't working, take a breather and focus on a different section for a little bit. You can always double back to the section that was causing you problems after you've sorted out the rest. Okay, so the transcript is currently being pulled. I'm going to go back to YouTube transcript ninja; looks like that just wrapped up. Cool, it did. We're now feeding it into OpenAI; the copy is currently being generated. Maybe it's just an OpenAI problem, though; maybe the entirety of OpenAI is down. That would be pretty rough. Maybe we've been hacked, some spyware competitor came in and destroyed the servers. No, they didn't destroy the servers. Okay. All right: "item zero does not contain valid JSON." I'm not seeing anything wrong; it's looking pretty good to me, my man. So it looks like we have some issue where we don't have valid JSON, JavaScript Object Notation. Opening this up here... and this looks right. It's the newline thing again. Why are we getting newlines here? It's not showing me any newlines. Oh, what's this? We have a quote. No, we don't have a quote. Odd. All right.
Well, I guess I'm going to have to replace... can we just replace all special characters? No, that doesn't count. I could replace all... I don't know if I can just do a backslash, could I? Let's see. The example here says "uppercase any occurrences of blue or car." So what do we have to do? I think we've got to use the g flag, right? So /\n/g, and then can I just replace with a space? Is that going to work? I don't actually know. Probably not. Oh, yeah, it did work. All right, so I just replaced \n with a space; basically, instead of those newlines we just have a space. Well, that's fine. I guess I'll just have to do this for everything. Oh well. Glad when you just throw some stuff at the wall and have it stick, huh? Me too. Okay, so let's do that, and let's see this test. Oh, and I realize I should probably not be doing this line by line; I should be doing it all at once. Okay, looks like we got the same issue here. So what's going on now? The fact that we just can't get good JSON is worrying me. Okay, it's simple: we just didn't have a comma here. Cool, we got that. And what about over here, do we not have a comma? No, we have a comma. Awesome. So I'm just going to test the merge node now. We're pinning the outputs of these three, right? So it's going to skip over this and then set the JSON. I'll run this: skip over this, set the JSON. Cool. All right, so what does this actually look like in practice? We have three items: image URL, post body, platform Instagram; then another that says platform LinkedIn, and Facebook. So now you're probably wondering, why the hell are you doing all this? Well, hopefully it now makes sense: because we have these three items and they all have different platform values, we can match the sheet based off the platform and then add each row to a Google Sheet.
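The g flag is the key detail there; without it, `replace()` only touches the first match:

```javascript
// Replacing newlines with spaces: global vs non-global regex.
const s = "line one\nline two\nline three";
const firstOnly = s.replace(/\n/, " "); // only the first newline
const all = s.replace(/\n/g, " ");      // every newline
console.log(JSON.stringify(firstOnly));
console.log(JSON.stringify(all));
```

That's why the replacement "just worked" once the g flag was on: every newline in the string got swapped for a space in one pass.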
So I'm going to go Sheets, and I'll use "append row to sheet." Airtable is actually better for stuff like this, just because otherwise rate limits can be pretty rough, but Sheets is fine here. Let's create a new sheet. I'll call this my "AI podcast repurposing engine content calendar"; maybe we'll just call it our content calendar. Let's do that. Okay. And then what am I going to do? Well, if you think about it, we can map this with an expression, right? Yes, we can. Perfect. So now, inside of my Google Sheet; right, we've now done this, done that, done that. We haven't done that last part yet, but I've done this, done this, and we're now combining all these. If you think about it, what I need now is just the content. So I'm going to need some sort of "date added" column, then a "post body," and then a "post image." This isn't going to be perfect, because sometimes you'll have to download the image first, but we can deal with the downloading on some platforms later. And then I'm going to name this sheet Facebook, add another sheet called Instagram, and finally one called LinkedIn, and paste all three columns into each. All right, so now, if you think about it, the document I'm going to use is fine; it's just going to be this document. So I can grab the document ID, which is positioned up here in the URL, and pop that in. The sheet, though, is what changes depending on the platform. So if the platform is Facebook, we feed in Facebook; if it's Instagram, we feed in Instagram; if it's LinkedIn, we feed in LinkedIn, and it'll automatically find the specific sheet to use. Now there are three columns: date added, post body, post image.
The post body is right over there, and the post image is the image URL. Okay. Then the date added is just going to be today's date, so we should be able to use $now. Yeah. I don't really like the way that's formatted, though, so can we format it differently? Let me see. How should we format this? Day-month-year, or year-month-day? Let's do that. Oh, the formatting is a lot easier than I thought. April 9th, though; do we just go DD, I guess? Yeah, we'll go DD. That looks good. Cool. Yeah, that should be okay. So let's test this now. Oh boy, that's a huge transcript. Wrong one, my bad. What did we have there? We had Instagram. Okay, cool, so we just had three Instagram posts. And should we have three Instagram posts? I don't think so. So I feel like some error occurred there, right? Because we should have three items, each with their different platforms, but what ended up happening? Looks like we fed all of them to Instagram. So this is Instagram right now, but it should be dynamic; it should change depending on what we're putting in. Well, that's annoying. All right, so, slight issue with the recording there. What ended up happening was, for whatever reason, when I was pumping the data through that dynamic remapping flow with the merge, it just didn't work. I think it has to do with the underlying way the n8n merge node functions. Anyway, completely unrelated issue, but my recording also stopped, so I had to restart this. Basically, what I ended up doing was hardcoding the logic in the Sheets nodes. So if I go to the first one, for Instagram, you can see it hardcodes the sheet to Instagram. If I go to the second one, for LinkedIn, it hardcodes the sheet to LinkedIn. And for the third, Facebook, it hardcodes the sheet to Facebook.
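If you'd rather not fight the formatter, a plain ISO date is easy to produce. In an n8n expression, $now is a Luxon DateTime, so something like `$now.toFormat('yyyy-MM-dd')` should give the same shape (check your n8n version's docs). In plain JavaScript, with no libraries:

```javascript
// Build a YYYY-MM-DD "date added" stamp without any libraries.
// toISOString() is always UTC, which is fine for a daily queue stamp.
const stamp = new Date().toISOString().slice(0, 10);
console.log(stamp);
```

A fixed, sortable format like this also makes the "posted on" comparison in the second workflow less fragile than a "9th"-style string.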
Is this the most elegant solution? No, not really. I'm kind of annoyed that I have to do it, to be honest, but it is what it is, and I don't really care too much about the elegance of a solution; I care more about whether or not it works. So, testing this now. Going to my sheet, which is over here. If I go to the Facebook tab, as you see, that one populated. Then Instagram and LinkedIn; because this is happening sequentially, not all at once, we've got to wait a little bit. LinkedIn's there. Then finally Instagram over here. And yeah, this is more or less how I went through and solved the flow. Then at the end here, let me just recreate this for you guys: now that I'm done with it, I want some sort of user-facing experience where the person that submitted the form knows the run is good to go, right? So in n8n, you can add a form ending over here. You could say, "Congratulations, your content has been produced," then, "Check your content calendar for the specific posts," and maybe even link the content calendar here. You can imagine a client experience like that being a lot simpler and easier for them to follow. Okay, cool. So yeah, that's the flow in a nutshell. The thing is, this is just the first part of the flow. You're probably like, "Are you serious, Nick? You're an hour in and this is the first part of your flow?" Yeah, but the second part is really simple: we just actually do the posting. Now that we have our asset, which is basically a list of posts, we need some logic that checks all of these once per day. It sees: hey, has this thing been posted yet? So I've added a "posted on" column to track that. If it hasn't been posted yet, the workflow will go post it and fill in the "posted on" column: hey, this was posted on the 10th.
If you think about it, if we have a bunch of these and some rows say 9th, 9th, but this one's empty, what we'll do is filter to only the rows where "posted on" is empty; those are the ones we fill in the next day and actually go post. In this way, we have a dynamic tracker, basically. If that doesn't make much sense right now, don't sweat it; let's just go ahead and build out the second half of this. Okay, so I'm going to click back to my home, and then I'm going to add "AI content repurposing engine 2"; just change the title to two. And then what do I need as the start of this? Well, I'm probably just going to do a test trigger for now. Actually, I should do a schedule trigger. Let's run this once every day. Do I want to post at midnight? Probably not; let's do 7 a.m. at minute zero. So we've got a bunch of data in here, which is nice, and that supposedly is going to initiate our flow. What do we do next? Let me just pin this. Now we've got to look through the Google Sheet; we've got to filter, and we've got to post a bunch. And the way I'm going to do that is with "get rows in sheet." Okay, so I'm going to connect my credential. What's the document I'm using? The AI podcast repurposing calendar. And we have three sheets: Instagram, LinkedIn, Facebook. I'll take the first one first. The filter I'm looking for, if you think about it, is whether the "posted on" column is empty; if it's empty, I want to return the row. Okay, let's test this really quickly. Doing a call, we've returned one row, because "posted on" is empty. But what if I set "posted on" to x? If I click test, it says no output data returned, because there's no row with "posted on" equal to x.
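The queue logic boils down to a simple filter over the sheet rows. A sketch, with the column name matching the sheet above:

```javascript
// Rows with an empty "posted on" cell are the ones still due for posting.
const rows = [
  { post_body: "Post A", "posted on": "9th" },
  { post_body: "Post B", "posted on": "9th" },
  { post_body: "Post C", "posted on": "" }, // not posted yet
];

const due = rows.filter((row) => !row["posted on"]);
console.log(due.map((row) => row.post_body));
```

The Google Sheets node's filter option does the same thing server-side; after posting, writing today's date into "posted on" is what takes the row out of the queue.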
So I just verified that my filter works. Easy peasy lemon squeezy. Okay. Now, another thing to think about: we've just done that once for the Instagram sheet, but we'll have to do it again for Facebook and LinkedIn. So I'll say "check Instagram posts"; this one will be "check LinkedIn posts"; and this last one, "check Facebook posts." And here, this one has to point at the LinkedIn sheet, and this one at Facebook. The column logic should be the same for each; let me just make sure. "Posted on." Good. This one should be "posted on." Yeah, good. Okay. So now that we have this, we have everything we need to go do the posting. If you think about it, what we have to implement now is posting logic: post on Instagram, post on LinkedIn, post on Facebook. The simplest cadence is obviously once per day, but you can change it to whatever you want. That's what I get for not using my pen. And then, after the post, what are we going to do? We're just going to mark it as done inside our Instagram, LinkedIn, and Facebook Google Sheets. And at the end, I'm probably going to merge the branches together again, and that's it. So the question is, how do we actually go about posting on these platforms? That's a great question; let's go through and figure it out. Okay, so I just did a bunch of authentication. Now, this authentication in n8n is non-trivial; it's honestly pretty involved to get through. Let me walk you through what I did. I obviously can't share all my access tokens and things like that, but I'll still run you through the process and walk you through the workflow. Essentially, as I mentioned earlier, we have that schedule trigger, and we're checking the Instagram, LinkedIn, and Facebook posts, and then we have three routes here. The first is Instagram.
The way the Instagram route works is that we first need to connect a Facebook Graph API credential; the Graph API is just the way Facebook handles all of their API calls. I'll talk about that specifically, but first, if you wanted to set this up alongside me without the template (which you can obviously get in Maker School), you'd do the following. The host URL would just be the default. The HTTP request method would be POST. The Graph API version would be 17. The node ID would depend on your Facebook or Instagram account ID; I'll cover that in a moment. The edge would be "media." I set "ignore SSL issues" to false. Then underneath this, we'd have two options: a "caption" option with the body of the post, and an "image URL" option, which I actually just hardcoded here as some silly image, because I wanted to test it a couple of times against my account to make sure it worked first. I can go through and fix it afterwards. Okay, great. So that's all the configuration you need. The ID: the way you get that is you go to business.facebook.com (obviously I'm posting from a business account), then you go down to Settings, and then to Instagram accounts. Right next to Instagram accounts you have the ID of the Instagram account. Okay, so that's the very first place you go to get the ID for the node; and yeah, just make sure the edge is "media," and so on and so forth. To actually connect this to a Graph API credential, to actually create the credentials, is, as I mentioned, quite an involved process. If I go to the documentation here: you first have to make a Meta app with the products you need access to, and my recommendation is just to add all products. Okay, so I'm going to open up a bunch of tabs here as the n8n docs guide me.
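For reference, Instagram publishing via the Graph API is a two-step dance: create a media container on the /media edge, then publish it via /media_publish with the returned creation ID. Here's a sketch of the two request URLs; the user ID and token below are placeholders, and you'd POST these with the HTTP Request node or fetch:

```javascript
// Build the two Graph API request URLs for Instagram content publishing.
const GRAPH = "https://graph.facebook.com/v17.0";

function mediaContainerUrl(igUserId, imageUrl, caption, accessToken) {
  const params = new URLSearchParams({
    image_url: imageUrl,
    caption,
    access_token: accessToken,
  });
  return `${GRAPH}/${igUserId}/media?${params}`;
}

function mediaPublishUrl(igUserId, creationId, accessToken) {
  const params = new URLSearchParams({
    creation_id: creationId, // returned by the /media call
    access_token: accessToken,
  });
  return `${GRAPH}/${igUserId}/media_publish?${params}`;
}

// Placeholder values -- substitute your real account ID and token.
const url = mediaContainerUrl(
  "17890000000000000",
  "https://example.com/post.png",
  "Hello from the repurposing engine",
  "ACCESS_TOKEN"
);
console.log(url);
```

The n8n Facebook Graph API node is filling in exactly these pieces (version, node ID, edge, query parameters) from the fields described above.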
What I ended up doing was I made one called nick_rive_na_post machine, but I'm just going to create a new one here just to show you guys where it's at. This is where your app name is going to be. Then your use cases. What I always do is just pick other. And then it'll ask which business specifically you want to work with. I have no idea why it's in Spanish, but it's in Spanish. And then your app name, and then the app contact email, and then your business portfolio. You would just select the business portfolio that has access to all the other stuff. Now, I mean, I'm pretty good at all this stuff, and the way that Meta and Facebook do all of their different business portfolios and ad accounts and ad managers and stuff, that's still really crazy to me. And you know, I'm somebody that works with technology like this on a daily basis. So, don't feel out of the loop or incapable if you guys don't know what that means. It took me a very long time to figure this out. An embarrassingly long time, I should say. I'm authenticated through SMS, so I just had to get myself a text message. And I just confirm this. After you're done with this, you will have an app. It's going to take a second, and they're going to verify the hell out of you. Okay, great. So once you're done with that, I'm going to set up both Facebook and Instagram at the same time, but you need to set up, obviously, the Instagram. So click setup here. Oh, sorry. Before we do all this, actually, we need to do two things. Go over to app settings, basic. Then what you need to do is enter a privacy policy. So I just entered this as my privacy policy. Okay, so that's number one. Next, go to tools, then go to Graph API Explorer. Then what you need to do, okay, is generate... you need to go to the specific app that you just made. So, in my case, n8n access. Then, under permissions, you have to add, go to other.
Um, I mean, I just added all of the permissions. I think you would be smarter than me and maybe just add the ones that are specific to Instagram, but the way that I typically do these things is I just scroll through and then find anything related to Facebook or Instagram. Then I click okay. So, as you see here, this very helpful bright red bubble, it's assisting me. I don't do any of that. Okay. So, now once I have all these and you click this generate access token button, you're going to have to sign in to your account again. You can opt into the current applications. So, I'm just going to do all of them. Then the application that you just created is going to essentially request access to your account. Once you have that, you will have your access token up here. Then you're also going to have a bunch of Instagram permissions. So access token is what you want to copy. Okay? And that's what you go and paste in here. That access token is that big fat beautiful access token. And I just realized this probably isn't going to work now, because I have a bunch of different settings with my other access token. So I'm actually going to go back and put in my previous access token. Where am I here? Let's go back to this one. I'm just going to copy that, go back here, and then paste that in. Let's save that. Okay. Anyway, once you have the access token for the specific thing you want. So, in my case, oops, I'm doing it again. It says n8n access. Go back to developers.facebook.com/apps and then go back here to the main app. For whatever reason, it duplicates your apps if you add a portfolio like I did. Then down where it says Instagram, you can go to settings, and then where it says API setup with Instagram login, this is when you would add your Instagram account. We've now just given it access to everything. Still getting insufficient developer role. Huh, why is that? Not entirely sure. It might be because of this. Yeah.
So, you need to make it live. Then you click allow. The app will now have access to your Instagram. Beautiful. And once that's done, you just take that Graph API access token, feed it in here, connect it, then you're good to go. Okay. So, after that, what you do is you feed in the post body and then you feed in the image URL as I mentioned earlier. Now, I'm actually going to fix this right now. So, I'm just going to test this. Pull some Instagram posts. Okay. Now, as you can see, we have the image URL, which I will feed in right over here. Let me just make sure I can actually see this. Uh, no, we can't. Right. The reason why is because OpenAI will automatically time these out after a while. So, I can't actually see that image, which is unfortunate. What we need to do is download it and upload it again. Okay. So, I'm just running the new image generator just so that I have access to all of the new images. Otherwise, OpenAI will time out the images if you haven't opened or accessed them in a while. Looking pretty good to me. And it looks like now this is working, as opposed to before where it wasn't. So, that's just a good example. Focus on solving the problems that you can solve at the time of development. You know, I just took my attention elsewhere, and then whatever problem the API had is now resolved. Okay, so now we're adding stuff to the sheet. There's the Instagram one, LinkedIn one, and the Facebook one. Let's just access the image here. Look at that. That's really interesting. Fascinating. Here's the post image. Thank you, rabbit. Here is the other post image. Wow, that rabbit is having a go. I really like that. That's cute as hell. Little tongue is out. Okay, great. So now we've added that to the sheet. So now if we go over here to the other scenario, what do I want to do? I just wanted to test this. So I'm just going to test the Instagram post.
Pull in the new Instagram post here. Looks like we got that one post image. Beautiful. Let's now upload that one to Instagram. So, let me just feed in post image here to image URL. There we go. Oops. So, do not delete the image URL. That's not what you want to do. All right. Should be good. I'm going to test this step. It's now executing the node, meaning it's uploading. And what happens? It returns an ID. What does the ID do? The ID is actually what allows you to take something that you uploaded and then post it afterwards. Okay. Now on the post on IG node, what you need to do is reference the specific page that you're using, okay? Which is 1784144. That's the data that we got previously. And then yeah, the rest of these settings, I'll just leave you guys here to take them. But the main one in consideration is creation ID, where you paste in this ID here. So I'm going to post this. We're going to get good output, which is nice. I guess I need to go to that, right? Yeah, let's view this on Instagram now. So, I just posted my little bunny rabbit live. Be the first to like this. I'm just going to delete it, because that's on my actual account. But hopefully you guys can see what that flow looks like from start to finish. Easy peasy lemon squeezy. Then we're going to check the LinkedIn posts, HTTP request, and then publish to LinkedIn. Now, you're probably wondering, why do you have to do that? Well, the reason why you need to do this in the LinkedIn row is because LinkedIn actually needs the image file itself. So let me test the LinkedIn route now. So I click test. It's not enough to get the URL like we had before. Okay. What we need to do is actually get the image file. Now, just because I don't want to go through a bunch of annoying stuff, what I'm doing is getting the image file from post image right over here. No authentication. I click test step.
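That upload-then-publish handoff (the first call returns an ID, the second call publishes it) can be sketched like this, again assuming the Graph API's media_publish edge with placeholder IDs and token:

```javascript
// Step 2 of the Instagram flow: take the creation ID returned by the
// /media upload call and publish it via the /media_publish edge.
function buildMediaPublishRequest(igUserId, accessToken, creationId, apiVersion = "v17.0") {
  const params = new URLSearchParams({
    creation_id: creationId,  // the ID returned by the upload step
    access_token: accessToken,
  });
  return {
    method: "POST",
    url: `https://graph.facebook.com/${apiVersion}/${igUserId}/media_publish`,
    body: params.toString(),
  };
}
```

In the workflow, the creation ID field of the post-on-IG node is just this `creation_id` parameter mapped from the previous node's output.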
Now it's actually going to go and redownload the image. So I know it's a fair amount of bandwidth going back and forth, but now the image is in n8n. Then the LinkedIn module works pretty easily. All you need to do to create a connection is click on this button, and then if you wanted to create a new one, just go to standard and then click connect my account. It'll actually just log into your LinkedIn for you. Okay. So you just accept that LinkedIn prompt and then you're good to go. So I'm just going to close this and go back to my first account, which I think was this one, hopefully. And then the resource is post, the operation is create, and post as is organization. The organization you are in. This is interesting. But basically, if you go to your LinkedIn account and then you go to the pages that you have control over right over here, give that a click. The URL is going to have the ID of the page. Basic. You see that up there? That's what you're going to want to paste down here. The text in my case is just the post body, the image category here, and then the input binary field will just automatically pull from the previous node. So, if I now map this to my LinkedIn, if I drag this over here, you'll see I now get a urn:li identifier back. So, if I then refresh this, I will have my little bunny rabbit having been posted with my content, which is cool. All righty. And then, what's that last one here? The last one is the Facebook route. So, let's test this out. So, I'm just going to pull all the data. I have the post image as per usual. If I want to publish this to Facebook, same flow that I had before. Okay, we connect the Facebook graph account, but here are the details. First of all, HTTP request method is POST. Graph API version is 17. The node is me instead of the Instagram node ID. And then the edge is photos. Ignore SSL issues is false. Message is the post body, then post image. Okay. When I post this, what's going to happen is it'll go on my Facebook account.
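The HTTP Request node in that LinkedIn route is doing the equivalent of the following: fetching the image and keeping the raw bytes, because the LinkedIn node wants the file itself rather than a URL. A minimal Node.js sketch (the URL would be whatever your post image column holds):

```javascript
// Download an image and return its raw bytes as a Buffer, the same
// job the "no authentication" HTTP Request node does before the
// LinkedIn node consumes the binary field.
async function downloadImageAsBuffer(imageUrl) {
  const res = await fetch(imageUrl); // global fetch, Node 18+
  if (!res.ok) throw new Error(`Download failed with status ${res.status}`);
  return Buffer.from(await res.arrayBuffer());
}
```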
It'll go and it'll create the post ID. So, where the heck is that Facebook account? I don't really want that to be posting. Let's view this on Facebook. See that new little bunny post I made, and then, oh, how do I actually get rid of that? That is the question. Think of all my fans. They're going to see the bunny. They're going to be like, "Nick, what the hell's this bunny all about?" All right, we just click on this and then should be able to delete it. I think it's... Yeah, there you go. Cool. So, I've just proven that this works, essentially. Feel free to trust me. What's better than trusting me is actually going out there and doing it. Now that we publish on all three, what do we want to do? Well, if you think about it, we now want to update that last record that we just got, and then we want to just write posted on with that date. Since we're doing this once per day, what I'm going to do is the operation is going to be update a row. The column that we're going to match on is going to be, let's do post image. I'll go back to check Facebook post. I'm just going to match the post image to the same post image that we had. Everything else will be the same. Okay. The only thing that I'm going to actually meaningfully change is posted on. So, I'm just going to map the rest of these fields in. And what I'm going to do is only update posted on, so that it doesn't show up in the next search. How do I do that? I'm just going to go back to the formula where I got the exact formula for posted on. There you go. Change that to an expression. All right. And then instead of check Facebook post, this is update Google Sheets DB. Okay. Oh, I should probably do one more thing here. I should call it Facebook. There we go. All right. So, that's the Facebook one. We'll go over here now. Connect this to the LinkedIn one.
We got to change this to LI. And all of this data is going to be different as well. I'll change that in a sec. I really like being able to quickly and easily map this stuff out by copying and pasting it. So, I'm just now going to move this to Instagram. Then I just want to rename this to Instagram. Cool. And I basically just have to go through this rigamarole again. So, let's test this. Let's pull that out over here. Yeah, I actually need to execute the previous node first. It's kind of annoying. Whatever. Let's give it a try. Looks like all my LinkedIn fans are going to have to wait. All right. Post image was right over here. Date added, one more time. Then post body was right over here. Looks good. And then posted on, format looks good. That's fine. And then let's test this now. Should be good for this one. Just test everything first. This is going to error out, but it's okay. I'm going to get the Facebook post. Then we can now update the post image. One thing I don't like about n8n's interface is that the expression field covers the subsequent field that you're working on, which is unfortunate. Anyway, give that a go. Cool. And then this one here, I've already verified that works, I believe. Cool. And then now, if you think about it, if we go back to our Google sheet, we now have a fully self-annealing system. The system just checks to see when the last posted on date was, and then it'll go through once a morning and check to see which one to post next. So you can generate 10 new AI podcast posts, and then it'll go through and just check these off one by one automatically, which is pretty neat.
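That loop boils down to two tiny operations on the sheet rows, which you could express in an n8n Code node something like this. The column names mirror the sheet in the video, but treat this as a sketch, not the template's exact logic:

```javascript
// Pick the next row whose "posted on" cell is still empty.
function nextUnpostedRow(rows) {
  return rows.find((row) => !row["posted on"]) || null;
}

// Stamp a row as posted with a YYYY-MM-DD date so the next day's
// check skips it.
function markPosted(row, date = new Date()) {
  return { ...row, "posted on": date.toISOString().slice(0, 10) };
}
```

Run once a morning, this pair gives you exactly the behavior described: each day the oldest unposted row goes out, gets stamped, and drops out of the next search.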
YouTube Video Trend Detector
All right, you now have a content repurposing engine that transforms long form content into multiple social media posts automatically. We're now building a YouTube trend detector that automatically monitors channels in any niche. It identifies videos that are performing way above average. Then we're also going to hook that up to an email system to send daily digest emails with trending opportunities. The whole idea is to give content creators and agencies the ability to jump on trends early, then ride the wave of viral content. You can easily charge over a thousand bucks a pop for this system, because it provides genuine competitive intelligence. And it also replaces a couple of very popular services out there like vidIQ and 1of10. We're just rebuilding all of that in our own back end. Okay, let's dive in. All right, so this is future me doing a demo of the system. I've gone through a bunch of rigamarole in order to get this put together, and you guys are going to see all of that. In a nutshell, this is going to be two separate workflows: one to add or update new trending videos, and the other to take everything that you've added and updated and then send it in a nicely formatted email that I'm calling the daily digest. So, if I click test workflow, the first thing that's going to happen is it's going to pull from a Google Sheet database of channel IDs. It's then going to grab YouTube videos from the YouTube API before dumping those into the Google sheet. And then what we're going to end up having is just a list of new videos here alongside view counts. Now, I'm doing this for a couple of channels, but essentially after we're done with this, this YouTube trend detector can then turn on. And when this happens, what we're doing is subsequently reading through this on a schedule, maybe once every couple of days or something.
If I go to my Gmail, you'll then see that we now have a list of high quality videos over certain multiples that are organized really, really nicely for us. And you know, we put in the channels that we want to track ahead of time and so on and so forth. But yeah, this is more or less a simple and easy way to do things. I'm going to run you through exactly what the logic for this looks like, maybe if you wanted to extend it, I don't know, build a website doing this, recreate 1of10 or whatever. Okay, so let's start with the live build. Here's the current road map and what I'm thinking about how to get started and then finish this. What I'm thinking is we're actually going to divide this into two separate flows. The first flow is going to be the add and update flow, where we're actually going to grab the data directly from YouTube. And then the second flow is going to be the daily digest flow, where we basically just send a summarized version of all of the trending content. And in this case, I'll just use an email, but in reality, you can think of this as being deliverable through more or less any means that you want. You could do a Slack update. You could do SMS. You could spin up a beautiful user interface. You could have a website. And I'll run you through each of these in kind, but I just wanted to mention that I got the initial idea from Leonardo Gregorio. He showed me a trend detector that he was using to identify AI and automation related content, to find trends that he could jump on. And he's taken a very sniper rifle approach to all this stuff. The guy's grown from basically zero subs all the way up to 20K extraordinarily quickly, much quicker than I did when I started. So, he developed this idea of a YouTube outlier detector based off multiples. And I believe he got his idea from this website here, 1of10, which basically does all this stuff in the background.
And it's like a SaaS product. And the idea is we're going to rebuild or recreate a lot of the same functionality of this app, and of Leonardo's, except instead of using SQL, which I think is what he used in his specific case, I'm just going to do it all inside of a Google sheet, just because I think SQL is kind of scary and intimidating to a lot of beginners. And I just want everybody to have as simple and as easy and as straightforward a time as humanly possible with this stuff. I personally don't really think we need to use SQL for it. So with all that said, here is more or less what I'm thinking. For the add or update flow, we're going to start by getting all of the videos for a specific channel. So basically, we're going to have to add a list of channels that we're monitoring. From there, we're going to grab the individual video data using the YouTube API. And that's just how the YouTube API works: you can get all videos in one call, but you don't get a lot of information about each video in that call. You just get a list of IDs. The second step here requires us to ping each individual video to grab data like views. What I'm going to do next is filter for long form videos. So you know how on YouTube you can do shorts, or you can do longer videos like my style? Well, we kind of need to compare them apples to apples. So I'm just going to filter out shorts. Unfortunately, there's no built-in way using the YouTube API to do this. So I'm going to use a heuristic, or sort of a proxy, for shorts. And I'll run you guys through what all that looks like later. Then I'm going to check if it exists in the database, database being the Google sheet here, and that's just a fancy term for it. If it doesn't exist, we're obviously going to add it. If it does exist, we're going to update the metrics and stuff like that with the new view count, because presumably views change.
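That add-or-update branch is just an upsert keyed on the video ID. As a sketch, with field names that are assumptions rather than the template's exact columns:

```javascript
// Upsert a video into the sheet rows: add it if the videoId is new,
// otherwise refresh the metric that changes between runs (views).
function upsertVideo(rows, video) {
  const exists = rows.some((row) => row.videoId === video.videoId);
  if (!exists) return [...rows, video];
  return rows.map((row) =>
    row.videoId === video.videoId ? { ...row, views: video.views } : row
  );
}
```

In n8n this is typically an "append or update" Google Sheets operation matching on the video ID column; the function above is just the same decision spelled out.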
And then once we have our little database set up, our Google sheet, what I'm going to do is, once a day or once an hour or just however often we want, we're going to send over some sort of digest. And a digest again can be anything. In my case, I'll just do a quick little email, just because I think that's the straightest line path. So what's that going to look like? Well, because I'm using a Google sheet, I'm going to store all this data on different sheets. So, I'm going to grab all the sheets. Then I'll grab the videos in each sheet. And then for each video list, I'm going to calculate the average number of views. This is sort of how you determine the multiple, or how trending a piece of content is: you compare the view count of a specific video against the average view count of all of the videos. Then for each video, we're going to determine the multiple. And then if the multiple is over the threshold, we're going to include it in the email. So, I like this idea, because if we combine these two systems, right, we have something on the left here that's automatically updating the metrics, and then something on the right here that automatically checks to see if a multiple is above some threshold. Presumably, these two things are going to make the system evolve and be dynamic. The videos that come in on day one aren't necessarily the videos that are going to come in on day two. And far from being a negative of the system, I think that's actually a positive, because sometimes videos get rediscovered later on. And I think that if you want to really assess the performance of a video, you can't just look at everything static, like today or tomorrow. You actually have to look at it as it evolves over time. All right, so that is the whole idea here. Let's actually jump in and build this puppy. So, I got my little YouTube trend detector here.
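The multiple-over-threshold idea from that road map is simple enough to write out. A sketch assuming views-only scoring and a made-up threshold of 2x:

```javascript
// Score each video by its "multiple": views divided by the average
// views across the list, then keep anything over the threshold for
// the digest email.
function trendingVideos(videos, threshold = 2) {
  const avg = videos.reduce((sum, v) => sum + v.views, 0) / videos.length;
  return videos
    .map((v) => ({ ...v, multiple: v.views / avg }))
    .filter((v) => v.multiple >= threshold);
}
```

If you prefer a different score (say, views plus some weighted like count), only the numerator and the reduce change; the threshold comparison stays the same.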
I was just doing a little bit of wireframing beforehand to make sure that the YouTube API worked and that logically I could actually hook up my credentials. But aside from that, this is going to be an entirely live build. So, I'm going to create a new Google sheet here, and I'll run you through how to do all the connections and everything like that you need as well. Let's remove that. I'm just going to call this like YouTube trend detector. Let's just say database. Okay. All right. That seems pretty solid to me. What I have to do is connect this database now. So what I'm thinking I'm going to do is, you know how I mentioned we're going to have a list of channels that we're monitoring? So the very first thing is I'm going to make a table called channels. And over here I'll just have it say channel ID on YouTube. What you do in order to get all the data about a channel is you need their ID. And if you're unfamiliar with how that works, I'm going to go over to my channel here. You can grab the ID of a YouTube channel (sorry about that) just by going, I think, to more, and then all the way down to the bottom: share channel, copy channel ID. Okay, most people now use little handle versions like I do with @nicksaraev, as opposed to the channel ID, so you can't grab it through the URL for a lot of channels. But if you find yourself in that situation, you can get it from there. Okay, so what I'm going to do is just test all this stuff out on one channel, because, you know, that's really all that matters to me to start, and then once I've tested it on one channel, I can worry about dealing with all the other channels. And I'm just going to brainstorm everything that I'm thinking about live, so that even when I do end up in a detour in some sort of crappy hole, you guys will see how I do the debugging of this as well.
So, first thing I'm going to do is add a trigger where when I click this test workflow button, it runs the flow, and that's pretty simple. Second, I'm going to use a Google Sheets node. And what I want is just some way to grab the data. So, I'm going to use the get rows in sheet node. Here, I have the ability to add my credentials. Now, if you haven't added credentials before, I'm going to show you how to do it for YouTube in a second. In Google Sheets, all you do is click OAuth2 and then click sign in with Google. Okay, very straightforward, very simple. I've already done this, so I'm just going to close this out and then select the credential that I have, which I'm just calling YouTube. The resource is going to be sheet within document, operation get rows. And then the document that I want is going to be this one I just created, YouTube trend detector database. The sheet that I want, if you think about it, is channels, because I'm just going to select from this list of channels that I'm monitoring. And that's how we're going to build the flow. Okay, then I click test step. Okay, what am I doing? I've actually now got the data from the Google sheet into n8n. So, we are good. Next, I'm just going to pin the data. And the reason why I'm pinning the data, and I always recommend pinning Google Sheets steps, is just because when you turn it from green to purple, instead of having to do the API call to the Google Sheets API again, what you can do by pinning it is cache or persist the data directly in n8n. Which means that for all future runs, like if I want to test the workflow, it just automatically grabs that data and runs it through. I don't actually have to physically make a request to Google. The reason why this is valuable is because these APIs tend to be very fragile.
So, if you always test every 3 seconds like I normally do, and I'm very incremental with my testing for good reason, which I'll tell you about in a minute, sometimes the API gets overwhelmed and then you end up just having to wait like 5 minutes. Who the hell wants to wait 5 minutes, right? Okay. So, next up, now that I have the channel ID, if you think about it, I kind of want to grab the videos. So, I'm going to go YouTube right here. And there are a lot of different functions: get a channel, get many channels, update a channel, upload a channel banner, playlists here, playlist items. Okay, so what I want is the get many videos. Now you'll see it says credential to connect with, YouTube account. So I've already done this, but I'm going to pretend that I haven't set it up, from scratch, for you. Okay. So when you click add connection, it'll say OAuth redirect URL, and then you'll grab the URL here, and it'll have this little callback thing. Don't worry too much about this. This is just the way that it opens the window up in n8n. What you need to fill in is this client ID and then this client secret section. If you don't know how to do any of this stuff, just click open docs. n8n actually has pretty good docs on how to get up and running with service accounts and whatnot. I'll run you through what this actually looks like. What you have to do is go to console.cloud.google.com, just like this. And then what you have to do is make a project. Now I've already made a project, so I'm at my first project here. Okay. But what making a project does is you basically just give it a name. So as you see my website here is leftclick. I basically just gave it a name, and now it says my first project. What you have to do next is go to APIs and services. Then what you have to do is search YouTube. And when you search YouTube, you'll find the YouTube Data API. In my case it's v3.
Maybe you're watching this video in 2027 after the robots have won. So maybe you are a robot, in which case, please spare me and my family. This might be a v-something-else by then. Okay. You're going to want to click, I forget what the verbiage is, but I think it's like add or enable or something. Once you're done with that, if you go to manage, then you'll go down to credentials over here on the right hand side. And then what you'll have is two sections. You'll have API keys and OAuth 2.0 client IDs. Now, I've actually already created my own credentials here quite a while ago for YouTube and whatnot. What you can do is go OAuth client ID, and then here you actually create your own. So what I'm going to do is, was it web application? There we go. And then the name will be whatever you want. Whatever I want. Okay. Under authorized JavaScript origins and authorized redirect URIs, we're going to go back here, back to the YouTube or other Google docs. And then what we want is this OAuth single service. So now it's going to walk us through all these steps. Configure the OAuth consent screen. Let me see. From your n8n credential, copy the OAuth redirect URL. Paste it into the authorized redirect URIs in Google console. Okay, great. So what that means is we go back here. You see how it says OAuth redirect? You got to give that a copy, and then go back over here to where it says authorized redirect URIs. We actually have to paste that in. Okay, once you're done, click create. Now you're going to have two things. You have a client ID up here at the top, which we're going to copy. And then I also have a client secret. So what I'm going to do is paste in the client ID here, and go over here and paste in the client secret as well. And you'll get this sign in with Google box. Now after you're done with that, this will open up a Google sign-in window.
Then click the email that's associated with the account that you just created. Go down to allow. All right. From here it is now connected. You can close your window, and you've actually now done the connection. Remember that first step where you have to set up that cloud account? I think they give you like 300 bucks in free credits or something like that. You functionally will not run out of credits. I mean, you know, your free trial is over, but your cloud platform journey doesn't have to be. I think you can continue doing your API calls below a certain limit or something like that. Anywho, from there, credential to connect with is YouTube account 2, resource is video, operation is get many. Instead of return all, I'm just going to have the limit be like three videos for now. Under filters we'll leave channel ID, and then what I want to do is just feed in the channel ID directly in here. So, what this is, this is just hooking me up to a specific channel, as we see. I'm just going to click test step and we're going to see what happens. Okay, awesome. And it looks like we've now received a bunch of data. That's pretty cool, right? So, we've now verified that we can do a fair amount here. And if I just go back to my little road map here, we've now verified that we can actually get all the videos for a channel, which is great. Okay. All right. So, now that we've gotten all the videos for the channel, well, not all of them, but three of them, I'm just going to pin this data again. So, now I have access to these three items. And now what I'm going to do is I'll go back to YouTube, and logically what I'm going to do next is get a video, just like this. Now you'll already have the credential that we added. So this is the second one, the one that I just created. So I'll go there. And then you see where it says video, operation get. Well, now we need to feed in the specific video from the giant list of videos that we just got.
And you'll find this, I think, here at least. Yeah, I'm pretty sure that looks to me like a video ID, right? Okay. So now if I click test step, it's actually going to run on all three of these items. But to be honest, when I test APIs, I only really like to run one at a time. So I'm going to click on this button in between, and I'll just type limit. And what this does is it basically just limits it. So if there were three items initially, now there's only one item. So we're only grabbing the first in this case. You can also go last if you want. I just go first. Okay. So now, you see how on the purple it said three items up here, and then over here it says one item? Well, basically that's what this limit node did. It just took those three and converted them into just one. It didn't merge them or anything. I guess what I'm trying to say is it just deleted the last two. So now that there's just one item as input, when I run this video ID, it should only run once. So click test step. So I'm going to grab a specific video. And that's what it did. It just ran once for one item. Now what's interesting, I'm going to go to the schema view here. It's probably easiest for you guys to understand. So we go back to schema view. What we see now is we're getting a ton more data about the specific video. Like on the left hand side, do you see how there's no data about the specific video, views or anything that I could use to determine if it's an outlier? Well, on the right hand side, we get that data. So, there's a bunch of thumbnail BS. I'm just going to close that. Tags, which is fine. Category, localized. This is the title and description. Content details. Okay, so this over here is going to be important for us. PT32S. This is interesting. This is a timestamp string basically showing how long the video is. In this case, this is 32 seconds. So, I don't know what P stands for.
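For reference, the n8n YouTube node's get operation is making roughly this YouTube Data API v3 call under the hood. The video ID and API key here are placeholders (the node itself authenticates with OAuth rather than a key, so treat this as an equivalent sketch):

```javascript
// Build the YouTube Data API v3 request that returns per-video detail:
// snippet (title/description), contentDetails (the PT duration string),
// and statistics (view and like counts).
function buildVideoDetailsUrl(videoId, apiKey) {
  const params = new URLSearchParams({
    part: "snippet,contentDetails,statistics",
    id: videoId,
    key: apiKey,
  });
  return `https://www.googleapis.com/youtube/v3/videos?${params}`;
}
```

The `part` parameter is why the second call returns so much more than the list call: you explicitly ask for the statistics and content details blocks.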
I think T stands for time, and 32 is obviously the number of seconds. But this could also be something like PT5M for 5 minutes, or PT3H for 3 hours. It's just the specific duration format that YouTube, for whatever reason, uses. I don't know why they didn't just give the number of seconds — that would make everybody's life so much easier — but instead they use a five-character string for 32 seconds, which is annoying. And if you think about it logically, I don't want to grab shorts, right? So I'm going to have to do a little bit of math here to unpack this, and I'll run you through what that math looks like later. Anyway, the statistics are what we want. See how it says 12,690 views and this one says 382 likes? You can run outlier detection in a number of ways. Probably the simplest is just views, but you could also run it on views and likes together, or on some multiplicative score. Maybe you mathematically think one like is worth 10 views in terms of viral power — so you take likes, multiply them by 10, add them to views, and that's how you score them. This is totally up for grabs: do whatever the heck you think, based on your knowledge, determines whether one thing is more viral than another. In my case, I just want to give you guys a simple nugget of a system, so I'm going to use the view count. Okay, we get a ton of data here, so I'm going to pin this output. What I want to do now is filter out all the shorts, because I hate shorts. I don't think shorts are representative of this stuff at all. You could have two systems, one that operates on shorts and one on long form, but you can't compare them apples to apples — they're so different, and there are discoverability issues and things like that.
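That like-weighted score is easy to sketch. This is a hypothetical formula — the 10× weight is just the example given above, not an official YouTube metric:

```javascript
// Hypothetical "viral score": treat one like as worth 10 views,
// then sort videos by that score, highest first.
function viralScore(video) {
  return Number(video.viewCount) + 10 * Number(video.likeCount);
}

const videos = [
  { id: "a", viewCount: 12690, likeCount: 382 },  // score: 12690 + 3820
  { id: "b", viewCount: 9000, likeCount: 1200 },  // score: 9000 + 12000
];
videos.sort((x, y) => viralScore(y) - viralScore(x)); // "b" ranks first
```

The video takes the simpler route — raw view count — but the hook is the same: any function of the statistics object can serve as the ranking key.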
So, what does that mean? Basically, I have to filter out the shorts. There's nothing in the YouTube API — which is really annoying — that says whether something is a short or not. It blows, but they don't just have a simple type: "short." So you have to infer it from logic. Realistically, if something's under 60 seconds, I'm going to call it a short; if it's over 60 seconds, I'll call it a regular, normal video for well-adjusted human beings. So under content details, duration, PT32S: I need to somehow take this string and use it to determine whether or not it's a short. The way this works — I know for a fact S means seconds, and if there are minutes it'll be something like PT5M32S, meaning the video is 5 minutes and 32 seconds. If it's 3 hours, I think it would be PT3H5M32S. So what does that mean? I can actually just use the length of this thing. If the string is five characters and the last letter is S, odds are it's a short, to be honest, because the second you get over 60 seconds it changes over to minutes. That's the logic I'm going to use — I don't know if it's 100%, but we'll give it a try. So how do you actually do this? I'm going to use the filter node, feed in the duration by scrolling all the way down here, and take its length. I'll say: if this is equal to five, and the last letter — let's go to expression — if this ends with S, then I know it's not a short... er, it is a short. Okay, so logically I'm looking for the inverse of this. Can I do the inverse? How do I — I guess I say "is not equal to." So the length has to be not equal to five. Sorry, I was using string here, but I should be using number. There we go. So this has to not be equal to five, and then the string has to not end with S.
Okay, that makes sense. Assuming both of these are true, odds are it's probably not a short. So I just ran it, and it says kept zero, discarded one. That means this node returned nothing: there was one item here, it hit my beautiful filter — my anti-short terminator — and nothing that wasn't a short remained. So if I want to keep testing this flow, what do I have to do? I kind of have to fill it with real data, right? I'm going to change the limit node, cross my fingers, and hope I can return more than just a short. Let's go three this time. I'll unpin it, then — where's the PT? Right. Get all videos: three items. Now we're returning three items, and for each of these I'm going to test three times — bang, bang — okay, now we've done three. Underneath duration, this one's PT32S. If I go to JSON, I should be able to see all of them, right? So PT32S — where is that? Okay, this is that short, number one. The next one is 38 seconds. The last one is 59 seconds. Oh jeez. You know what I'm realizing? This actually sorts all of my videos from shortest to longest — so obviously the first three are going to be shorts. That's brutal. Can I not do this? Hm. Is there some way to get all videos and then sort them? What do I order by — relevance? Date? You know, screw it, let's just do date. This should fix it, because I haven't posted shorts in a while. Let's test this again. Okay. Yes, this is relevant — "He quit his job after his age" — see, I just published this one. Cool. We can now pin this by clicking the node and pressing P. Then I'll add the limit, and now I know for a fact the first item is going to be good, so I'll change that to one. All right, I'm liking this. I'm going to go over here.
I'm going to test this now — should run once. Cool. Let's see the duration. You guys see where it's PT39M26S, right? So this should work. I'm not going to pin this. Now, when I run my wonderful filter — oh, hold on, it's still discarding it. Why? Hm, something to do with my math here. Conditions: length is not equal to five — that's good. And it does not end with S. Oh. Oh yeah, obviously it ends with S. Duh. Okay, sorry — I think I can just get rid of this condition. My bad on the S stuff, guys. Don't do the S stuff. Well, I said I'd keep in the dumbass detours, so there's one. Obviously all of them are going to end with S, because that's how the format works — PT39M26S, right? Well, actually, if you think about it, if something's like 4 seconds, the string is going to be even shorter. So instead of "is not equal to" five, what I should say is the length needs to be greater than five — because then the duration is at least one minute. That makes more sense. That's more logical. Yeah, that's way smarter. Cool. So now that we have the video, the question is: what do we do next? I'm going to go back to my roadmap. We've now grabbed the individual video data and filtered down to long form. Now we need to check if it exists in the database. And here's the thing: we don't actually have the database set up yet, so it's kind of chicken-or-egg. First of all, I'm going to change this back to one, and I'll call this workflow "add update." Later on I'll add the other workflow, which will be the daily digest. So if I go back to my little database over here — my fledgling DB — what do we have to do, logically? Well, here's how we can do this.
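The string-length heuristic the filter node ends up with can be sketched like this — a rough approximation, as noted, and a sturdier check would parse the duration into actual seconds first:

```javascript
// Length heuristic from the filter node: ISO 8601 durations of 5
// characters or fewer ("PT32S", "PT4S") are under a minute, so
// length > 5 approximates "not a short". Edge case: round-minute
// durations like "PT1M" or "PT10M" are 4-5 characters and would be
// misclassified as shorts, which is why parsing to seconds is safer.
function isProbablyLongForm(isoDuration) {
  return isoDuration.length > 5;
}
```

In the n8n filter node this is the same condition expressed as "length of duration, as a number, is greater than 5."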
You know this channel ID? What we could do is make it the title in here, and in the sheet we'd store all of the data of the video — the multiples and stuff like that. But we need some way to add the channel ID automatically. So I'm going to go to Sheets. I don't actually know if you can do this — can you just get all the sheets or something? Looks like you can create a sheet, which is interesting. You can also get rows in a sheet, and delete rows or columns in a sheet. Is there some way I could check which sheets exist? Logically that would make sense, right? Can "get rows in sheet" check this for me? Yeah, I'm not seeing a way to, which kind of blows. So realistically we have one of two options. One: we rebuild the whole database every single time we run this, which would be very computationally expensive — it would hit the API a lot and just not be smart. Two: we build an initialization step where we feed in a list of channels and it initializes the whole sheet for us, and we run that any time we want to update it. Or we just have a simple SOP: any time I want to duplicate this, I grab the sheet like this, duplicate it, and change the ID to the new channel ID. That makes sense to me, because then the system would automatically start dumping into the new one. Okay, cool, I think that's what we're going to do, for simplicity. I want you guys to know there are a million and one different ways to do this, and I am a very hacky human being, so I prefer the hacky approach. Okay, so what am I going to do with the Google Sheet? Well, now that we've identified that it's not a short, obviously we need to add it, right? So I'm going to go Sheets, append or update row in sheet. Resource and operation set, I'll pick the document from the list.
I'm going to pick my YouTube trend detector database. The sheet — I'm curious about this. Notice how I have the channel right here? This is where it's going to get a little trickier, and where I'll probably need to go back and update my logic, because I'm going to need to use the ID connected to the specific channel where I'm adding the video. Okay, the thing is, we don't currently have any columns yet. So what we have to do now is map all of this data — or all the data we're actually interested in, that is. So what data am I interested in? A couple of things, obviously. First, the ID of the video. I'll add "id" here — this is probably going to be my unique identifier, right? It's the one thing on all videos that always stays the same per video, because if you think about it, people can change their title and people can change their description. This is the thing I have to use as my unique key — and all databases need some sort of unique key. So the ID for sure. What else do I want? "Published at" — that sounds good to keep track of. Channel ID — I mean, I kind of have it here, but I'll add my channel ID anyway; it's probably just going to be easier for me. Also, I'm realizing I'm changing my conventions. There are two major naming conventions in programming: there's camelCase, which is where you write it like that, and then there's — I think it's called snake_case — which is like this, with underscores. And I'm switching right now, from camelCase to the other one. I think most things in n8n use snake_case, so I'm going to do that; that seems simpler. Okay, next up: we obviously need the title. Description — I feel like the description is good to keep track of. Screw it, description it is. Thumbnails — hm, looks like there are a few types of thumbnails.
There are small thumbnails, medium thumbnails, high, and then standard. So why don't I just do it like this: small_thumb, medium — sorry, there I go again — small_thumb, medium_thumb, large_thumb, standard_thumb. These are just going to be the URLs; I don't really care about the heights or whatever. Channel title — is that necessary? Yeah, might as well. We'll add channel title. Okay, what else do I want? Tags — I could theoretically just dump all the tags, so I should probably do that. I'm so lazy, I'm like, "do we need the tags?" Yeah, we kind of need the tags. All right. Category ID — I don't know what the heck that is, and I don't really think it's that valuable. I'm sure you can imagine a world where category ID is valuable, but I don't really know what it means, so I'm going to leave it out. Okay. Content details: duration — that's going to be important, so we'll add duration, plus dimension and definition. I could see you running some stats on that stuff, right? Maybe in the future you notice that most of your outliers are HD or something — that might give you some signal. And then ultimately, the stats are what we actually care about. Good god, look how far over this is. Okay — the way I like to organize my databases is with the most important information first, and all the less important information after. So, to be honest, we're going to need to rearrange this. Are you going to care about the thumbnails for most of these? No, obviously not, so we'll dump those all the way to the right. What are we actually going to care about? The view count, like count, favorite count, and comment count. I don't know what the favorite count is, and I don't know why nobody favorited that video. Granted, I did publish it yesterday, but can y'all please favorite my video? I would love you a long time. Okay, so views.
We'll go views, likes, favorites, and comments. Now, depending on what feels normal, maybe we order it title, views, likes, comments, favorites. And then, if you think about it, channel ID and channel title — we don't strictly need these either, because they're kind of self-evident. It's good data to have just because it'll make my life easier when I build out the rest of the stuff, but it's not necessary. And then embed HTML — that sounds fun, let's add embed HTML. Cool. So I'm pretty sure we now have everything we need, right? I'm just going to rearrange this by selecting everything and double-clicking on the column edge. So we have the ID, published_at, then the title, then the number of views, likes, comments, and favorites, then the description, tags, duration, definition. Cool — looks like a pretty good database to me. This is going to be our template DB from now on, so I wanted to make it perfect, or as perfect as possible. So now what we have to do is actually add rows to the sheet. Actually, first we have to check if the video exists in the database — my bad; after that, we add it. So let's implement the adding functionality right now, and we'll do the existence check afterwards. We need to map it manually, so I'm going to fetch the columns by refreshing this. It's not finding it — why? I think I might have to refresh this or something. Kind of annoying. And I think I've got to feed in the channel ID here. Okay — so the way the append-or-update-row operation works is that there's an ID column that incoming rows are matched against. That's actually great: we don't need the existence-check logic I just talked about, because it will automatically find old entries and update them using the ID column, which is incredible. Fantastic. Boy, is n8n fun sometimes.
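Conceptually, that append-or-update behavior is a plain upsert keyed on the matching column. A minimal sketch — the `id` key mirrors the column chosen in the sheet, and the function is illustrative, not n8n's actual code:

```javascript
// Upsert: if a row with the same id already exists, merge the new
// values in; otherwise append a new row. This is what the sheet
// node's "Append or Update Row" does against its matching column.
function upsert(rows, incoming, key = "id") {
  const i = rows.findIndex(r => r[key] === incoming[key]);
  if (i === -1) rows.push(incoming);          // new video -> append
  else rows[i] = { ...rows[i], ...incoming }; // known video -> refresh
  return rows;
}
```

Re-running the workflow therefore refreshes view counts on videos it has already seen instead of creating duplicate rows.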
Now we have to do the annoying, laborious process of mapping everything. So I'm going to map the title over here. Then I'll go down to the views — I was a little presumptuous and wanted the views to be first; I wanted all my viral videos to be first, baby. Okay. Favorites. Cool. Now all the way up to description. Tags is going to be interesting if you think about it logically: look, there's a bunch of tags here, right? So how do you actually get them into a single cell? You've got to use string logic. I'm just going to put tags in, and — see how it's an array right now? — you just join the array with some sort of delimiter, and now we have all the tags in one string. I like a comma plus a space as the delimiter; I don't know why, I just think it works and looks better, and it plays nicer with more platforms. So tags are done, like that. The duration — that was the interesting one, right? So where's the duration again? Okay, right over here: PT39M26S. So, hm, okay, I'm trying to think: what's the simplest way for us to do this that works on arbitrary strings? Let's open the expression editor. If I split this on the presence of an H, what do we have? The whole thing, right? If I split on the presence of an M, what do we get? Is it always going to start with PT? Let's just ask GPT-4o: "PT39M26S — what is this format called?" Let's see if we can get an answer from the lovely galaxy brain. ISO 8601 duration format. "I want to parse this and turn it into the number of seconds — simplest way?" Let's see what it tells me. Parse duration: it matches this PT-whatever pattern. Oh, I get it — it's actually extracting three types of data: the number of hours, the number of minutes, and the number of seconds. Could I just use this match inside of here? Let's see. So this is regex — no, match. Okay. Yeah, we got a match over here. Can I just copy this?
It would be sick if I could just copy this. Maybe I can. This is going to look like magic if you don't know what regex is, but it's actually pretty cool. Yeah — it just did that. How neat is that? Okay, so now if I want to get the duration — is this matching globally? I think it is. It looks like this match array will always have four elements: the full string first, then the hours (null here), then the number of minutes, then the seconds. If you think about it, I just multiply the number of minutes by 60 to convert minutes into seconds — so 39 × 60 is that many seconds — and then I add the 26 on top. So I'm pretty sure what I have to do is add these up. This is going to be a bit fiddly: what's the simplest way to do this that isn't super complicated? I mean, I could just use a code node, but I'm trying to stay away from code nodes because I want to keep this really simple. Okay — in an array you can index with square brackets. Zero selects the first element, one the second, two the third, and three the fourth. It's zero-indexed, right? There are only four elements, so if I put four, I'm selecting a fifth element we can't see; with three, we can. So what does this mean? If we want the total duration in seconds, I basically grab the number of seconds — and yeah, we kind of have to do a little code here, but anyway — oh, it's not allowing me to add it because it's a string. I think we have to call toNumber here. We'll go toNumber, and now it should be 52. Okay, cool. So next, we take the minutes and multiply them by 60, and together with the seconds that's the total number of seconds in the video: 2,366.
If you think about it, you'd do the same thing for hours: hours × 60 × 60. And voilà, we should have a relatively consistent way to always get the number of seconds in the video. Sanity check: let me go back here. PT39M26S — 39 × 60 + 26 is 2,366, which looks right. Okay, so we did end up doing a little bit of code, and I'll be honest, it's not very pretty. This is probably one of the uglier expression fields I've made in my life, and it's not very maintainable either. But what it's doing, logically, is using a regular expression to parse out three fields — the number of hours, the number of minutes, the number of seconds — and then adding up seconds, plus 60 times the minutes, plus 3,600 times the hours. Okay. All right. Now, everything else here should be pretty easy. We'll map definition — that seems good. For the small thumbnail I'll feed in the default URL here, the medium URL here, the large URL here, and then the standard URL, which I guess is the biggest. Oops — I think I just deleted a field by accident. Channel ID was next, my bad. Channel title. So channel ID was — what? Right over here. Channel title was right over here, and all the way at the end, embed HTML is over here. Okay. Good god, that took forever. We should now have everything we need, right? I think so. So let us cross our fingers and add. Going over here, clicking test step — it's not adding... okay, it did end up adding. Very cool. I don't like how it bumped the row and made it really big, though; that's going to make looking at my database a pain in the ass. So I'll select all and drag this column to approximate what I think a normal column width is. That's a little short. There you go — now all future columns will look like that. Also, I'm going to rearrange this; I just double-tapped on it again and it's a little big.
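That ugly expression can be written out as standalone JavaScript. This mirrors the match-array indexing described above; note that `Number(undefined)` is `NaN`, so this literal version breaks on strings with no minutes group, like "PT32S" — a wrinkle that surfaces later in the build:

```javascript
// Literal translation of the expression field: match() returns
// [fullString, hours, minutes, seconds], with unmatched groups undefined.
const duration = "PT39M26S";
const parts = duration.match(/PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?/);

// parts[2] = "39" (minutes), parts[3] = "26" (seconds);
// parts[1] is undefined because there's no H component here.
const totalSeconds = Number(parts[3]) + Number(parts[2]) * 60;
// 26 + 39 * 60 = 2366, matching the sanity check
```

In an n8n expression field the same arithmetic uses `.toNumber()` on the matched strings instead of `Number()`, but the logic is identical.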
I don't like the description column being that big, but I do think it's important that I can read it at a glance, so I'll resize the description, and rearrange the tags as well. That looks fine to me. Duration looks good. Small thumb, large thumb — if I copy this and paste it, am I going to get the thumbnail? I will. Wonderful. Got the iframe as well — this is just something you can embed on your website, which is kind of neat, but I don't need the whole thing, so I'll make this a lot smaller. Okay. So, yeah, now we have the database template, and if you think about it logically, we can just duplicate this — duplicate it over and over again, one sheet per channel. And if I go back here, we've actually done two things in one shot: with the append logic, we've automatically checked whether the video exists in the database, and if it does exist, we don't add it — we update it. So we've actually finished this first system a lot faster than I thought we would. Very cool. All right, now that I've tested this on one item: basically, any time you're building any sort of n8n system — or really any automated system — test on one item first, and once that works, test on two-plus items. This is sort of your order of operations, if you want to call it that. One is simple, because it's very easy to get up and running with a single example, like we did a moment ago. Then you test on two examples, and odds are, if something works on two examples, it's going to work on n examples, where n is however many you feed in. If it works on two, it's probably going to work on eight; if it works on eight, it's probably going to work on 3,894. That's just, you know, a programming thing.
Logically, when you go from one to two, you implement loop functionality, and that's the thing we have to verify. We figured it out for one, so why don't we try running an actual practical test with two? Notice how everything is pinned right now, though. So I'll go all the way back here and unpin — I just pressed P. And this Google Sheet row here — should I delete it? Yeah, I'll just delete it. Okay, and then I'm going to save — always save. Then, where it says limit, I'll set the limit to two... and actually, I'm realizing we can just set the limit here, right? No — here. There's no need for the separate node; that was silly. So what I'll do now is just do the limiting directly in the YouTube node: two. And I'm not going to test just one — I'm going to test all, because this is where the looping logic comes into play. Now it's adding, or appending, and it looks like we got two. Cool. So let's verify everything worked fine. What's the duration of this video? That's the one pretty complicated piece of logic I implemented, so let me double-check. It says 1,628, and I'm pretty sure that's right: 27 × 60, plus 08 — yeah, that's true, that is actually it. Cool. Yeah — I think we did it. We've verified the test. I guess there's one more test we need: if you think about it logically, we've tested that it works on one channel, so now let's test that it works on multiple channels. Okay, so I'm going to copy the ID I have over here and do it for another channel, and if I run into issues there, I'll figure them out. Whose channel do I want to do? Leonardo Gregarios, just because this guy's one of the nicest. Okay, there we go. So, copying this now — what am I going to do next?
Well, I'm just going to add it to my channels sheet and paste it in. If you think about it, I have my SOP, right? For every channel I paste in, I duplicate the tab, go over here, paste in the new ID, and then — let me just delete both of these test rows. Now, when I rerun this, logically it should grab both rows, get all the videos for each person, get each specific video by its ID, filter out all the shorts, and then append or update the sheet, going newest to oldest. Okay, I think this is his ID — let's give it a go. So it's reading the sheets: 2, 4, 4. Mathematically, looks good to me. Now it's updating all four... no, it didn't end up working. And why? It looks like we just dumped all of Leo's videos into my channel's tab. Huh — ain't that a metaphor for life. Okay, I think I know where this happened: the append-or-update is probably hard-coding me. I'm feeding in the JSON snippet's channel ID, and what I think I need is for this to change dynamically, right? So, right now we're feeding in this channel ID here... no, this should logically be working — this channel ID should change. Right? It should change. So let me just check the JSON of the entries. Channel ID here was UCBO-whatever — okay, cool, that's fine. This other one should be UCBO as well, because that's me. And this one should not be mine — should be UC8-something. Yeah. So that looks good. Is it not finding mine, maybe? Hm. Maybe I just copy this now and paste it in — is this the same thing? Yeah, this is the same thing as that. Not really sure where this issue is arising from. Well, there may be some built-in logic that prevents it from iterating — I've seen this happen before.
I feel like this has actually happened to me before, where n8n doesn't have the ability to do this, believe it or not. The way I solved it before was by creating a bunch of new Google Sheets, one per channel — but that's not going to work now, at all. So I'm going to have to find a new solution, and I'll have to do it live. What I'm going to try is an additional piece of functionality n8n has. To be abundantly clear, this is a bug — it should not occur. Logically speaking, we're feeding new channel IDs into the sheet node, and that variable should not persist; it should reset at the beginning of every loop. But for whatever reason, it's not. So we have to take a fundamentally different approach, and I'm going to do it using the Loop Over Items node. I want you to know that stuff like this is going to happen. What's important is that you don't freak out or get super emotional about your system just not working, because if you do, the likelihood that it continues to not work is much higher than if you don't. So here's what I'm thinking: for every item that comes in, we're going to loop over each individual item. The Loop Over Items (batches) node is basically a node where you get a loop route and a done route. The loop route is whatever you're planning on doing, and its output feeds back in. So for the number of items you feed in — in this case two, as we see — it'll run once and loop, then run twice and loop again. What I want to see is whether adding a Loop Over Items node changes anything here materially. So I'm going to go back, delete all four of these rows, and then see if I need to update the logic. I'm going to test this first.
Then I'm going to see if maybe there's some way to reset the data, because it seems logical to me that we should be able to. Okay, so we're still dumping everything into one tab, which is unfortunate. So let's see why. At least now we can actually, logically, run through both of the items that are fed in. Okay, so this was run one, which has two items — that's for me — and this is run two, which has two items for Leo. So the input should be the channel ID... oh. Oh, I'm actually hard-coding this now. My bad. We should actually be able to feed this in dynamically now, right? So maybe I actually screwed up here — maybe I wasn't using a variable. It's not over just yet; we might have already fixed this. Okay. Now, what I don't like doing is what I'm showing you guys here, where I'm constantly hitting the APIs over and over again. So my recommendation is: don't get yourself into this position to begin with, because if you keep hammering these APIs, it's just a matter of time before one of them rate-limits you and says, "Hey, you know, you've done this way too many times." If you think about it, we're making two API calls here, plus more downstream per item — it's something like four API calls total per run. Okay, I'm seeing a missing parameter here now. This is good — it means we're actually moving forward. Oh, what the hell's going on here? Do I have to remap all of this? I don't really know what happened there. I think that when I swapped in the snippet variable, it momentarily disconnected the field, and when it did, the mapped variables just disappeared. So that's not good — I wonder if that's going to happen every time. This took me a, quite frankly, stupid amount of time, which is annoying, but I'm just going to cut through it to save y'all a little bit of time.
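Conceptually, the Loop Over Items node described above is just batch iteration with the output fed back in until the input runs out. A rough sketch of that control flow — the names here are illustrative, not n8n internals:

```javascript
// Split items into batches and process one batch per pass. This is
// what Loop Over Items (Split in Batches) does: the "loop" route runs
// once per batch, and the "done" route fires when none remain.
function* batches(items, batchSize = 1) {
  for (let i = 0; i < items.length; i += batchSize) {
    yield items.slice(i, i + batchSize);
  }
}

const processed = [];
for (const batch of batches(["channelA", "channelB"], 1)) {
  // ...the per-channel work (fetch videos, filter, upsert) goes here...
  processed.push(...batch);
}
```

With a batch size of one, each channel's data passes through the downstream nodes in isolation, which is the point of reaching for this node when state leaks between iterations.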
Okay, I just ran this with a subset of the data — so that I didn't have to remap it all if and when it inevitably broke — and it worked. We actually got it: as you see, we have one here and then another here. So what I'll do now is just fill out the rest of this. Pretty stoked about it, though. I knew there'd be some issue, and I'm glad we got to work through it live. Okay, this has taken a fair amount of time, so as opposed to trying to work it out logically, I'm just going to paste the expression I'm currently using for that duration match directly into ChatGPT and say: "Building this inside of n8n. TL;DR: I'm processing an ISO 8601 duration code and trying to turn it into a number of seconds. Here's what I have above. Debug why it isn't working. It also needs to work even if there are no elements found." Let's try that. Hm, I see — well, that's very good. It's giving me a little snippet of code here. I don't know if it's right, but it does look very good. Yeah — that is the number of seconds I was looking for, wrapped in a little self-executing function, so it can be used directly inside an n8n expression. It's not as clean as a code block, but this lets me avoid using a code node. So, okay, I think I'll leave it there. This seems somewhat robust — I don't know for sure; sometimes AI code just blows — but it's enough for me to actually run the test, which is what I care about. And instead of worrying about whether it's perfect or complete, I'll just run the test and let the test tell me. And you know, in reality, your systems won't cover all edge cases — the idea is that they cover most edge cases. Okay, so let's test it. That's number one — that's my channel. And this should be Leo's channel now. Yes, looks good. I'm not seeing any tags on his videos, though. Why is that? Oh, wow — he must just not add tags. Oh, dude, you've got to add some tags.
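The generated snippet isn't shown long enough to copy here, but a parser with the properties described — tolerant of missing hour/minute/second groups, and small enough to inline — would look something like this (an assumed reconstruction, not the exact ChatGPT output):

```javascript
// Robust ISO 8601 duration -> seconds. Every group is optional, so
// "PT32S", "PT39M26S", and "PT3H" all parse; unmatched groups fall
// back to "0" instead of producing NaN, and a non-matching input
// (or empty string) returns 0.
function iso8601ToSeconds(iso) {
  const m = /PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?/.exec(iso || "");
  if (!m) return 0;
  const [, h = "0", min = "0", s = "0"] = m;
  return Number(h) * 3600 + Number(min) * 60 + Number(s);
}
```

Wrapped as an immediately-invoked function inside an expression field — roughly `{{ (function (d) { /* body */ })($json.contentDetails.duration) }}` — it behaves like the snippet in the video: pure expression, no code node needed.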
Just make a note to text him: bro, you got to add tags. All right. I'm sure he's going to find that pretty funny. Let's close this up and then... yeah. Okay, cool. So we've now done that first section. And if you think about it, that's actually all we need to do, because we just tested that it works on one, then we tested that it works on two. So we should now be able to do this on basically an infinite number, assuming that we don't rate-limit out and stuff like that. It's a problem that some people will have. Next up, we have to do a daily digest. What this daily digest is going to do is grab all the data inside of that database of ours. Okay? So it's going to list all of them. Then, for every sheet inside of our database, it's going to go through and get us all the videos. And then, for every video, we want to calculate the average. So for every list of videos — like this is a list of videos here, right? — we want to calculate the average number of views, and then we're going to use that to determine the multiple of the new video. And then if the multiple is over some threshold, aka the threshold that we define, which I think I'm just going to do at 2x... In multiple detection, if you think about it logically, there are a variety of different ways you can do this. You can use 2x, 5x, 10x, 2.5x. You could have it change with time. You can define it somewhere else, and you can do anything that you want, really. I'm probably just going to do 2x because that seems simple to me. And then I'm going to include it in some sort of daily email digest. Sounds fun. Okay, so let's build out the logic for that. That's going to be another n8n function or workflow. The way that I like to organize these is with tags — so, "n8n course" — and then I also like to title them the same way I used to way back in the day on Make.com.
I used to go title: YouTube trend detector, and update it. So I'd go "YouTube trend detector", and this one would be "daily digest". All right. So these two are separate, right? And I'm running this manually right now, but if you think about it, realistically you should be running this on some sort of schedule. So we should add the schedule trigger to this instead. And, you know, you could add multiple triggers. So now it's scheduled, technically, if I turn it on. The schedule I'm going to add is — let's just do one. Let's trigger at, I don't know, 6 a.m. or something like that, at minute zero. So basically now every morning at 6, assuming that I turn this on, this is going to run and then proceed through the rest of my flow and just, you know, get that first bit of work done, which is nice. And just because there's nothing on the done loop, I'm just going to add this here. This isn't the prettiest, but... well, yeah, I think that looks okay. Notice how this would have been completely unnecessary if there were not bugs in n8n. There were bugs in n8n which prevented us from doing this, just because some of the data persisted when it probably shouldn't have, but you don't actually need this loop over items. Maybe future versions will solve this automatically. Okay, so let's do the daily digest. First thing we need to do is grab all of the data in all the sheets, right? So this is going to be kind of tough to do. I don't actually know how we're going to do it. What are we going to do, ladies and gentlemen? All right, first thing we're going to do is grab all of the rows in the sheet. So I'm going to go over here to channels and just grab all the channel IDs. That makes sense, right? Talking to myself here, right? Okay, we're going to grab this. I'm going to test this. We're going to grab channels. Very cool. All right, I'm going to pin this. So now we're going to have some pinned outputs here. Now, for every channel, what do we have to do?
Well, we want to get all of the rows in the sheet for the sub-channels. So what am I going to do? I'm going to add my credential. And we may run into the same issue that we just ran into, by the way. So, YouTube trend detector database, right over here. The sheet that I'm going to feed in: I'm going to use the name, and then I'm actually just going to drag this in. This is going to be an expression. So what we should get, logically, is: we should ping this twice and then return four items in total, because every sheet currently has two. So one, two for me; one, two for Leo, right? So I'm going to test the step and we'll see if we get it. As always, I'm expecting a certain output, and I'm just comparing what I expect to get against what I actually get. Okay. The way that I like to see the data is using JSON. It's just easiest. So: "he quit his job", that's good. "Fix 90%", that's good. "He quit his job." "Fix 90%." Notice how this has run twice now on the same ID. Okay, so this is the same problem that we were running into before. Logically, we're going to have to find a way to solve it. So let me think. Where are we going to add this? We're probably going to add the loop here. Delete the "replace me", and then I'll delete this little route. What I'm probably going to have to do is loop over this. So first I'm just going to run this so I can grab the data from the loop over items node. Okay, let's just test this workflow and see what we get from here. We got a loop branch with one item with the channel ID. Okay, very good. Now that we have this, I can actually map this individually. Now I'm going to let this loop over this. And now, go. And let's see if this fixes it. Do once, twice. Okay. And now we have two runs. So the first run was me, the second run is Leo. Very cool.
So we've actually gotten everything that we need. How cool is that? All right, so where are we at now? We got all sheets. We got the videos in each sheet. Now, for each list of videos, what we need to do is calculate the average. How do we calculate the average? Well, in n8n, we're returning a list, an array. Okay. And in this array, we have views right over here. So we should very easily be able to determine the views mathematically, just by doing a little calculation. I don't know, is there like a calculate node? I don't think so. Do we need like a set node? Probably a set node, right? So I'm going to add a few fields. I'll add views here. No, we can't actually do this. What we need to do first, sorry guys, is aggregate this, I think, because this is currently many items. What we need to do is put it into one item. So I'm going to aggregate. Hm, it'd be really nice if we could just get the average. Okay, I think we might need a code node just to make it work. There's a million and one ways to do this. I think the code node is probably the easiest, because what we'll do is just aggregate all of this stuff. So: for (const item of $input.all()) — $input.all() just gets all of the items that are being fed into it. I think what I'll probably do here, first of all, is pin this output right here. Okay. Second of all, I'm going to go const array = $input.all(), and return array. I don't actually think this is going to work. Okay. No, it did work. Once we have an array, we're going to need to perform a mathematical function on that array. You know what? Why don't I just have AI do it, as per usual? It's funny. Every time I'm like, I'm just going to code this myself, I'm like, well, I could do it myself, or I could just have AI do it. So, each entry includes a views parameter.
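If you've never used `$input.all()` in a Code node, here's a rough sketch of what's going on. Inside a real Code node, n8n injects `$input` for you; it's stubbed here with made-up sample data just so the example is self-contained:

```javascript
// Stub of n8n's $input helper (assumption: in a real Code node,
// n8n provides this automatically and you would not define it).
const $input = {
  all: () => [
    { json: { title: "Video A", views: 3999 } },
    { json: { title: "Video B", views: 4913 } },
  ],
};

// The Code node body: $input.all() returns every incoming item;
// here we bundle them all into a single item holding one array.
const array = $input.all();
const result = [{ json: { videos: array.map((item) => item.json) } }];
// In an actual n8n Code node you would end with: return result;
console.log(result[0].json.videos.length); // 2
```

This is the standard way to collapse many items into one in a Code node, similar to what the built-in Aggregate node does.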
I want to get the average of each item's views, and then filter so that I only output items whose view counts are over 2x the average. Okay, we're going to generate new code here. You could also ask anything else; you don't have to ask this model specifically. Just wanted to see if I could do this easily. So logically, this is grabbing all of the input items. It's then mapping them. It's grabbing the item, it's getting the JSON, then it's getting the average views. So it's doing what's called a reduce function to get the average. This is unnecessary, to be honest, but it does it anyway. This is very proper: filteredItems. Cool. This is the multiple that I'm going to be using: two. And it looks pretty good to me. I'm going to return this now and see what happens. No output data returned. Well, that doesn't make any sense logically, right? It has to return some sort of output data. The reason why: logically speaking, if you have an average of two items, the average has to be higher than one item and lower than the second item, right? It kind of makes sense. So we should definitely have some sort of data here. It might just be because I need to re-loop this. I'm just going to test this workflow out and see, if I dump it like this, what happens. No, it looks like we're feeding in the items and then it's not really running the code, which blows. "Your current code doesn't work. Come up with a simpler way to determine the average of all of the items and match it against that average." Let's try this. And if that doesn't work, then I'm just going to ask ChatGPT. Consulting the guy that made all this stuff up. That's funny. Going to test this. I'm still not getting any output data. So I think logically there's just some silly issue here. I'm going to run this through ChatGPT. What did I ask it? Okay, it's giving me an interesting idea. I just got to copy the code here.
Okay, so we're now going to add a little bit of debugging logic, it looks like. Now I'm going to open up the console and test this one more time. It says the average is 4,456. Oh no, that actually looks okay. Oh, you know what it is? I think we're running this just once. Let's see here. Feed in two items. Feeding in both of my items. Then we're calculating the average views. So, what are we calculating? We're calculating my average views: 3,999 for the first item, then 4,913. So the average of these, logically, is 4,456. Cool. But no, it's not returning the items that have more. That's annoying. There must be some other problem here. I don't like it. Okay, so it looks like we are feeding in two raw items. Looks like all view counts are here. Total views, average views. Okay, looks like it's adding up the views. Hm, it still seems to be doing it weird. I really don't like using the code blocks, so I'm just going to cut that out and use the filter here. What I want to do is say views is greater than... Okay. I ended up solving this with a simple filter node instead of a code block. And then I fed in this expression, which probably seems pretty intimidating to you, but let me walk you through it. We grab the previous node, this Google Sheets node here. Then we get all of the items. In n8n, you can get an individual item using the $json syntax, or you can grab all of the items by referencing the node name explicitly and then using .all(). Then I'm using a function called reduce. What reduce does here is — well, it's an unfortunately complicated way of just calculating the average. It'd be really cool if there was just an average function or something like that. Maybe there is; maybe I'm just making this all way too complicated. Hm, I don't think so, though. Yeah, no, I think you have to do unfortunate code.
But what this is doing is reducing: it grabs the sum and the item, and then it uses this arrow function to just add them up. So this basically gets the average — sorry, this gets the total number of views, and then I'm dividing by the total number of items here. For instance, if there are 10,000 total views across two videos, the average is 5,000, which is actually pretty close to what it is. And then what we do is we just feed that into a filter block and we say, "Hey, is the number of views of this individual element greater than the average number of views of all of them?" Pretty straightforward, right? Pretty simple. If so, we return it, which is cool. And then after I'm done returning this, if you think about it logically, what do we have to do? Well, we want to accumulate all of these and then send them out in an email, right? So I don't just want to add the email here; I actually want to run through my whole thing. I'm going to add my email over here instead. But just for my own sanity, what I want to do now is test this filter on all of the data. So I'll go test workflow. Grabbing the data from the Google Sheet, doing the filtering steps. Cool, cool. And it looks like on run one there was one item kept. That makes sense, because out of two items, obviously the one above the average will be kept. And on run two, one item was kept as well. These would be our outliers, for instance. And now we have those two items accessible to us in the done branch, as we see here. What we want to do now is make an email delivering these. Again, there's a million and one ways to do this sort of delivery. I'm just going to do Gmail. So I'll say send a message. And we don't need to loop the done; just leave that over here. We do need to loop this, though. Okay. So then I'm going to move this lower, right here. This right over here.
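A plain-JavaScript sketch of the same average-and-threshold logic (the sample numbers are made up; in n8n the items would come from the Google Sheets node rather than a hard-coded array):

```javascript
// Compute the mean view count with reduce, then keep only videos
// whose views exceed MULTIPLE times that mean.
const videos = [
  { title: "A", views: 1000 },
  { title: "B", views: 1200 },
  { title: "C", views: 10000 },
];
const MULTIPLE = 2;

const totalViews = videos.reduce((sum, v) => sum + v.views, 0); // 12200
const averageViews = totalViews / videos.length; // ≈ 4066.7

const outliers = videos.filter((v) => v.views > MULTIPLE * averageViews);
console.log(outliers.map((v) => v.title)); // ["C"]
```

The filter-node expression described above does the same thing inline, roughly `{{ $json.views > $('Google Sheets').all().reduce((s, i) => s + i.json.views, 0) / $('Google Sheets').all().length }}` — the node name here is an assumption, and that version compares against 1x the average rather than a multiple.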
This is the simplest way I've found of organizing this stuff. Maybe it's annoying that we can't do one more. Oh, n8n. Why must you do this to me? Then we're going to add our credential. So this is the same idea as before: you just click create new credentials, sign in with Google. I've already created a bunch of credentials here, so I'm going to close this, and I'm just going to use my Gmail account number four. And hypothetically, I'm just going to send this over to my personal email. And then I'll say: daily digest, trending YouTube videos. Email type: HTML, over here. You don't have to do HTML. I'm just going to copy all this and feed it into AI. So let me see if I can just go from here all the way down to here. I'll say: above is a bunch of data on trending YouTube videos. Format this into a simple HTML email I can send. It's part of a daily digest. Oh, sorry — all I care about are, let's see here, the title, the channel, the thumbnail, the video duration, and the multiple. Right, we should totally do the multiple. Okay, well, let's just keep this for now and then I'll add the multiple afterwards. The multiple is really cool to have. Now it's going to format this as an email. Hopefully. I'm just getting network connection loss; I'm not entirely sure. It might just be my hotel internet. Okay. Okay, here we go. Here we go. I don't know what this is. What is this supposed to be? Huh. This is hardcoded, right? I don't like how this is laid out. Okay. Daily AI video digest. Cool. All right. So, missing title. We'll put the title here. Channel title. Duration over here. Okay, cool. We'll do the thumbnail here. Okay. Uh, hold on. This is just one. I meant to say: include the variables. Let's do that. That way I can very quickly find and replace all the variables in a moment. Cool. Going to copy this now. Paste this in. Oh no. I don't want the each, man. Ridiculous. Yeah, we don't want each. That sucks.
I don't... can we actually do that? Maybe we can. Fascinating. That would be pretty cool if we could. I don't know if we can, or else we don't know. I could do the logic here, but actually, let's just send one to start, and then I'll worry about everything else later. Okay, we'll just send one to start. We'll go expression. Let's do this. Image source, we'll just do the large thumb here. Title of the video will be... I just want to get something on the page. Basically, I just want to have an email sent so that I can very quickly and easily identify whether or not it's a trash template. I don't actually care about all of this data too much. Duration — this kind of sucks, but seconds, I guess. And I'm just going to remove this each thing. So we should just pump out one of these now. Okay. And then I only want this to run... this is running twice now. Why is this running twice? Hm. We need to aggregate these, is what we need to do. That's my problem here. I'm just going to aggregate all item data into a single list. It's unfortunate, because we actually have to run everything here. So let's just do this: test workflow, aggregate. That should now be my output. Cool. Very cool. And now I'm probably going to have to remap this. Yes, I will. It kind of sucks. Oh well, it is what it is. Let's just do the first. Okay. So we'll go source, and then I'll go large thumb. Then here, under title: data[0].title. Title, channel title. Cool. And duration. Awesome. Now, if I test this, it should send an email. So I can go to my email and just see how bad the formatting is. Usually the formatting is not the best. Okay, looks pretty good. Couple things that I don't like here. I don't like the size of this. Can I make this smaller? Probably. We can probably make this smaller just by changing the... yeah, we're using the large thumb here. I'm going to try using the small thumb for one. What else don't I like?
I don't like the fact that it says this email was sent automatically with n8n. So in n8n, you can change that: just go to "append n8n attribution" and turn that off. Looks pretty okay, honestly. "Fix 90% of your AI agency problems in 30 days." Okay, let's try it again. We go here. Oh, nope. That's pretty blurry. I don't like that. Yep, that is a little small. Can we change the image source class, thumb? Oh, you know what? That's the problem here. That's the problem. Let's just change it to 240 pixels, and then we'll still use the large thumb, but we're only going to be at 240 pixels. Okay. And then what else do we really want? I guess we just want all of them. So logically, how do we do that? What we have to do is, basically, for every item inside of aggregate, we just have to generate this. Why don't I just paste this in again and see if two emails look okay? This looks fine now because it's at 240 pixels, right? Yeah, it's not the best just to have them all laid out like this. Kind of wish we could go lengthwise. If I feed this in here, and then I say — just feed this into AI, let's do GPT-4.5 — I'll say: this is an HTML template that's supposed to return a nice-looking, minimalistic list of high-performing YouTube videos. I put two as an example, but it should scale to infinitely many. Right now, this looks poor. This looks bad because they're stacked on top of each other, and I don't like the formatting, etc. Fix this so it looks nice and clean and the videos are side by side in some sort of clean, minimalistic but sleek grid pattern. Okay, there you go. We're going to see how that performs. We'll keep the two, and then after, I'll deal with the logic on generating multiple little video grid things. The thing is, emails just inherently lack the ability to do some cool formatting, which sucks. So — oh, sorry, I didn't mention this. Sorry, sorry. This is an HTML email, so it needs to be formatted in light of that. Right.
Emails are formatted a little bit differently than websites, so it needs to be tables instead. And it doesn't look like it's a table. Okay, cool. Anyway, it's going through and it's now creating me this little digest, which is nice. Okay, let's expand this little code block now. Hide this sidebar. And then what I want to do — oh, we can actually preview the output. No way. Got a little HTML preview in here. Doesn't look very good, not going to lie. Oh, don't tell me it's using each. Please don't use each. Damn it. It's totally using each, isn't it? We just said no each. Yeah, it's doing each. That sucks. We just need to go tr now, I think, is this. Okay, cool. That looks much better. So now we have basically this nice infinite layout. So if I added more, we'd be able to do more. Now I just have to generate the code. 1.7. My bad. Where does it do the 1.9? Does it have a 1.9? That'd be much... yeah, 140. What's the width here? It's kind of like 170, probably, maybe 180 — need 20 on both sides, right? Try that. And then I'm also going to change this a little bit. So we'll say... cool. Looks fine. Nice. All right. Cool. Cool, man. Nice. I'm liking this. Uh, this array. Why is the array the same, though? Oh, that's a problem, man. Why is the array the same? This array should not be the same. We should have different items here. I mean, it looks — it feels a lot better, for sure. Let's test this now. There's still a really weird cutoff here, man. That's not right. Thumb container, object-fit: cover. Dimensions. What are dimensions? I don't understand. What are my dimensions? Height and width. It is 100%. So where is this being applied? Can I do like 160? Oh, 170, 180, 160. I feel like it's 155. Okay. You know, it's probably 155. So let's go back here, and then instead of 155, send this, so that it's not going to be super skinny again. Nice. Oh yeah, that's perfect. Got the whole thumb, baby. That's what I'm talking about.
Okay, we can just set the channel name and duration, whatever, on the same line, can't we? Mhm. Okay. So, let's just get a list of things we want to do now. Is there padding over here? I don't know why there's padding over here. Just remove the padding. Okay. So we've now verified that this works on two. I was just feeding in examples of the exact same code snippet, aka the same thumbnail and stuff, but now I want to make it work with different thumbnails. So I'm just going to jump in and do a little snippet of code to handle this for me. First, I'm just going to trust that this runs on data. At least that's what I'm doing right now with the pinning, but I actually want to have it run on live data. So I'm actually just going to test this workflow and see what's going on. Okay, looks like we've now sent one email, and I believe it's going to be the same thing twice, right? Okay. No, no. We're actually getting the data directly from the source, so that's actually fine. Looks like the sizing is a little bit off. I don't really know entirely what's going on with that, to be honest, but yeah, these are actually the trends. So, to be honest, it kind of already works. One big thing that we don't currently have: I just want the multiple. So that's one thing that I have to do. So how am I going to do the multiple? Well, I guess I could just put the multiple right next to the title, right? So I think that's what I'm going to do. I'm going to go into the HTML template here, and then where I have the title, which is right over here, I'll add a span for the title. Then I'll also add the multiple. And for the span, I'm just going to go style equals font-weight: bold. Then over here, I should be able to come up with a little multiple. Now, what is the multiple? Well, the multiple is going to depend on this. So I guess I can just copy all of this.
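Since the multiple is just this video's views divided by the channel's average, the expression inside the template boils down to something like this (the field names and the two-decimal rounding are my assumptions based on the walkthrough, not the literal expression used):

```javascript
// The "multiple" badge: this video's views relative to the channel
// average, rounded to two decimal places for display.
const views = 4913;
const average = 4456;
const multiple = Math.round((views / average) * 100) / 100;
console.log(multiple); // 1.1
```

If you want a fixed "1.10"-style string instead of a number, `(views / average).toFixed(2)` does that; in an n8n expression either version can be inlined as `{{ ... }}`.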
This is not at all clean. I'm going to be doing the calculating directly inside of the template, which most people do not recommend. Okay. But still — f it, we ball. Try not to swear as much. Let's think about this. What are we doing? I know what I'm going to do, if you think about it logically. What we need is for this filter to output an additional field. We need to drag all of these in. Can it just automatically map? No, I can't. Okay. So, yes, I can. That makes sense. What we want is that average, right? So I'm going to go to the filter, and I'm going to calculate the average. Then here, I'm also going to say average, and feed that in as an expression. So now we're going to get the average, and we're going to get this. What this means is that when we actually feed back to the loop over items, it's going to have everything that we need. I'm just going to undo this and test it for myself. It's going to include the average, which we can then use to find out the multiplier. We get everything we need, plus the average. Wonderful. Now we connect this to the aggregator node. Now I can use this average to — well, sorry, I guess I need to run this. Kind of annoying, but it is what it is. We actually have to run all this with the aggregator. My bad. So we're going to hit the APIs a bunch more times. Let's see if we get a rate limit issue. Nope. I'm just too crazy with it. We get the average. Wonderful. We're going to pin this. Now we're going to connect this. And now that we have this, we can go through the HTML template, somewhere over here, and establish that average. So let's do the math. What I'm going to want is — is this going to work? Does that work? I don't know. I don't know what I've been told. Yeah, I don't think we have the ability to do each. This doesn't really solve my problem. Oh. Oh, it does. It does. Okay. Okay. Wait. I'm aggregating here. I aggregate up there, right? Wait. What happens if I feed multiple items into this HTML node here? What happens?
I don't know. Let's give it a try, man. Why the hell not? Um, we're not getting the data anymore. Are we getting the data anymore? JSON data[0].title, right? Oh, yeah. Yeah, we can just do this, right? Delete that. Cool. Let's delete the TDs. No, we just get rid of that, too. Okay. So, where's this video table? Am I doing a table per video, or what? I think we just deleted this. Let me delete all that. Delete that for sure. It says tr. We can probably just output a bunch of these, right? Let's do that again. Um, why is it doing the same thing? Because I'm returning the same thing. Well, I mean, this is what I wanted. So now I just concatenate them, right? Yeah. So I just go here to the video table, cut out of this. Now what I do is this... no, I do... yes. Right. That's what I'm talking about, baby. We mapped the hell out of that, man. Then, um, split. Wait. Then join. Then split with nothing. Oh, that's all I do. That looks good to me, right? I don't know why we're getting two of the same outputs, but it should be okay. Oh, we sent it twice. Maybe we just run it once. Probably enough. Execute once. Yeah. Sorry about that. Oh... where the hell's the thumbnail now, man? Why aren't we getting the thumbnail? Yeah, there's the data right there. Yeah, it's not rendering. That sucks. Why is it not rendering? Nope, that doesn't work. Why the doesn't that work? Isn't that the whole idea, man, that you can insert freaking variables? Like... holy, this is brutal. I really want to do the HTML template thing, if I could just use code to do it. See, we may have to just fix all this, man. I got all the stuff out. We need to delete this, right? Well, it's now inputting the URL, which is cool, with the X at the end of it. So, could I concatenate? Nice. That's cool. Source is this. Then I want to concat one more. Is that going to work now? Oh, for sakes. Come on. Chop. Nice. That actually did work. Cool. Very cool. All right. So, what are we doing over here?
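The concatenation trick being fumbled through here — one `<tr>` chunk per video, joined into a single string that gets spliced into the email template — can be sketched like this (the field names are assumptions mirroring the sheet columns, and the 240px width matches the thumbnail size chosen earlier):

```javascript
// Sample items standing in for the aggregated videos.
const videos = [
  { title: "He quit his job", thumb: "https://example.com/a.jpg", seconds: 933 },
  { title: "Fix 90% of your problems", thumb: "https://example.com/b.jpg", seconds: 512 },
];

// Map each video to an HTML table row, then join with an empty string —
// effectively the "join" step described above. The result is one string
// containing all the rows, ready to drop inside the email's <table>.
const rows = videos
  .map(
    (v) =>
      `<tr><td><img src="${v.thumb}" width="240" alt=""></td>` +
      `<td><strong>${v.title}</strong><br>${v.seconds}s</td></tr>`
  )
  .join("");
```

In an n8n expression this same map-and-join can be written inline against the aggregated items, which avoids hand-pasting a row per video.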
Video table, thumb container, thumb. So let's use a class of thumb. We'll go source, class equals thumb. Okay. Oh yeah. Okay. And now that we're all done with that, what are we doing here? Okay, we now send one item. Hello, email. My life. What the hell's going on? I think I know what it is. I had a good thing going here for a bit, like 15 minutes ago. Give me that email template, man. What were we feeding in here? Image source, large thumb, container on the outside. Are we still doing that? Source, thumb container on the outside, right? It's image source equals thumb, concat, class thumb. That's good. I don't see any issues with that. So why is the email coming out all... A quick HTML checker online. Please, go over here to... yes. Mhm. Oh, that's useful. Come on, man. That's just not nice at all. Why would you break on me? Um... container, right? Video table, right? Remember earlier when this was working fine? Not that one. Okay: table, tbody, tr, thumb container. So it's tr, td, div — tr, td, div, right? Looks good to me. But for whatever reason, when we're turning these on, they break. And also, we're running the same data over and over again. Why is that? We should have new data, right? Like, not seeing any output data on this branch. So why is this going: two, get four, filter in two. And for the first round it's 90%. So I don't fully understand what's going wrong here. Something is, though. We pumped out two channels. Good. For both channels, what we do is loop through the Google Sheet. Looks good. Then the filter: filter keeps one, discards one. So now we have two items left. The two items on the left side — that looks good. Right side — that also looks good. Loop one. Yes, we are actually getting them now. Wonderful. We're now just going to create a new ChatGPT template entirely, one that is much better looking than this. Let's go to chat, go over here, change this to 4o. First string is my HTML template.
I am creating a simple daily digest app that includes trending videos via HTML. Above are HTML templates that are sent via email. Wrap the HTML in the second... Still looks like... come on, man. What is going on with these black bars? Okay, just crop the images in on both sides. Height should be — what is the height right now? Grab the height of this. Grab the height. Where's the height? Care for the demo? Probably. Sorry. You would have thought that San Francisco would have better OpenAI access, huh? Apparently not. Oh my goodness. It's good, but that's too much. Okay, so here's what we're going to do. I'm going to find the specific snippet of code where it actually cuts into my freaking things. Okay, so height: 155px. That's way too small. We're going to double it: 310. Boom. Is there anything else that's 155 here? No, this is good. Supposed to be good... but yeah, it's too much. Okay, multiple: undefined. So we also need the multiple. Where's the multiple? Average? So where we get the multiple, what we're going to do is concat $json.views divided by $json.average. Okay. All right. This is going to be it. Oh yeah, sorry, we need to round it. How do we round this? Probably... okay, I'm going to run this one more time in just one second, and then we should be good. Then instead of two or three, why don't I add — well, we should probably do like five or something, right? Should probably run this once before I start saying stuff and have it screw up on me during the demo. Let me just make sure that... can I still see my mic and stuff? Yep, no issues there. Looks good. We'll run it now once. Clean as hell. Nice. That's perfect. Multiple: one. Oh god. One more. Y'all ever done this? Okay. Anything around decimal places? Okay, we just need two. You've now built a trend detection system that automatically identifies viral content opportunities, and that's pretty cool to me. Next,
LinkedIn Customized Outreach System
we're going to build a sophisticated LinkedIn AI outreach system that creates targeted Apollo searches using natural language. We're also going to enrich prospects with detailed profile data, generate personalized icebreakers, and safely send connection requests through automation. Lots of agencies — LinkedIn lead gen agencies, AI automation agencies, content agencies — depend on LinkedIn, and being able to automate that process saves hundreds, if not thousands, of dollars per month. So the system is going to handle the entire LinkedIn lead generation flow, from prospecting all the way to outreach. You can charge $2,000 to $5,000 for the system, depending on volume and the number of accounts you have, because it technically automates what used to require a full-time position. Okay, so here's a demo of the system from start to finish. We start with a form that I fill out, which I'll show you in a second. That form basically asks us to define the search parameters in natural language. So I get to say: I'm looking for creative agencies between one and a thousand people that are in the United States. And it will actually create an Apollo search URL for me completely autonomously. We then generate a search URL here, then run an Apify actor. I'm setting a limit node here. For those of you that don't know, a limit node is just sort of a testing node: it allows me to set lower limits so I don't overwhelm an API. We're then doing a personalization step right over here. Then I'm adding this to a Google Sheet database. You can find this Google Sheet database right over here. It's very simple: we're just logging the ID of the LinkedIn account, the first name, the last name, the full name — sometimes I like to have that — the LinkedIn URL, the title, the email status, the photo URL, and then the icebreaker.
And then finally, we aggregate all of that data so that I can send an API call to a tool called Phantom Buster. I'm going to cover all of this in a second, but for now, let me just show you guys what this looks like in practice and what the end result is. I'm going to test the workflow. It's going to ask me to define my audience type in plain English. So I'm going to say: I'm looking for creative agencies of around one to, let's say, 100 staff across the United States. I want the decision makers. Then I'm going to click submit. Now that I've done this, what's happening is it's going through and generating a search URL. The way this works is that the service I'm using takes as input a giant search string, and I'm having AI generate that search string. The search string is ultimately what allows us to do the search, which looks just like this. I'm going to copy this over. If I paste this in, it will actually go through and get decision makers that are within my custom audience, in my case 967 people across the United States. If we proceed with this, it will then run what's called an Apify actor; that's the service we're going to use to scrape this group of people. That Apify actor I can find back over here. What it's doing is identifying who these people are, extracting their email addresses completely autonomously, and also getting me a bunch of additional data on them. After we're done with that, we pass through a limit node. We have an AI model here, GPT-4, that goes and creates customized icebreakers for the connection requests. And then we dump that into a Google Sheet before aggregating it and triggering a Phantom Buster agent.
Phantom Buster is the tool we're going to use to grab the data from this Google Sheet right over here and then actually, physically, send our LinkedIn connection requests over here. What that looks like on their end is we are now running this Phantom Buster "auto icebreaker connect," and it's actually going through and simulating real human activity in order to send the message and the connection request. Finally, as this proceeds down the list, we'll have the actual data right over here alongside the specific status of whether or not each one has been sent. And so we have actually gone through and sent a bunch of connection requests at various times of the day to these various people using the system, and it was all done 100% automatically. So I want to make something super clear: as of this moment, I've not actually built the system yet. I wanted to show you guys what a live, real build process looks like from start to finish, by somebody that actually does this for a living on a daily basis. I think right now on YouTube it's really fancy and popular to put a finished product in front of people and say, "here's a system, here's how to put it together," but people don't actually show what the live development process looks like. In reality it's filled with a lot of detours, a lot of ups and downs, a lot of guesses that don't actually end up panning out. And I want to show people how to actually build systems that make people money; I don't just want to show people a finished, sanitized version of it. So that's why you guys are going to see me do it all from scratch, and that's why I'm structuring this video this way. It's very important to me not just to show somebody a picture of the Eiffel Tower and then say, "Hey, now that you've seen the picture, you know how to build it, right?
" It's like, "No, I actually want to show people the building process, the schematics and the diagrams," if that makes sense. Because, for the most part, that's my audience. So yeah, this is the road map at this point in time. Basically, what I'm thinking I'm going to do is start by scraping Apollo leads using Apify. Then I'm going to enrich the leads with personalizations, and then send them to Phantom Buster for LinkedIn DMs. Now, if you don't know what any of these platforms are, I'll explain them right now. Apollo is basically a big database that allows us to get a bunch of information based off search filters, like "dentists in the United States with 1 to 50 staff members." The issue with Apollo is that it's a very expensive database. So instead of getting leads directly from Apollo, I'm going to use this tool called Apify, which is kind of a scraper. It allows us to plug in an Apollo URL, and then it goes in and actually scrapes the HTML of the page to find us the leads. So it's kind of a hack, but this is basically what everybody's doing right now to scrape Apollo. And the reason everybody's saying that's okay is because Apollo is just scraping LinkedIn Sales Navigator. So it's kind of like scraping the thing that scrapes the place that scrapes, you know? Anyway, so this is our scraper; I'll show you all these platforms in a second. Then OpenAI is obviously our AI tool. The reason we're using an AI tool here is that we need to personalize the messages we're going to be sending to people. After we find the people, we're also going to get a bunch of information about them: where they live, their interests, their job titles, and so on. If you feed all of that into AI, you can have it write something that seems pretty customized.
Like, it's not "Hello, dear person. I would like to sell you stuff." It's "Hey Peter, saw that you went to, I don't know, the U of A. That's super cool. I love that you did X, Y, and Z, and this may be totally out of left field, but I thought we should connect." Something like that, right? Obviously I'm going to write it way better, but just to give you guys some insight into the process, that's basically what everybody is doing right now in any sort of cold outreach. Anyway, after we have the personalization, all the lead data, and the LinkedIn profile URL (basically after we've done these first three steps), what we need to do is send the outreach. I'm going to be using a tool called Phantom Buster to do that. And finally, we need to do it on LinkedIn, so I threw LinkedIn in over here. But let me make this clear: this is a very short moment in time in which you can use all of these tools together in the way I'm about to show you. Because we have access and availability to these tools, we can do the really cool thing I'm about to do. If these tools didn't exist, you could still do it; it would just be way harder and a lot more work. So I prefer to use pre-made tools wherever possible just to expedite my workflow. That's sort of my guiding principle as somebody that does AI and automation. Okay. So, Apollo looks like this. As you can see, it's literally just a database of people, with filters on the left-hand side. So, hypothetically, let's say I want creative agencies. I'm just going to type in "creative agency" as a keyword. This is not the most effective way to do this, by the way, but I just wanted to show you guys what it looks like. And then under job titles, maybe I'll go owner, CEO, founder, partner.
So I'm just typing in titles of the people I'm looking for, right? Okay, anyway, I'm just going to leave it at that. And what do I end up with? I end up with 844 creative agencies in... I think I put United States here somewhere. Maybe I didn't. Oh yeah: location, United States. Cool. So that's really all we're using Apollo for. I could go into more detail about it, but I'm not going to for the purposes of this; essentially, we can generate a list of people. And once I have this list of people, the question becomes: how do I actually extract something meaningful from it? Like, I don't know, an email address? How do I get a LinkedIn profile URL? How do I get their phone number or whatever? Normally I would export this in Apollo, but that costs a ton of money. So instead, we're going to a tool called Apify, which is basically a big library of scrapers that people have put together, and we're going to find one that allows us to scrape all of these. I'm just going to run a proof of concept first, before I even build the automated system, and run through this whole thing manually. So let's do it. Let's search Apollo, pump that in there, and I'm just going to use this one here. I think I've used this one many times before, so I just want to verify that I have access to it. Yes, I do. Okay, so I'm just going to paste this search URL in here; this is just how the tool works. It costs $1.20 per thousand leads, so if we want to scrape 1,000 leads, it costs $1.20. I should note that you can only do about 100 to 150 cold LinkedIn connection requests per week per account. So you can think of it this way: $1.20 will give you enough leads to run a LinkedIn campaign for a whole month, at least in terms of leads. You obviously still need to pay for the rest of the software platforms.
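To put those numbers together, here's a back-of-the-envelope sketch using the rates quoted above (which may well have changed since recording):

```python
# Scraping cost vs. LinkedIn's weekly send ceiling, using the figures above:
# ~$1.20 per 1,000 scraped leads, and roughly 100-150 cold connection
# requests per account per week.
cost_per_lead = 1.20 / 1000
requests_per_week = 125          # midpoint of the 100-150 range
weeks_per_month = 4

monthly_sends = requests_per_week * weeks_per_month       # 500 requests
monthly_lead_cost = monthly_sends * cost_per_lead         # about $0.60
print(f"{monthly_sends} connection requests/month, "
      f"~${monthly_lead_cost:.2f} in lead-scraping costs")
```

In other words, lead acquisition is effectively free here; the real monthly costs are the Apify, Phantom Buster, and n8n subscriptions.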
But anyway, here we pump in some search settings: 500 search records, get work emails, get personal emails, and so on. I'm going to click "save and start" and let's just see what happens. Okay, we're running this manually; we're not doing any sort of API yet. I'll put the system together in n8n after. Okay, cool. As we can see, it's saying it's found 100 results. That's cool. Now it's getting some more. If I go to this output tab, you can see the actual results. Pretty neat, huh? We're getting all these people's data. And oh man, are there a ton of fields. I'm not going to go into detail on all of them, but I'll just show you that you can run these searches basically for free, which is pretty cool. We just live in a pretty specific point in time where you can actually do that. Okay. Anyway, I'm going to manually export this as a CSV. Why? Because I just want to visualize it in Google Sheets. And now I'm thinking: once I visualize it, I can actually go through and use AI to do some stuff. That's my thought process; I'm just trying to narrate it live so we can see what's going on. I've built out many similar systems, so I kind of have a feel for where things are going. I'm going to import now, go to upload, and drag and drop the file I just exported. The reason why is I just want to visualize it. And I'm going to go "append to current sheet"; I just changed the title and don't want to have to redo it all. Okay, cool. I'm going to hide these columns so I don't expose every single person's email address. But as we see here, we now have a big list.
We have city, country, departments, email domain, email status, employment history, and, you know, creative agencies. Obviously, all of these companies have the term "creative agency" in their title, which is great because I'm looking for creative agencies in this hypothetical. Founder and CEO, right? Then we've got a ton of other columns, too. I think Apollo just exports all of the fields about their job history, which is why it's so long; if people have had like ten jobs, it'll actually export all of them. Good god. Okay. But the one we really want is the LinkedIn URL, which is this one right here. And as we see, we don't get the LinkedIn URL for everybody. But how long is this list? It's 101 rows. I'm just going to scroll up and select all of these. It looks like we got 66, I guess counting that one too. So we got about 66 out of 100, meaning roughly 66% of these come with LinkedIn profiles, which is great. If I go down to email, I bet we probably get a ton of emails as well. You always have to check the source data, which is why I'm doing this. We got 77. So we actually got more email addresses than LinkedIn profiles. That's really interesting. But anyway, I'm sure you can imagine that in any outreach campaign you run, you could send emails and do LinkedIn DMs, right? And that's kind of the golden goose: if you hit a person on more than one platform, that's obviously ideal. Today I'm just going to be building a LinkedIn system that does that outreach, but I want you guys to know that I've built these omnichannel (apparently that's what they're called) scrapers and systems a ton of times.
There were no major issues at all, and if there's demand for it, I could show you guys how to combine the two. Okay, so now we have a lot of stuff, right? We have profile fields, we have everything. It's great. So the question is: where do we go from here? Well, going back to our little road map: now that I've verified we can scrape Apollo leads using Apify manually (this little blue check mark is going to mean "manually"; the green one will come after I've done it automatically), we need to enrich the leads with personalization. What is personalization? We basically need to write a really small snippet that we can stick at the very top of our LinkedIn connection request, so that when somebody gets it, there's a short little message that says something like, "Hey Peter, how's it going? Love your stuff and really want to connect with you." That's what we're looking for, something like this over here. Hello, Fahhem, Bernard, and Anna; hopefully you guys appreciate the views. Actually, let me show a little bit more here so I can see if I can find an example. You see this one from this lovely dude, Enel, who I think is in my... yeah, he's in Maker School; I think I recognize him. He said, "Hey Nick, I just joined your community at Skool. I'm excited to start this journey." Okay, so we basically want to personalize this just like Enel did. Obviously you're not going to be able to say that you joined my community, but by doing this, there'll be a much higher conversion rate on the back end. People are going to be a lot more likely to actually click the accept button if they see a message. A lot of these other guys have joined my communities and stuff like that, and that's fine. But how much more likely am I to accept Enel's request because I see that he's written me that message? A ton.
That's the same logic we're going to be using. All right, so where are we? We just need to determine that we can personalize this. But I know that LinkedIn has specific limits on how long these can be. So: "LinkedIn connection request character limit." I'm just going to Google this really quickly. And it looks like we have a character limit of about 300 characters. That's actually pretty small, eh? So, okay: 300 characters to words. Let's see how long that is in words. It's between 42 and 75, so probably about 50 words or so. So we've really got to make sure our personalization snippet is super short. Okay. Anyway, let me go to GPT-4o here and define a little prompt. "You are a helpful, intelligent writing assistant." You guys might not be able to see all of this here just because I have my face sort of covering it; I wonder if I can make this smaller. No, I can't. Anyway, trust me when I say this is going to be the most banger prompt of all time. "Your task is to take as input a bunch of LinkedIn profile information of a user and then generate a very short, very punchy icebreaker that I can use as a variable in the introduction of my connection request." So I'm just asking it to do stuff like I'd ask a staff member. To be honest, AI is at the point now where it's intelligent enough to basically fully understand the context. If you go back to my previous videos from a year ago, things have changed a lot; now you can just ask like you would ask anything of anybody. Then: return results in this format. I'm going to have it return JSON, JavaScript Object Notation. I know this may seem complicated to some people, but this just allows me to automate it later, and I just want to verify I can do this. So: "Return your results in this format: icebreaker: icebreaker goes here."
"In order to ensure icebreakers are punchy and high quality, make them follow this template: Hey [name]. Love [thing about them]. I'm also into [other thing / plausible tie-in]. Thought I'd connect." Okay, now I'm just going to check the length of this. I'm going to go to a website called WordCounter. This one's 12 words, right? That's easy. And I think it's actually going to be even better, because if I scroll down here, you see how this message is basically also about 12 words or so. So this is going to appear right before the "see more" badge. I think it's going to be great; this is going to be super valuable. So that's what I'm going to do. I'm going to add this in, and then I'm going to say "LinkedIn fields:". Now I'm going to give it an example of the data that I want it to personalize based off of, and then see how it performs. Assuming all of that is good, I'll pump it into n8n in one shot and it'll be perfect. And by the way, this is a good opportunity for me to talk a little bit about why I'm doing all this stuff manually as opposed to going straight into n8n. The reason is that I'm sure I could make it look really sexy and clean if I just did it all in n8n, but that's not how I actually build systems. I'll start by doing it manually at least once, just verifying that it kind of looks the way I think it's going to look and works the way I want it to work. Assuming it does, then I pump it into n8n and work through the automation bits, because the way I see it, there's the use case, and then there's the automation of the use case. Okay, so that might provide a little bit more context, and hopefully that makes things clear. So, we just need to feed in a bunch of fields that would be relevant.
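Since the prompt asks the model to reply with JSON like {"icebreaker": "..."}, the downstream step just has to parse that reply and enforce LinkedIn's roughly 300-character cap on connection notes. A minimal sketch (the function name and error handling are mine, not from the video):

```python
import json

LINKEDIN_NOTE_LIMIT = 300  # LinkedIn's connection-request character cap

def parse_icebreaker(model_reply: str) -> str:
    """Pull the icebreaker out of the model's JSON reply and length-check it."""
    icebreaker = json.loads(model_reply)["icebreaker"].strip()
    if len(icebreaker) > LINKEDIN_NOTE_LIMIT:
        raise ValueError(f"icebreaker is {len(icebreaker)} chars; "
                         f"max is {LINKEDIN_NOTE_LIMIT}")
    return icebreaker

reply = '{"icebreaker": "Hey Danielle, love the agency-building journey. Thought I\'d connect."}'
print(parse_icebreaker(reply))
```

In n8n you'd typically get this for free by enabling JSON output on the OpenAI node, but an explicit length check is still worth keeping, since a too-long note will fail at the Phantom Buster stage, not here.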
So what are we going to do? Obviously we're going to need the name, right? I'm just going to grab Danielle, who is row number four, and go "Morgan" here. Then I'm just going to dump it all in plain text. Okay, let's go "Fort Lauderdale," like this. That's probably pretty relevant. "DM Creative Agency": probably pretty relevant. "Founder and CEO." Oh, so maybe I'll go "Founder and CEO at DM Creative Agency." Cool. What else is interesting and unique about her? Employment history. I guess I could include it, but let me see if there's anything else that might be a little better. I'm just going to drag this all the way to the right. Florida, huh. Okay, so really I don't have too much information here; I really only have the company name, though it looks like there are a bunch of interests, or sorry, keywords, about the organization. I might be able to feed those in. Okay, well, in that case I'm going to have to feed in some of the past employment history, right? That makes sense. That way my outreach can say something like, "I like that you went from doing sales at whatever to being a founder of your own company; that must be interesting," or "I did something similar," or whatever. That's basically the vibe I want to go for. "Previous experience:" and then I'm going to say "Outlast Eyewear." And then, sorry, I know I'm jumping around a lot here, but then I'll say "account director at..." man, I cannot get these tabs right... "at Red Ancy." Okay. And now I'm going to have it generate the icebreaker. "Hey Danielle, love seeing your journey from regional sales director to founder and CEO. I'm also into creative leadership. Thought I'd connect." That's okay. That's okay. I don't really like the usage of the keywords here: "regional sales director."
I don't want it to use the variables verbatim, because I want to imply that I've actually read through the profile. So either I'll lowercase them or I'll paraphrase them. Okay: "make sure to follow this template." So what I'm going to do here is zoom out a bit, go over here, and edit the prompt around "thing about them" and "plausible tie-in": "Never use the exact variable information provided in a LinkedIn field. Instead, always paraphrase. This makes it seem human-written instead of just an AI or automated message." Let's do that. Okay, I'm going to delete this and run it one more time. And I'm also going to provide a little bit more context now, because I don't like the result. What we had was, "Hey Danielle, love seeing your entrepreneurial journey with DM Creative Agency. I'm also passionate about turning vision..." It's funny: DM Creative Agency. I guess it's her company name, but I'm about to DM the hell out of this person. "I'm also passionate about turning vision into reality. Thought I'd connect." That seems kind of weird; I don't like that. So what I'm going to do is make it super incredibly punchy: "Also, make it super short. Don't say stuff like that or anything like it. Be extremely laconic. Laconic and Spartan." Okay, let's try that one more time. Well, let's try that a couple more times until we figure it out. Cool, yeah, this looks pretty good, right? "Spotted your creative agency journey. I'm also into entrepreneurial ventures. Thought I'd connect." I mean, it's not the best in the whole wide world, but that's much better than before. "Diving into brand innovation. Thought I'd connect." Cool, that's pretty good. Let's run this again. "Startup life." That's great; see, that's a pretty good icebreaker. "Building brands. Thought I'd connect." Cool, that seems pretty reasonable. All right, cool. So I think like four out of the five so far have been all right. "Fascinated by startups."
All right, I think like four out of five are pretty good, so I'm just going to leave it at that. That seems to me like a pretty good prompt, which means my next major task in this system is done, right? The last thing I'm going to do is send this to Phantom Buster for LinkedIn DMs. Now, this is kind of interesting and a little more nuanced, but basically what we have to do now is take all of this data and send it to the platform that we're going to use to actually trigger the outreach. That platform is called Phantom Buster. The way Phantom Buster works is that you pay for execution time, which is just a fancy way of saying you're paying for the amount of time it takes their servers to run. I'm going to go over here to this specific one that I've put together, called "LinkedIn autoconnect." When I build out the whole n8n system in a second, I'll rename it and make it nice and sexy. But basically, the way it works is I go to setup, and then what I have to do is define a Google Sheet, and it'll actually go down my Google Sheet row by row. So I'm going to do that, and then I'm just going to add a column here called "icebreaker." Then I'm going to cut this down to one person; it's just going to be the person I was doing the testing on, which I think was here, right? So I'm going to delete these two, and then I'm actually going to delete all the rest of these as well, because I just want to pull literally one record. Okay, so that's cool. I know I have a ton of redundant fields and whatever, but that's fine. And then under "icebreaker," I'm just going to feed in the data it just generated for me, paste that in here, and voila. Okay, great. So now we have an icebreaker.
Now, the reason I'm doing this is that I can grab this Google Sheet and feed it in here. And now I can actually run this: just feed that in, and I can go and do some LinkedIn outreach. Basically, the way Phantom Buster works is that you connect your LinkedIn account using a little Phantom Buster Chrome extension, which they'll ask you to download when you get up and running with the service. From there, I'll just click save. And then here is the message that I'll write. They allow you to write a bunch of things; I mean, you could say "Hey, first name," and it'll pull the first name variable, but I've actually just had AI write me the whole thing. So I'm just going to use that whole thing, which is the column called "icebreaker." All right, so now I'm going to click save. I'm not going to do anything for email discovery; I don't care about that, they just try to upsell you on stuff. Invitations to send per launch: maximum 10 per launch, then save. Then: launch manually, whenever I click on it. And now, again, I'm starting at the end; I just want to make sure I can actually do the thing that I'm asking for. So I'm going to click this start button, and once we've verified that, we can worry about running it completely automatically. So what are we seeing? We're seeing that I am indeed connecting to LinkedIn; I'm connected as Nick Saraev. It's going through the whole rigmarole here: signing in, opening the profile, sending the DM, and so on. It's going to take a while because it wants to simulate real profile activity; it doesn't want to make LinkedIn think that I'm a bot or whatever. I think it's like a minute or two per profile. Okay. And as we can see, it just updated to one invitation already, still pending.
That means the invitation has actually gone out, so we are now 100% good to go. We've verified that everything I wanted this system to do works. Now I'm actually going to go through and build it live in n8n. How fun and exciting is that? Never forget this step: when you're building out systems, make sure you can do the thing manually before you do it automatically. Otherwise, you're putting the cart before the horse. I've seen a ton of people do this and just waste all their time. So I have a canvas here called "LinkedIn connection request system." If you think about it, what do we have to do? We have to start the system by scraping Apollo leads using Apify. One thing I think is really fun, and that I'm going to do today just for shits and giggles, is this URL here: it actually includes all the information of the search, right? So what I think I'm going to do is have AI generate this URL for me. That would be pretty sweet, right? I'm seeing "owner," person titles equals CEO, person titles equals founder. I'm just going to ask, "What can you tell me about this URL?", feed the whole thing in here, and see if we can extract all the data. Yeah: located in the US; owner, CEO, founder, partner; creative agency. Cool. So I'm just going to have AI do this. I mean, how cool would it be if you could just say, "Hey, I want you to find me a list of all the creative agency owners in Texas and California," and have it done for you? That would be sweet. Hell yeah. So I'm going to go over here and start it with an n8n form input. Okay, that's how this is going to work: when a new n8n form input is submitted. Let's call this trigger "LinkedIn lead outreach trigger." Enter an audience for your LinkedIn lead outreach. Let's say "LinkedIn outreach campaign" here.
Then here I'm going to add a text area and say, "Describe your audience in plain English." I'll make it required. And then what I want is a placeholder: audience type... I don't know, let's do "company type, location, etc." That should be good. Okay, let's test this now. I'll say, "I want all creative agencies in the United States with company sizes between 1 and 1,000." That's what I want. Okay, I'll click submit. And now that I have this, I'm going to feed it into AI right off the bat. So: OpenAI. "Message an assistant"... the credential I'm going to use is "February 4th YouTube" from the list. Oh, sorry, I'm messaging an assistant; I don't actually want to do that, I just want text. My bad. So: "Message a model." I think what I'll do here is use GPT-4.5; I just want to see how smart it is. "You're a helpful, intelligent sales assistant." So I'm defining a system prompt first; that's where I tell the model what I'd like it to identify as. Then underneath I'm going to say: "Your task is to take as input a natural language description of a prospect audience and turn that into an Apollo search URL. Here's an example:" and then I'm going to go back to this URL. Oh jeez, I don't like that. "This URL describes a search for people that are located..." So I'm just telling it that this is an example of the formatting, basically. "You can change those fields and only those fields. Return your response in JSON using this format:" for Apollo, let's just call the key "search URL," then "search URL goes here." Nice. Then down at the very bottom, I'll set the output content to JSON. I'm not going to add another message. And let's just pin this now so we can do some testing with it. Go right over here and let's see what it pops out when I paste this in.
And let's see if this is a valid Apollo search URL; I mean, that's kind of the first thing you've got to figure out, right? I paste this into Apollo. Wow, okay, this even broke down the company size. That's very cool. United States... I'm not seeing any keywords, though. That's an issue, because the way I always do these searches is with a keyword like "creative agency." I think there's something broken here. Just to make sure, if I delete all these... yeah, we're still not getting it; just "hide filters, show filters." Okay, so something's broken about the way I did this a moment ago. Let me just do this one more time. Let's define my search a little more over here, because I want to give it more examples. We'll go 1 to 200, and then we'll go "creative agency." And I'm going to tell it which fields it can change: "These are the fields you can change: organization locations, keywords (associated keywords), and person titles. Do not add or change any other fields. Return your..." Okay, let's test this now and see if that's maybe a valid search. No... "use the above template." Well, actually, maybe this is good; let me try. All right, yeah, that did work. Location's right; the number of employees seems a little off, though. Oh yeah, sorry, I think I need to add number of employees here. One more: let's go back to where it says number of employees up at the top. So: organizationNumEmployeesRanges. There we go. That was the secret sauce: person titles and organizationNumEmployeesRanges. Let's go back to my search, copy this in here, and paste that in too. Okay, now I'm going to run a test, and I think I should be able to get most of the information I want if I just copy this and paste it in. Okay: this search correctly had the 1 to 1,000.
It correctly had the United States, correctly had the term "creative agency." Looks good to me. Cool. So we now have a system where you put in a plain-English description of what you want, and it comes out with an Apollo search URL. That's sick. Okay, so what's next? We have to scrape the leads using Apify. Two components to this, right? First, we need to call an Apify actor — basically replicate what I just did over on Apify, but from inside n8n. So how are we going to do that? Well, Apify has an API, so I should be able to trigger a run with an API call. If I want to trigger a run, I'll view the API reference first. The endpoint I'm looking for is "Run actor synchronously with input and return output" — that looks like basically what I want. The cool thing about n8n is it lets me create a request straight from a cURL command. So that's what I'm going to do: the docs show a cURL example, and I'm going to copy the whole thing. But before I do that, I need two pieces of data: an API key, and an actor ID. See this part of the URL? That's where the actor ID goes. You'll always find the actor ID at the top of the actor's page on Apify, in the URL between "actors" and "input" — there's probably another way to get it, but that's the hack I use. So under parameters I'll feed in the actor ID. Then for the bearer token I need an API key. Where do I get my API key on Apify? I've already forgotten — probably Settings, then API & Integrations. Yeah. Okay.
So I'll create a new token, call it "YouTube," and create it. Now I have a new token, so I'll copy it, go back to the API specification at docs.apify.com, and paste in the bearer token. Okay, that looks pretty good. It also looks like I need a body — it says "example from schema or example," which I don't really know what that means, just "body, required, object." So what's this supposed to mean? Well, it's telling me it's required, so I'm just going to give it a try and see what happens. Let me copy over all the cURL, go back to n8n, add an HTTP Request node, click Import cURL, and paste it in. It should map everything now — and it maps it to the specific actor, which is cool. I don't actually have my token in here, so apologies, I kind of wasted our time; I'll copy it now and feed it in directly. This is where the API key goes: an Authorization header in the bearer-token format that most services use — "Bearer" starting with a capital B, then a space, then your actual token. Okay. We need to do a POST request — good. The example JSON body parameters are just foo and bar, so I don't really know what we're going to feed in for the body, to be honest. I don't know about this "follow redirects" option either, so I'm just going to take that out. Maybe we need query parameters; maybe we don't. Anyway, in situations like this where I don't know, I just run it and see what happens — sometimes the error tells me what the issue is. And it does: "input URL is required." Okay, so in the body we probably need an input object. I'm just going to write this in JSON: input, and then url inside it.
Then I'll go grab my search URL from the earlier node — nope, that got stuck at the very bottom; let me pull it back up — and paste it in here. Okay, so the body should now have the URL. It's still saying the input field is required, so there's probably something I'm persistently messing up with the format. Let me go back to my actor run on Apify, find where it shows the run's input as JSON, and copy all of that. That might be all I need — I don't know for sure, but we're going to give it a try. Yeah — so the JSON body is just the input object. It's executing now, taking its time. I don't actually know if it's running correctly, so let me exit out of this and go to the Runs tab on Apify. Okay, cool — it is running. So we've verified that I triggered it inside n8n and it's now running in Apify. Fantastic. In the actor's input, you see this big URL — that's the search. Then total records: you can put however many you want; I'm just going to do 1,000. If you leave it blank, I think you get all of them. Personal emails and work emails are options too; I think enabling them just makes it take a little longer. While this works, let me do some renaming — I'll call this node "Run Apify actor and get results." It's taking longer than usual... and I think I just broke it by renaming the node mid-run. Okay, maybe don't rename nodes live; that might be bad. Anyway, I'll rename the other node "Generate search URL" and save the whole thing. I'm obviously going to have to rerun this now since I broke it — kind of unfortunate, but it is what it is. Let's just test the step again.
And this time I'm going to pin the output. Let's just make sure this works: feed the form data in again — yeah, persistently working, which is nice — pin it, feed it into the HTTP node, and test again. My memory limit may be an issue; we'll see. It's executing, so we're running a second query — I'm going to abort the first one rather than give it another 30 seconds. Now we just wait until this finishes and the results should populate on the right-hand side. Okay, it did output, but it returned something empty. Looks like the reason is that I used the wrong endpoint: I used "Run actor synchronously and return output" when I should have used "Run actor synchronously with input and get dataset items." That's the one you want. So woe is me — I'll have to rerun that puppy. I have the actor ID hardcoded, so I'll just swap the endpoint in the URL. And for total records I'll do 500 this time, because I just waited about five minutes for a thousand and I don't want to wait that long again. Testing again — I'm seeing that it is indeed executing, and over in the Runs tab a new run is going. Okay, great — and that just wrapped up: we got 500 items, because I went back and changed the total to 500. We now have everything we need in order to proceed and complete this whole flow. And as you can see, it's not super complicated: one form submission, one node to generate a search URL, and one node to run an Apify actor and get the results. And because I never, ever want to run that actor again, I'm going to pin this output.
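Put together, the corrected request looks roughly like this. The endpoint path is Apify's documented "run-sync-get-dataset-items" route; the actor ID and the input field names (`url`, `totalRecords`) are illustrative and depend on the specific Apollo-scraper actor you pick:

```python
ACTOR_ID = "someuser~apollo-io-scraper"  # hypothetical; copy yours from the actor page URL
APIFY_TOKEN = "apify_api_XXXX"           # Settings > API & Integrations on Apify

# "Run actor synchronously with input and get dataset items":
endpoint = f"https://api.apify.com/v2/acts/{ACTOR_ID}/run-sync-get-dataset-items"

# Bearer format: capital B, a space, then the token itself.
headers = {
    "Authorization": f"Bearer {APIFY_TOKEN}",
    "Content-Type": "application/json",
}

# Actor input. Field names here are assumptions — check the actor's
# own input schema on its Apify page.
body = {
    "url": "https://app.apollo.io/#/people?...",  # the generated search URL goes here
    "totalRecords": 500,
}
```

In n8n this whole shape is what Import cURL produces for you; sending it would be a single `requests.post(endpoint, headers=headers, json=body)` if you were scripting it outside n8n.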
By pinning this, I won't have to run that same actor over and over, with those crazy wait times. That's an important point more generally: I find a lot of people test using repeated manual inputs. If they're testing a form or something, they'll actually fill out the form every single time. It doesn't seem like that adds much, but consider that I might test a flow 20 times over the course of development. If each test costs an extra minute to fill out the form, I've basically added 20 minutes to my whole workflow. And as you've seen, I build these pretty simple but very high-ROI systems over the course of an hour or so in a video; doing that would materially increase my production times — 20, 30, maybe even 40%. So I just like to pin data wherever possible. It helps avoid a lot of the BS. Okay, now that I have this Apify actor output with a bunch of items, I'll go back to my road map. What do we really need to do next? Enrich these leads with personalization. That's going to be pretty easy — you know why? Because we've already done most of the work. I'll add an OpenAI "message a model" node. And if you remember, I already wrote most of this prompt, so I'm just going to copy and paste it over. I'll select my own credential — if you don't have a credential, make sure to set it up; you just need to copy over your OpenAI API key. I'll select GPT-4.5, paste this in as my system prompt, then move down and copy the rest in as my user prompt. Okay, I'll make a couple of changes — because I don't actually need some of these lines.
What I can do is take the example and feed it in as an additional user prompt instead, and then add an assistant prompt that gives the example answer — because I want to show it a model response. Then I go back, add another user prompt, and that one holds the actual live data. So what information did I put in? First name — so I'll map first name here. Oh, that's ugly; one sec. I'll switch to expression mode, paste it in, and call this the LinkedIn field. You could actually pass all of the lead's information in as JSON if you want, but it doesn't really make any difference, and by keeping it compact I save a few tokens — which may not matter in isolation, but it definitely makes a big difference if you zoom out and look at a month, a year, of running the system. Next I need the city — is the field "city"? Yes, and city is Grand Rapids, so I'll grab that and paste it in right there. Then job title: title, "Creative Director." That's cool. Then "at" plus the company name — what's the current company? Headwind Agency; it looks like employment history item zero is always the current employer. Then previous experience: "Co-founder, COO at Mission 3 Media." Cool. Let's do one more: "Producer at self-employed," at least for this guy. Feed that in. Okay, great — so what does this look like now? LinkedIn field: Grand Rapids... okay, nice. Self-employed. Beautiful. I'll set the output content to JSON and test the step. From here, the output should generate some cool personalization that we can map directly, because it'll be a JSON variable. I should note I'm using GPT-4.5 preview for this. You don't need to be using 4.5 preview — I just like to use the best models for stuff.
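The compact "LinkedIn field" being assembled above can be sketched as a small formatting function. The lead field names (`city`, `title`, `employment_history`) are assumptions based on what the Apify actor returned in the video — check your own actor's output:

```python
# Illustrative lead record, shaped like the Apify actor output shown
# in the video (field names are assumptions).
lead = {
    "first_name": "Frank",
    "city": "Grand Rapids",
    "title": "Creative Director",
    "employment_history": [
        {"organization_name": "Headwind Agency"},                            # index 0 = current
        {"organization_name": "Mission 3 Media", "title": "Co-founder, COO"},  # previous
    ],
}

def linkedin_field(lead: dict) -> str:
    """Compact, token-cheap lead summary for the live user prompt."""
    current = lead["employment_history"][0]["organization_name"]
    previous = lead["employment_history"][1]
    return (
        f"{lead['first_name']} | {lead['city']} | "
        f"{lead['title']} at {current} | "
        f"previously {previous['title']} at {previous['organization_name']}"
    )

print(linkedin_field(lead))
```

A single delimited line like this carries the same information as a full JSON dump of the lead, at a fraction of the token count — which is exactly the savings argument made above.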
Why? I don't know — it's a flex. I like flexing. This one's taking a little longer than usual; it might be token usage or something on my account, I'm not entirely sure. When I run into issues like this, I see if I can stop the run — doesn't look like I can... okay, cool, I did end up stopping it. Let me just try rerunning it now with the test data. Okay — once that's done, we've enriched the leads with personalizations. Then we just have to send to PhantomBuster. The way we'll do that is by connecting to their API and sending a trigger request pointing at the Google Sheet I'm dumping to. So there's actually an intermediary step here that I haven't built yet: dumping the leads to a Google Sheet. Let me update the road map: step three is probably "dump leads to Google Sheet," and step four is "trigger PhantomBuster for LinkedIn DMs." That's probably what I'm going to do. This is still taking its sweet time, and I'm not really sure what's going on — but as I'm sure you know, a lot of the time when you make an automation, there will be platform bugs or some minor issue with the nodes. I'm going to change the model and see if that snaps it out of it — let's go GPT-4o and test. Actually, it looks like I have a bunch of extra new lines in the prompt; I don't know why. I should probably remove those. Let me check this field too... no, that's fine. How about this one — does it have any extra new lines? I can't really drag it open, which is unfortunate, so I guess I have to give it a click.
Well, that looks pretty good — no problems there. Oh man, I'm so stupid: it's because I'm processing 500 items. Duh. I'm probably racking up my OpenAI bill as we speak — that's crazy. So what I need to do is add a Limit node and set max items to one, so it only generates one icebreaker. I was erroneously about to generate 500 icebreakers. Don't be silly or stupid like me. Anyway, now that that's done and we've verified this does in fact work, I can go back to the road map and check off "enrich leads with personalizations." Next up: dump this to a Google Sheet. So what am I going to do? I'll add a Google Sheets node and choose "append row in sheet." I'll select my credential — I already have one called YouTube, but in your case you might have to connect it and go through the OAuth flow and all that. The operation is going to be append row — append or update row, sorry — and I'm adding these to a new sheet. So let's go to sheets.google.com, into the account I know has a connection, and create a new spreadsheet called "leads." Back in n8n, I go to document, look up "leads," and find it; the sheet I'll choose is just Sheet1. We'll map each column manually. Now, could we just map automatically and dump in literally everything? No — we'd only be dumping in the fields coming out of the previous node. I'd like all of this lead information in the Google Sheet without manually adding every field, because otherwise I'm going to have to manually add everything, and that'd be brutal. Oh, I get it — I need to set column names in the sheet first. Okay.
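The Limit node fix above is worth spelling out, since it's the difference between one API call and five hundred during testing. A minimal sketch (item shapes are illustrative):

```python
# 500 scraped leads coming out of the Apify node (illustrative shape).
leads = [{"json": {"id": i}} for i in range(500)]

MAX_ITEMS = 1  # what the Limit node enforces while testing

limited = leads[:MAX_ITEMS]

# Downstream, the OpenAI node runs once per item — so during testing
# this is 1 icebreaker generated instead of 500 (and 1/500th the cost).
print(len(limited))
```

Raise (or remove) the limit only once the whole flow is verified end to end.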
So I'm going to set those column names. I'll have the model create a CSV of the header names as a string that I can paste into Google Sheets. I'll paste that in, and then I also want a LinkedIn URL column and an icebreaker column. And you can include as much of this information as you want — all of this additional data — but I don't really want to; I just want to get this up and running, so this is what I'll use as an example. The sheet now has all the headers, but it's telling me one of them appears twice. So I'll go to Data, then "Split text to columns" — and yes, I do have two instances of LinkedIn URL. That's okay; I'll fix that. Now I have my leads sheet, which is nice. Back in n8n, I'll map each column manually, and it's going to match on ID. Actually, wait — can I just map automatically? I don't know if this is right, so let's test it and see what happens. It's supposed to be adding a single row in here, and it's not — and the reason is that it's grabbing data from the OpenAI node instead of the previous Limit node, which is what I want. So I'm actually going to map each column manually; it finds all the columns for me. Under this, ID maps to id. First name is obviously going to be Frank, last name is whatever it is, then full name — let me see. Then LinkedIn URL — I need to find that field somewhere; it should be right over here. Then what else did we have? Title. Then email status — I don't really know why I put that in there, but whatever; I'm basically mapping a bunch of semi-relevant fields right now just to show you what the process looks like. Then finally, icebreaker.
The icebreaker is the message from the OpenAI node, right? Okay, so now let's test this. Giving it a go — you can see the icons changed here, and voilà: we've dumped in the records. That's nice. So now that we have this Google Sheet, what does it mean for us? We can actually just take this sheet straight to PhantomBuster. One aside: I'm going to change the sharing from editor to viewer, because somebody from my YouTube audience always finds my sheets and comes in and draws on them. It's hilarious — whoever you are, or whatever group of people you are, keep fighting the good fight — but I'm going to start locking these down a little more. Okay, so that's that. Now that I have this Google Sheet, I go into PhantomBuster. Under LinkedIn solutions I'm looking for a connection-request sender — an auto connection sender, essentially. So where is this... might be "connect request," "auto invitation," maybe... "Auto Connect" — there we go, this one here. I'm going to use this. I'll change the input to a spreadsheet URL and paste my spreadsheet URL in. The value of this approach is that the sheet is publicly accessible. Then, the name of the column containing profile URLs is going to be our LinkedIn URL column. I'm going to keep all of the columns in my output file, because I'm going to want them. Then I select my LinkedIn account — in my case, this one here. And I should note: if you want to send the customized connection-request messages — I think to anybody that's not already a second-degree connection or so — you now need a LinkedIn Sales Navigator subscription. I have everything I need, so for the message I can just select the icebreaker column. Right.
So that's what I'll be sending people. I go back and save, click "none" for this behavior, set it to 10 per launch, leave the schedule on manual, and rename the Phantom "auto icebreaker connect." And voilà. Now I'm going to grab the ID of the Phantom, which is up here in the URL. Next I have to make an API call to trigger this, so we need to go to PhantomBuster and check their API docs. How do we do this? We go up here — there we go — and it looks like they have a v2 API; that's what I'll use. Under Agent, "agent launch" is probably the one — yeah, this is it. Is there a cURL example? Yes, right over here, so I'll copy it. If I just try this as-is, will it run? "Cannot validate data: should have required property id." Oh, right — the ID of the agent to launch. So I go back to my LinkedIn Phantom, grab the Phantom ID from up here, and paste it in; the request now includes that data. If I click "try now," I should get a 200 — and if I go back to PhantomBuster, it's probably running now, right? Yeah, it's running. Sick. So we've just verified we can trigger it via the API. Now, is it possible to abort this mid-run? Because I don't actually want it connecting on LinkedIn and doing all that fun stuff right now. It is. Cool. So now I'll recreate this request in n8n: copy the cURL request from the API docs, go back to n8n, add an HTTP Request node, click Import cURL, and paste it in. It automatically maps all of the fields. Then, scrolling down, I need my PhantomBuster key — on PhantomBuster, you need to create an API key.
You just go over here to API keys, click "create API key," and voilà, you have one. (I'm going to delete this one afterwards.) You can only copy it once, basically, so make sure to copy it when you can, then feed it in as the X-Phantombuster-Key header up here. The ID of the specific agent is right over here — it's all mapped correctly. Now I click "test step," and it's executing. Note that PhantomBuster won't return a message saying "you're good to go"; it just returns a 200 with a container ID. The reason is that they let you check on that container ID later. So what you could realistically do is take that container ID, have a webhook sent to another workflow, check the container there, and use it to update each sheet record with a "sent?" column, going down one by one. That's what I'm going to do. I'll rename this node "trigger PhantomBuster agent," the earlier one "add to Google Sheet," and this one "personalize outreach." Now that we have everything, all we need is a webhook trigger that kicks off another portion of this flow. There are a variety of things we could do; what I'll do is add a separate workflow. This one is workflow number one; number two is going to watch for the completed run. I'll start it with a Webhook node, paste the title in, and call it "2" — and instead of "trigger PhantomBuster agent," it's going to be "update Google Sheet with PhantomBuster connect requests." These names are getting pretty long — you can call this stuff whatever you want. But now I have to figure out how to get PhantomBuster to send a webhook.
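The launch request being rebuilt in n8n boils down to a small POST. This is a sketch against PhantomBuster's v2 API as shown in the video; the key/ID values are placeholders, and you should confirm the exact header name against their current docs:

```python
PHANTOM_ID = "1234567890"  # placeholder; the Phantom's ID from its dashboard URL
PB_API_KEY = "pb_XXXX"     # placeholder; Settings > API keys (copyable only once)

# POST /api/v2/agents/launch with the key in a custom header and the
# agent ID in the JSON body. A 200 response carries a containerId that
# you can poll later to check on the run.
launch_request = {
    "method": "POST",
    "url": "https://api.phantombuster.com/api/v2/agents/launch",
    "headers": {
        "X-Phantombuster-Key": PB_API_KEY,
        "Content-Type": "application/json",
    },
    "json": {"id": PHANTOM_ID},
}
```

This is exactly what Import cURL reconstructs inside the HTTP Request node — the only parts you supply by hand are the key and the Phantom ID.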
I know you can send a webhook somehow, so I go back to the Phantom's dashboard — there's got to be a way, right? Advanced settings, probably... yes, there it is: a custom webhook URL field. So now I go into n8n, where I have a Webhook node set up. It looks like PhantomBuster will POST a payload, which means that in order to receive it in n8n you need to change the webhook's HTTP method to POST. If it's GET, you're not going to get anything — it'll just hang forever. So we're posting the payload, and it's going to have everything here, including a container ID — which, now that I'm thinking about it, is probably what we'll use to fetch the data. So that's what it's going to look like: we have the container ID and we'll do something with it. All right. Let's listen for a test event. Is the previous run done yet? I think I aborted it, right? Okay, let's just trigger it one more time. Trigger me, baby, one more time. We should get a container ID. Cool. And while the webhook is listening, I believe I can just continue building. If you think about it, once I have the webhook, I need to get the data from the container ID. So I go back to the API reference, then Containers, then "fetch output" — fetch the output of a container. That's what I want. So I'll copy this cURL over, go back to n8n, add an HTTP Request node, import the cURL, and it maps everything, including the ID. And note that we're hard-coding the container ID here, right? I think — let me just go back to wherever that container ID was. Yeah, I'm pretty sure we're hard-coding it. And the response is going to include the output.
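The handoff described above — webhook payload in, container-output request out — can be sketched like this. The payload keys are assumptions based on what PhantomBuster POSTed in the video, and the endpoint is its v2 containers route; verify both against their API reference:

```python
# Shape of the payload PhantomBuster POSTs to the n8n webhook when a
# run finishes (keys assumed from the run shown in the video).
payload = {"containerId": "987654321", "exitCode": 0}

# Build the follow-up request that pulls that specific run's output.
fetch_url = (
    "https://api.phantombuster.com/api/v2/containers/fetch-output"
    f"?id={payload['containerId']}"
)
print(fetch_url)
```

Taking the container ID from the webhook payload like this is what would replace the hard-coded ID once the second workflow is wired up — each completed run would then fetch its own output.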
Now, the output will include a list of all the records we've actually sent requests to, which is nice — from there, I should essentially be able to do everything I need. Okay, the trigger event's taking forever, so let me see if I can run this as-is. Oh, right — it wants to execute the webhook node first, so let me add a manual trigger, connect it, exit out, and click "test workflow" — nope, just this node, thank you. It's saying the resource I'm requesting could not be found. Why? There must not be any data yet — there's probably no data. So let me see if we can find the container ID. Is there a container ID? I'm not seeing one. Maybe "identity ID" is the container ID? Not really sure; I don't know if that's what that means. Well, the launch did work — it actually went and sent the request, which is pretty badass. So maybe it's this identity ID — but unfortunately it doesn't look like I can copy it. It doesn't let me, so that sucks. So how am I going to get the container ID? I guess I could just grab a different container — this one, maybe. Let's try it. Oh, what the heck is this? I don't know what this means — that's the output? Huh. Weird. I don't actually know if that's what we want. Do we even want the container fetch-output, then? No — what we want is agents fetch-output: "get the output of the most recent container of an agent." Maybe we just need this. "It's designed so it's easy to get incremental data from an agent. Output of the most recent container." Okay — let's not even worry about all this container stuff; let's just get the ID of the agent. Now let's do an HTTP request, but this time to fetch output.
Okay, so I copy the agent ID over, paste it into the test field, and try it. And we get a really weird output — I'm not really sure what it means: "process finished; spreadsheet is empty or everyone has already been added." Okay — no, we did actually get the output, but maybe there's something I'm missing, because it looks like I'm fetching something different from what I want. So let me see if I can feed in the right container. Basically, what we want is the big list of results, right? So how am I going to do that? Hmm — "if omitted or set to..." — okay, you know what, maybe it really is agents fetch-output. Let me feed this in and try it. Oh, whoops — I was feeding a container ID into a parameter that wants an agent ID. That's kind of killing me a little. All right, that's fine; let's circle back. Was I using a POST request or a GET? I don't remember. Let me paste this in and test it. "Bad request — check your parameters." Could be that the ID isn't the ID of an agent. So I go back to where I define my agent ID and paste that in. And no — we just get this weird output file that doesn't actually contain anything. That makes me think that to test this properly, I should use more than one lead — not just one lead, but two. So why don't we test with another lead? Let me delete Frank — actually, why don't I just test on my real data? That'd be interesting. Let's go back, and instead of a limit of one, I'll do a limit of three. That lets three items through, which generates two more rows, and those rows should dump into the sheet. Just testing all of this iteratively right now. So we should have two more.
Okay, now we do — with the icebreakers, right? So the rest of the system works fine, and now we just need to send these to the PhantomBuster agent. Actually, this is a good opportunity to test my webhook: I'll grab it, listen for a test event, and then test sending all of these results over. Hmm, it looks like the Phantom is already running, which is a problem... interesting. Oh, I get it — I know why it's running. It's running because we just sent three results over, one trigger each. We need to aggregate the results between these two nodes. So I'll use an Aggregate node. Okay — do I need a field name at all? I think I can just aggregate so that these three items become one item. Let's see — no, I guess we do need a field to aggregate. Aggregate on ID? No, I really just want to aggregate all of them. Let me see if there's another option that's not per-field... could I combine these all into one item? Yes: "aggregate all item data into a single list," putting it inside an object called data. From here I'll have one item, and I can trigger the Phantom once instead of, you know, however many times I've done so. All right — I'm glad I spotted that; it otherwise would have been a catastrophic error. Let's see how our agent's doing. Looks like we're processing two new records, which is nice — so it clearly understands and is capable of mediating, modulating, whatever you want to call it, the input. Let me just make sure this webhook is set... oh, the webhook URL is not set. Oops. Save, save, save and close. Yeah, that run probably went out without the webhook URL — that makes sense; I'm kind of silly. Okay, looks like we did indeed finish that run.
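The Aggregate step above is worth seeing in miniature, because it's what stops the Phantom from being launched once per lead. A sketch using n8n's item shape (each item is an object with a `json` key):

```python
# Three items coming out of the sheet/limit step (illustrative shape,
# matching n8n's one-object-per-item convention).
items = [
    {"json": {"id": 1, "linkedin_url": "..."}},
    {"json": {"id": 2, "linkedin_url": "..."}},
    {"json": {"id": 3, "linkedin_url": "..."}},
]

# "Aggregate all item data into a single list": collapse n items into
# one item whose `data` field holds all of them, so the downstream
# HTTP node (and therefore the Phantom launch) fires exactly once.
aggregated = [{"json": {"data": [item["json"] for item in items]}}]

print(len(aggregated))
```

Any time a downstream node has side effects (an API launch, an email send), this collapse-to-one-item pattern is the safeguard against triggering it once per row.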
Two invitations were sent. I'm not getting a webhook here, unfortunately, which is kind of annoying, so I'll leave that for the demo at the end of the video. For now I'll just pretend I received the webhook and manually trigger the rest of this flow — because we did trigger that run right at the very end there. If I go to my execution history, I should see a record of the flow and where the error was... hmm, that looks kind of weird; I don't remember doing that. It's probably this one here. Something odd is happening — this might be a bug of some kind. Anyway, I should be able to grab the agent ID, feed it directly in the same as before, test it, and get the data. But no — I'm not getting the data, and I'm really wondering why. The launch looks good — I mean, I sent it out — but why is the output coming back in this format? I don't want it in whatever the hell this is; I want JSON, and I'm just getting it all as one big text string. So can agents fetch-output give me structured results? Hmm. Yeah, that's kind of annoying. You know what — we might not actually be able to do it this way, which kind of sucks. I think instead what we'll have to do is mark each record as sent ourselves when it goes out — just mark it "x," basically. It's not ideal and it's not technically correct, but the only alternative would be madness: we'd have to parse all of those people out of that text blob with AI, which would not be reliable, and it would also cost way too much. So I guess we're going to scrap that.
What we're going to do instead is keep this one scenario that triggers the PhantomBuster agent. And after the PhantomBuster agent, well, we're not even going to have a sent column, really; anybody who's in the sheet will obviously have just made it through the PhantomBuster agent. Then we'll run the PhantomBuster agent every time we add new people to this list, and I think we can also just run it once a day, or once every couple of days, because it actually has deduplication functionality built in, right? So there's no real reason to have a done column anyway; that was a silly thought. Ideally you should track this, but there's really no need to split it in two, because if it's on the sheet, PhantomBuster will automatically dedupe it. So basically anything that goes into the sheet will eventually be taken care of. And if you want to see how far along you are, you just go to results, and you have a list of everybody that you've sent to. So we do have that taken care of automatically. As we can see, we've already sent some of these DMs, so I'm probably going to get some actual connection requests from these people, which is nice. Looking forward to meeting you, Frank, Greg, and Atina. So, let me just think: is there anything else we need to do to make this work? I don't really think so. I think that's about it. Just do the same with creative agencies. So now we're just going to trigger PhantomBuster and have that go. Yeah, looks pretty good. Awesome. You now have a LinkedIn
Deep Personalization Icebreaker Generator
outreach system that automates personalized prospecting and connection requests. You're already generating a steady stream of LinkedIn connections and conversations, plus you're selling the same system to others for over $2,000. We're now going to build a multi-line icebreaker generator that uses deep website scraping to create extraordinarily personalized cold email openers. This is the exact system that routinely generates 5 to 10% reply rates and helps scale any agency to six figures through cold email. Here's what makes the system so valuable: instead of sending generic, low-value cold emails with templated variables that get ignored, this workflow scrapes prospects' websites, analyzes their content with AI, and uses a little bit of prompting magic to generate multi-line icebreakers so personalized that a lot of recipients actually think you've spent hours researching them individually. The result is a cold email campaign that actually gets responses and actually books meetings. By the way, if you're serious about turning your n8n skills into actual income, please do check out Maker School. Many of our members use systems exactly like this one to land their first $3,000 AI automation clients within two to three weeks of joining. All right, let's dive into building this cold email system. Again, we're going to turn website data into high-converting personalized outreach. To make a long story short, we're going to start with an Apollo.io search. Apollo is just a lead aggregator and lead database. Unfortunately, it's very expensive to purchase leads directly through Apollo, so what most people do nowadays is scrape it using a third-party service. For that, I'm going to be using Apify. Then I'm going to pump all of that through a pretty complicated n8n flow just to show you how, at the end of it all, we can generate high-quality leads.
So, first things first, let me give you a demo of the system. I've built a Google Sheet called multi-line icebreaker generator. There's a URL column over here on the left and a sheet called search URLs down at the bottom. Then under leads, we get a bunch of information: first name, last name, email, website URL, headline, location, phone number, and multi-line icebreaker. This is sort of the juice. What I'm going to do is feed in the URL of the Apollo search that I showed you a moment ago, then start my flow. What happens when I click test workflow is this: it grabs the URL I just added, that big long Apollo search, and scrapes that Apollo list on Apify. It spins up a cloud instance of the actor (that's what scrapers are called on Apify) that goes out and gets me a ton of lead data. Then we do a bunch of data processing to filter for only records with websites and emails. Once we have all those websites, we scrape the hell out of them: we extract the HTML content and then edit the fields. There are a couple more steps here, but ultimately what ends up happening is we summarize the website pages and feed that into AI. The end result, if I go over to my leads page here, is that we fill in this multi-line icebreaker column. The multi-line icebreaker is an extraordinarily high-quality personalized pitch: you add it to the beginning of a cold email, and it's so customized that the person on the other end is going to assume you've actually read through their whole website and done a bunch of personal research yourself. This is the sort of thing that routinely gets me 5 to 10% reply rates on my cold email campaigns, and it's one of the ways I scaled my own AI and automation agency to $72,000 per month.
So you can see: "Hey Cali, love how L2 makes it easy to filter by acreage. Also a fan of your property update email option. Wanted to run something by you." If I were Cali and I received this as the first two lines of an email, I'd obviously assume the sender had done their research before pitching me. So, in a nutshell, what we do from here is take this data and feed it into a campaign on Instantly. Instantly is a cold email service, the one I personally use for most of my emailing. If I go to campaigns here, you can see an example I set up for website agencies: we pump the data directly into a sequence, and our end email looks something like this. In this case, for a website design agency: "Hey Katie, love KT Also Graphics. I wanted to run something by you. I'm new to this so please bear with me," and then the customized information or icebreaker gets inserted over here. This specific pitch has already gotten me a 4% reply rate and something like over 30 qualified leads who wanted to book calls or meetings with me. I show you all this back end just to make it abundantly clear how much work actually goes into building out a cold email campaign. If you really want it to crush, it's a little more complicated than just scraping a bunch of leads off Apollo and then sending. A lot of the time you need some way to paraphrase, or to make the content you're sending to people seem a lot more customized. So that's, in a nutshell, how the system works. What I'm going to do now is build it for you. I'm going to open up a new n8n panel over here and call this "deep multi-line icebreaker." We've got our canvas right over here. First thing I'm going to do is add a manual trigger. This is just the simplest and easiest thing for me to do, and I do it for all of the flows that I'm testing. I'm then going to head over to Google Sheets.
What I want is a "get rows in sheet." We're going to hook this up to that multi-line icebreaker generator Google Sheet, and to do that, you first need to add credentials. I've already added mine, but if you haven't, all you need to do is click on that little pencil icon and then click sign in with Google. From there, I'm going to find the specific document I'm looking for, which is multi-line icebreaker generator. Then I also need to select not just the document but the sheet as well. You'll notice there are two: search URLs and leads. Those correspond to this tab and this tab. What I want to do is grab this search URL, and not just this one but any search URL that I list, so that I can also scale this up. So I'm going to point it at search URLs. Once I have all of that in, if I click test step, it'll grab me the URL as well as the row number, which is handy. In n8n, I always pin my data. This lets me test my flows a lot faster because I don't have to rerun things, and it also spares me some API usage; the Google Sheets API has a notoriously low rate limit, which can be annoying. Okay, so from there we now have the search. What I need to do next is call my scraping service, which in this video is Apify. I've already preconfigured this Apify scraper module here, and I'm doing it that way because if you want to converse with the Apify API, you do have to know a little bit about how it works. To make a long story short, I'm sending a POST request to a URL. If I make this bigger, you'll see it looks something like https://api.apify.com/v2/acts/<actor-id>/run-sync-get-dataset-items. Now, you may be wondering: Nick, where the hell did you get this? Well, if I just look up "Apify API," this is the back end for the service that I'm using.
If I scroll down to this right over here, it's this exact same URL, and this page gives you all the specs on how to use this specific API endpoint. To make a long story short, what I did was copy this curl request, make a couple of changes, and end up with what you see here: I imported my curl command. API requests are sort of beyond the scope of this video, but basically you have to set an Accept: application/json header and an Authorization: Bearer header with your Apify API token. If you like this sort of stuff, I've recorded a lot of detailed documentation walkthroughs on how to practically read APIs as a beginner, so I'll link that video above. Okay. What this node is going to do is run our search. If I click test step over here, it pushes this to Apify, and the way you check on it in Apify is to go to their console. If I go to runs, you'll see there's a live run happening right now because I've sent this request. Now, I should be getting something like 100, or I guess 96, leads. You can see I've demoed this a couple of times. Once we're done, we get a bunch of first names, last names, email addresses, LinkedIn URLs, and so on and so forth. Pretty cool, huh? In addition, we also get a ton of other fields here. I'm not going to go through all of them because there are quite a lot, but they include things like the organization name, the URL of their website, and so on. What we want to do is take the URL of the website. First, we need to verify that each lead has a URL, and also an email. Once we've done that, we want to take the website URL, pump it through some sort of scraper, and then throw the result into AI to have it tell us something about the site. Okay.
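For reference, the preconfigured HTTP request roughly amounts to the following. The endpoint name and header set reflect my reading of the Apify API docs, and the actor ID and token are placeholders, so treat this as a sketch rather than a copy-paste config:

```javascript
// Sketch of the request the Apify HTTP node sends. Actor ID and token are
// hypothetical; check the Apify API reference for your actor's exact endpoint.
function buildApifyRequest(actorId, apiToken, actorInput) {
  return {
    method: 'POST',
    url: `https://api.apify.com/v2/acts/${actorId}/run-sync-get-dataset-items`,
    headers: {
      Accept: 'application/json',
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiToken}`,
    },
    // The actor input (e.g. the Apollo search URL) goes in the JSON body.
    body: JSON.stringify(actorInput),
  };
}
```

In n8n you'd spread these values across the HTTP Request node's method, URL, header, and body fields rather than calling fetch yourself.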
So the next thing I'll do is go over here and pick the Filter node. The Filter node lets us check specifically whether an email address is present. If I hit command-F and search "email," you'll see that some of these fields are null. So I want to check whether or not the email exists: if it exists, the lead continues; if not, it doesn't. In addition, I also want to check whether the website URL exists, so I'll search for website URL, stick that here, and say this has to exist too. In this particular case it doesn't, because this example just doesn't include a website URL. Now I'm going to click test step, carry the data forward, and see what sort of output we get. We feed in 96 items and keep 28, which means that of the 96, we're filtering out 68. And that's okay for us; that's just how Apollo works. You feed in about 100 or so and you get about 30. Just math. From here, we're going to feed this into an HTTP Request node. The reason is that I want to feed the website we got a moment ago directly into an HTTP request: I want to make a request to that website and fetch it. In order for this to work, though, we have to go to redirects, click follow redirects, and set max redirects to 21. If you don't do this, I find that in practice a lot of the websites will error out, because most of them redirect. If I click test step, we're now actually scraping this web page right here, assetrealtyindia.com. In addition, we need to add one other option: go to settings over here, and on error, I want to continue using the error output.
Basically, I just want this to continue executing even if there's an error, and on the error branch we're going to do nothing. The reason is that not all website scrapes work; some websites have protections built in that stop us from doing this easily. So realistically, if we feed in 28 or 30, we'll probably get 15 to 20 websites that can actually be scraped. And it looks like we did: 21 items succeeded and 7 didn't. Now, when you add those error settings, you get a success branch and an error branch. I'm going to do nothing with the error branch, but you could certainly do something with it if you wanted to recoup those leads. In my case, I usually build these things quick and scrappy, so I don't really worry about recouping. For me, if I can do this for 20 out of every 100 leads fed in, then in Apollo I'm just going to make sure my audience size is really big, like 10,000, so I end up with about 2,000 of those. That's okay for me. After that, what we need to do is extract links from this data. If I click on show data over here, you'll see we're getting a ton of HTML. Do you see this? In this case, it's a WordPress website, but what we want is just to extract all the links on it. That means we want the a tags with their href attributes. You can see this is actually a link on the page; in this case it's kind of an empty link, but if I keep searching, we should find real ones. Okay, see this hath.com/authoradmin one? That's actually going to take me to a blog page on the website, which might be pretty important for customization. This is a contact page; it might be important for us too. Basically, I'm just going to extract all of these links.
And there's a really simple and easy way to do this in n8n. We go to the success branch and add the HTML extract node. What we need to do is stick the links we find into a key; I'm just going to use the term links. The CSS selector is going to be "a," the value we return is going to be an attribute, and in my case that attribute is href. I'm going to return an array, and then under options, I'll trim values and clean up the text. What this does, to make a long story short, is give us a nicely formatted list of links from the HTML we feed in. The links will look something like this. So what we've done now is push a homepage into the HTTP Request node, and then extract all of the links on that page. Now, why would we want to do this? Logically, the reason is that if you really wanted to understand a website deeply, you wouldn't just scrape one page like most other people do. You'd scrape all the pages on the website, feed each into some sort of intelligence (in our case AI) to have it tell us something about each page, combine all of that into one big summary, and use that to generate a custom asset. Next up, I'm going to rearrange all of this data. If we go to this HTML node now, we have 21 items, and every one of those items is a big array of links. This is a ton of information, but if you think about it, we also want to make sure we're grabbing the lead data too. We want to make sure we get Joshua's information, for instance. So I'm going to clean this up by adding an Edit Fields node right over here. The Edit Fields node basically lets us pick fields and rename them according to some spec.
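Under the hood, the extract step is doing something like the following. This is a regex sketch for illustration, not a real HTML parser; the node itself uses a proper CSS-selector engine:

```javascript
// Rough equivalent of the HTML node's settings here: selector "a",
// attribute "href", return array, trim values, skip empty links.
function extractLinks(html) {
  const links = [];
  const anchor = /<a\b[^>]*\bhref\s*=\s*["']([^"']*)["']/gi;
  let match;
  while ((match = anchor.exec(html)) !== null) {
    const href = match[1].trim();
    if (href) links.push(href); // drop empty href="" links like the one above
  }
  return links;
}

const demo = extractLinks(
  '<a href="/contact">Contact</a> <a href=" /blog ">Blog</a> <a href="">x</a>'
);
// → ['/contact', '/blog']
```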
So I drag this first name here, then the last name, and so on, and what I end up outputting is a much simpler version of all this data; I'm basically removing all the fields I don't find important. Now, in our case, we're getting a ton of these "does not exist" warnings, so we need to unpin the HTML node and execute again. Because n8n isn't sure exactly which record maps to which, we need to unpin them all and test this fresh. It's going to run through that Google Sheet again and grab the Apollo URL. It'll then feed that into our Apify scraper over here. After that outputs, I think it was 96 or 98 items or something, we filter out a lot of them, bringing us down to around 28, of which 21 or so scrape successfully. Out of those 21, we extract all the HTML content, and finally we're left with 21 nicely formatted items. If you're curious, they look like this: first name, last name, website, headline if they have one, location, phone number (empty in this case), and then just a big list of links. Okay. From here, if you think about it, we need a way to iterate over this list of links, because we have, what, 21 items here. Basically, I want to process David first, and after I'm done processing David, process Zayn, and after Zayn, Michael. The best way to do that in n8n is with what's called a Loop Over Items (or Split In Batches) node. This can be pretty intimidating and annoying to deal with if you've never run one before, but I'm just going to delete the "replace me" node and run you through how it works. Basically, there are two routes here: a done route and a loop route.
What the loop route does is this: anything you put in it runs for each batch, and then you connect the loop route back to the input. So basically, for every one of these 21 items, we're going to do something. If you think about it logically, what do we want to do? Well, we want to go into these 21 items, take all their links, and do some processing on those links; basically, run HTTP requests for each, right? So I'm going to go to the loop branch, and first, if you think about it, these links right now are all buried inside of an array, so we have to split them out of the array. I'll add a Split Out node, feed the loop directly into its input, and move it down here a little lower so it's nicer. Unfortunately, we can't see previous nodes' data at this point; that's just how the loop works. The field from the left that I want to split out is called links, so that's what I'll use. And then this done route triggers after the loop is finished. For now, I'm just going to loop this back in to keep it super simple and show you what this looks like. And just to make my life a little easier, I'm going to go through and pin the rest of these outputs. That way, when I start, it'll run immediately to the Loop Over Items node and split all of these out. You'll see what I mean by splitting them. We just fed in 21 items, and now, for every one of those items, I want to show you what the links look like. Just go down to number one. Notice how, for every run, for all 21, we're outputting a massive list of links. So this massive list of links, which is pretty long (I think it's 30-something), is for run one.
This massive list of links is for run two, this one for run three, this one for run four. Basically, each run is a different person and their website. So what do you think we're going to do now? Well, for each person and their website, we're going to process these links a little. I mean, check this out, this stuff is crazy: most of these are basically exact duplicates, for Christ's sake. Also, these are absolute URLs, not relative URLs. To make a long story short, what we want is a bunch of links that look like this: /home-valuation, /why-list-with-us, /communities, /leawood-real-estate. Stuff like this just makes it a lot easier to process with HTTP requests later. So we're going to keep a bunch of relative links and then prepend the initial website URL; in that case, it would be the site's domain followed by /why-list-with-us. So I'm going to go over here and click add filter, and (it's unfortunate we don't have access to that data in the panel) what I'm going to do is just make sure that the string starts with a slash. The way this data is being output, it's under links, so I know how to reference it: I'll switch to expression mode and use $json.links. Now, if I pin this and test it, what ends up happening is we feed in 270 items, loop over all of those links, filter them, and end up with just 121 items. So basically, we discarded a bunch of the links that didn't match. These, for instance, are all absolute URLs; we just wanted the relative ones on the website. Is this the perfect and best way to do it? No, not really. Theoretically, we could extract the absolute ones too.
I'm just not doing it because I'm a little lazy, and I find that most websites have relative links to begin with. In reality, we don't actually scrape every page on every website; that would be a ton more work than necessary, especially for those really heavy SEO sites. So I'm going to go with a proxy. A proxy just means a thing that's close to the right answer, which here is: if a website has any relative links on it, those are what we scrape. Sure, some websites aren't going to have relative links, and you can totally adjust the logic to handle that on your own end if you'd like. Okay, one thing I'm noticing now is there are a bunch of duplicates, so I'm going to add a Remove Duplicates node and choose "remove items repeated within current input." This immediately removes all of the duplicates in the flow. I know how this works and I'm pretty confident it's good, so I'm going to move forward. Now, to be honest, we're basically ready to do our HTTP request. For each of the remaining links, I'm going to build the URL. I already know what this expression looks like: it's $('Loop Over Items').item.json.websiteUrl followed by $json.links. What this does is concatenate the website URL with the relative URL I'm pulling here. For instance, if my website were leftclick.ai, that would be the base URL, and then the relative URL goes after it. That's what I'm doing right over here: just sticking them together. Because sometimes there are redirects, I'll go down and enable redirects as well. Now I'm going to test it on all of these links. It's going to be a lot of HTTP requests, but it's important that we give this a try. And what we ended up with was 39 items, so I scraped 39 pages here, and the result of those 39 pages is all HTML.
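Putting the three steps together (keep relative links, dedupe, prepend the base URL), the logic is roughly this. leftclick.ai is just the example domain from above, and the link list is hypothetical:

```javascript
// Filter -> Remove Duplicates -> concatenation, sketched as one function.
function buildPageUrls(baseUrl, links) {
  const relative = links.filter((l) => l.startsWith('/')); // drop absolute URLs
  const unique = [...new Set(relative)];                   // drop duplicates
  return unique.map((path) => baseUrl + path);             // base + relative
}

const urls = buildPageUrls('https://leftclick.ai', [
  '/about',
  '/about',                        // duplicate, removed
  'https://facebook.com/profile',  // absolute, filtered out
  '/contact',
]);
// → ['https://leftclick.ai/about', 'https://leftclick.ai/contact']
```

This is the same 270-items-in, 121-items-out narrowing the flow showed, just condensed into one place.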
What I want to do now is convert this into some sort of language that I understand, and the simplest and best way to do that is with an HTML to Markdown node. I'm just going to drag the HTML data into this node, and to make a long story short, it converts the HTML into workable text that I can actually use. After that, we have a bunch of markdown data. If you've never seen markdown, it's pretty straightforward: scrolling through here, what we end up with is just a bunch of links again, plus plain text. This basically makes the token cost a lot lower. For those of you who don't know how HTML works: to put the words "Renter Experience" in a page title, you can't just write "Renter Experience"; you have to write <title>Renter Experience</title>. That's way more tokens than you need, and since we're about to feed all this stuff into AI, we want to minimize the token cost wherever possible. Next up, we have to feed this into AI. You do so by going to OpenAI and clicking "message a model." I've actually already created this message node, so I'm just going to paste it in here to make our lives a little easier, and then I'll run you through what it's doing. First of all, we need to connect our credential; I've connected one already. If you haven't done this, just head over here and get your API key from OpenAI; it'll walk you through all of this, and luckily it's very straightforward. Although, if you're on a beginner-tier rate limit, just note that this may actually push you over it, because we're going to be sending a lot of requests very quickly. The model I'm going to be using is GPT-4.1.
The first prompt I'm going to use is a system prompt that says: you're a helpful, intelligent website scraping assistant. And over here is the actual user prompt, which is kind of where the AI magic comes in: "You're provided a markdown scrape of a website page. Your task is to provide a two-paragraph abstract of what this page is about. Return in this JSON format: {"abstract": "your abstract goes here"}. Rules: your abstract should be comprehensive, with a similar level of detail to a published paper's abstract (that's very important). Use a straightforward, spartan tone of voice. And if the page is empty, just say no content." That last rule is necessary because some of these scrapes are going to turn up empty, and we want a simple and easy way to handle them. So from there, we feed all of this directly into AI and have it tell us something about each page. It doesn't have to be super complicated, as hopefully you can tell; all we need to do is ask. And, sorry, I just ran this one individually; give this a click. What we see is an output that looks like this: "This web page serves as a contact page for Hayan Company, Inc., a real estate firm located in Overland Park, Kansas. It provides organizational contact information, including the physical address, commercial phone and fax numbers, and a link to the company's website. It also invites users to reach out for assistance." So this just gives us some context about what that specific page is. And since we're going to do this for all of the pages, as I'm sure you can tell, we're going to rack up a ton of data. What we need now is a simple and easy way to gather all that data. It's funny, because I keep pronouncing "data" differently every time I say it.
Anyway, what I'm going to do here is take the data, aggregate it into an array, and then feed that whole aggregated array into another AI module that says: hey, here is a ton of information about a website; customize a piece of outreach based on it. The simplest way to do that is the Aggregate node, and the specific field you're going to aggregate is this abstract field, so I'll grab that as the input field name. I'm not going to rename the field; that's okay. Now I'm going to run this end to end, left to right, making sure the data isn't coming from pinned nodes, if that makes sense. Because if you pin all the nodes and only sometimes test off pinned data, it can bug out, since a later node may actually receive more or fewer records than you pinned. So we're going to run this end to end on new data. Give that a try: the Apify scraper runs over here, and we now have a bunch of items. Then we're doing tons of HTTP requests; you can see we've gotten a slightly different number of items this time. It's running, and it's just kind of stacking up: what I did is connect the end back to the beginning here, so we're looping over, and as you can see, the summarize-website-page step is taking a bit longer, probably because it's a pretty long page. Okay, it looks like this is hanging, and I think the reason is that we're feeding in a boatload of tokens; some of the website scrapes are a lot longer than I was anticipating. So I'm going to check the length of the data here. This one is 18,898 characters, and one token is around four characters, so 18,898 divided by four is roughly 4,700 tokens. That's quite a few words, right? Why don't I cap this at 10,000 characters?
Let's say if the length is greater than 10,000, then (let me switch to the expression field to make it easier for you guys) we slice it from 0 to 10,000; if not, we pass it through as is. All right, that should be good, and now we're always working with reasonably simple, small data. Actually, let's go even lower, say 5,000; we don't need much more than that, realistically. Okay, let me just touch that formula up; I added an extra length there by accident. Now let's run this one more time. And if we click on this little Aggregate node here, what we see is we have multiple runs, and every time there are multiple links, we aggregate the abstracts together. Now, in this case, the first two people that ran through the system both work for Hath and Company Incorporated, so there's obviously the same set of links for each. This one did aggregate, though: notice how we had one page here and another page over here, and it pulled all of those fields together. In this case it's pretty long, right? It's 23; that's why it took so long to finish. Anyway, what I'm trying to say is: now that we've aggregated all the data, we have to pass it through another AI node. I'm going to feed this through a second one called "generate multi-line icebreaker," where we feed all these independent website summaries into said multi-line icebreaker. Let me give this a double-click. The setup is the same as before up top, and the text reads: "We just scraped a series of web pages for a business called..." oh, sorry about that, we don't actually have the name of the business in this one.
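The capping expression amounts to the following. 5,000 characters is the final cap I settled on, roughly 1,250 tokens at four characters per token:

```javascript
// Truncate the page markdown so we never feed the model an enormous scrape.
// The four-characters-per-token figure is a rule of thumb, not exact.
function capMarkdown(text, maxChars = 5000) {
  return text.length > maxChars ? text.slice(0, maxChars) : text;
}

const short = capMarkdown('small page');      // passes through unchanged
const long = capMarkdown('x'.repeat(20000));  // cut down to 5,000 characters
```

In the node itself this is a one-line n8n expression doing the same ternary on the incoming field.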
Your task is to take their summaries and turn them into catchy, personalized openers for a cold email campaign that imply the rest of the campaign is personalized. We just tell it what we want: "Return your icebreakers in the following JSON format." Now, what I've done is written out a custom icebreaker that I know works pretty well: "Hey [name], love [thing]. Also [doing/like/a fan of] [other thing]. I hope you'll forgive me, but I creeped you and your site quite a bit, and I know that [another thing] is important to you guys, or at least I'm assuming this given the focus on [fourth thing]. I put something together a few months ago that I think could help. To make a long story short, it's [insert thing you're selling], and I think it's in line with [some implied belief they have]." The point I'm making, and you guys can do this however the heck you want, is that I have AI fill in the blanks for me, but not with simple templated variables. I make it flexible: I'll use a placeholder like [thing] and tell the AI, "Hey, fill this in with something," where [thing] is the thing I want filled in. This is in contrast to actual variables, which are procedural and always fixed. If you tell the AI, "I want you to write this casually, in short form," it can paraphrase those things, and the result seems a lot more realistic to a customer, I find. So use whatever the heck you want here; this is just my template. Then I have a ton of rules. Write in a Spartan, laconic tone of voice. Make sure to use the above format when constructing your icebreakers; we write it out this way on purpose. Shorten the company name wherever possible: say "XYZ" instead of "XYZ Agency," "love AMS" instead of "love AMS Professional Services," "love Mayo" instead of "love Mayo Inc." Do the same with locations. Then, for your variables, focus on small, non-obvious things to paraphrase.
The idea is to make people think we really dove deep into their website, so don't use something obvious. Do not say cookie-cutter stuff like "love your website" or "love your take on marketing." All of these rules are important. Shortening names makes the email seem more human-written, because humans tend not to write out the whole company name. If you say, "Hello, I love Mayo Incorporated," odds are you scraped "Mayo Incorporated" somewhere from the internet. If you say, "Hey, love San Fran," odds are you did not scrape "San Francisco" from anywhere. The same goes for the variables: when you focus on small, non-obvious things to paraphrase, it doesn't read like scraped data. All of this just makes the email seem a little more human-written. Then I give it some examples of profiles and websites. What I did is run this exact same flow on a couple of test demo websites, grab the profile, and grab a bunch of different website scrapes, and I'm feeding those in as an example. So instead of just telling it what I want, I fed it an example of the perfect outline for an icebreaker, and then at the end I feed it the real data. To make a long story short: I'm doing a system prompt first, then a user prompt where I give it instructions and an example input, then an assistant message with the example output, and finally the actual user prompt with the real input. Once this is generated, if you think about it, our job here is basically done. All we need to do now is update or add a row to the Google Sheet we had before. So I'm going to delete all of these, go to Google Sheets, then click "append row." Mixing up my platforms here. I'm just going to use the YouTube credential. Then I want to select the document, obviously.
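The system / user / assistant / user layering just described can be sketched as a message array. This is the standard few-shot chat-message pattern; every prompt string below is a placeholder, not the actual prompt text from the video:

```javascript
// Few-shot message layout: system prompt, then user instructions with an
// example input, an assistant turn showing the ideal icebreaker output,
// and finally the real input. All content strings are placeholders.
function buildIcebreakerMessages(instructions, exampleInput, exampleOutput, realInput) {
  return [
    { role: 'system', content: 'You write personalized cold-email icebreakers.' },
    { role: 'user', content: `${instructions}\n\nExample input:\n${exampleInput}` },
    { role: 'assistant', content: exampleOutput },
    { role: 'user', content: realInput },
  ];
}

const messages = buildIcebreakerMessages(
  'Turn these page summaries into a catchy opener. Return JSON.',
  'Summaries scraped from a demo website...',
  '{"icebreaker": "Hey Sam, love the lender vetting guide..."}',
  'Summaries scraped from the real lead...'
);
```

The assistant turn acts as a worked example: the model sees exactly what a "perfect" output looks like before it ever touches the real input, which tends to produce much more consistent results than instructions alone.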
So, the document is the multi-line icebreaker generator. The sheet I'm adding data to is going to be "leads." Then I have to map all these fields, and since I can't actually see this data yet, I'm just going to give it a quick run and do the mapping in a second. Okay, from here, I've now mapped all the correct fields: the first name field fills in this column, the last name and email fields fill in these, then website, headline, location, phone number, and finally the multi-line icebreaker. The icebreaker is the new field that I couldn't access before. So now we're going to give it a run. Because I need to loop this, I have to grab the output of this node and connect it all the way back to the input of the loop; when it's done, I don't need to do anything else. Okay, let's give this a try now on some real live data. I'm going to click "test workflow." The first thing it does is split everything out like we did before. Then it filters and removes all the duplicates, starts the HTTP requests, formats each page as markdown, summarizes the individual pages, aggregates all of those summaries into one array, feeds that whole array into the multi-line icebreaker, adds the row, and then proceeds line by line doing the same thing. Now, the first two leads in this list were obviously from the same company, Hath and Company. You can filter out duplicate company websites if you want; I didn't do that here because I actually think it makes sense to pitch two people at the same business. A cool thing you could do, which I'm not doing here, is reference the other person: "Hey, David, I just reached out to Zayn and wanted to follow up with you as well." If you do stuff like that, people are a lot more likely to believe you're a real human being doing this outreach, not some automated super cool robot.
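The end-to-end order just described — scrape, markdown, summarize, aggregate, icebreaker, append — can be sketched per lead like this. Every helper stands in for an n8n node (HTTP Request, Markdown, the two AI nodes, Google Sheets append); none of them is a real n8n API, so you'd pass in your own implementations:

```javascript
// High-level sketch of the workflow order for a single lead. Each helper
// is a stand-in for an n8n node, not a real n8n API: scrapePages = HTTP
// requests, toMarkdown = Markdown node, summarize = per-page AI summary,
// writeIcebreaker = the multi-line icebreaker AI node, appendRow = Sheets.
function processLead(lead, { scrapePages, toMarkdown, summarize, writeIcebreaker, appendRow }) {
  const pages = scrapePages(lead.website);                      // one HTTP request per link
  const summaries = pages.map((p) => summarize(toMarkdown(p))); // one AI summary per page
  const icebreaker = writeIcebreaker(summaries);                // aggregated array goes in
  appendRow({ ...lead, icebreaker });                           // new row in the leads sheet
  return icebreaker;
}
```

Seeing the per-lead flow as one function also makes the looping obvious: the workflow just calls this once per lead, which is exactly what connecting the last node back to the loop input achieves.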
Okay, looks like we just added another one that says, "Love how you got the Casey trusted partners list dialed in. Also a fan of that local lender vetting approach." Very nice. We're just about wrapping up our test set here. So, hopefully you guys appreciated seeing me put that system together in real time. I hope it's abundantly clear, but that system is just a nugget, a stem. You can add onto it and make it arbitrarily complicated if you want. I used some very simple filtering logic here because I wanted to let you guys do whatever the heck you wanted with it. I also tend to be kind of hacky in the way I put things together, so I'll focus on the 80/20 version that does 80% of what I want, even if it's a little less cost-effective or I end up wasting some leads. To me, leads are never the bottleneck, just because we have platforms like Apollo and Apify that let us get an almost infinite number of them. Aside from that, you can squeeze a lot more juice out of this system. You could do a number of things beyond what I did here. Instead of just scraping all the website pages and putting them into an icebreaker, you could use the scrape to generate an asset. You could build a big dossier on the client. You could make this a person-research machine, where you build up a massive list of information, extract everything about them, feed it into a big database, and then use that database to pitch people in a much more detailed manner than I am. You could generate multiple different types of icebreakers and test them against each other. You could blast out to everybody on the same domain and use other people's information to reference those people in the emails. You could say, "Hey, David, I was looking through Zayn's profile;
pretty sure he's one of your colleagues, and I noticed [X, Y, and Z really unique, cool thing that only a human being would notice]. Just wanted to say great work, and I had a proposition for you." When you say stuff like that, people again just assume you're a real human being who sat down and is doing this manually. And whether or not your pitch is even that incredible, when you can imply that you're a real human being and convince someone of that, they're a lot more likely to take the rest of your pitch
Outro
seriously. You're crushing it. You just built a cold email system that can generate 5 to 10% reply rates through deep website personalization, assuming of course the rest of your copy is up to snuff. The idea is that now you, or the person you're selling the system to, has prospects reaching out and showing genuine interest in automation services. Congratulations if you guys are still here. You've completed the most comprehensive n8n master class on the internet that features live building. You went from someone with no idea how to put together an automation to somebody with a complete arsenal of professional-grade AI workflows that businesses are willing to pay thousands for, plus the knowledge to build these things from scratch. At this point, you're no longer just a beginner watching an automation tutorial. You're hopefully equipped with eight proven systems that solve real business problems. You understand n8n foundations, you can build complex multi-step workflows, and you have systems that typically sell for $3,000 to $15,000 in implementation. You can reference any part of this master class using the timestamps below, so please bookmark this video and come back any time you want to brush up on a specific workflow or technique. Now, in reality, having technical skills is just the beginning. The gap between people who understand n8n and people who actually make money with it is not really technical knowledge. It's execution, accountability, and staying consistent with the daily activities that generate clients. That's exactly what Maker School solves. It's my 90-day accountability program that guarantees you your very first automation client or your money back. Instead of wondering what to focus on every day, Maker School gives you a clear daily action plan. The whole idea is to eliminate the decision fatigue so inherent in stuff like this.
It just keeps you moving towards your goal as effectively as possible. Many members start selling the exact same systems we just built together within a couple of weeks of joining, and they build revenue streams that deliver consistent monthly income. If you're serious about turning these skills into actual income, I'd encourage you to at least consider joining Maker School. Aside from that, thanks for completing this master class, and I'll catch you all in the next video. Peace out.