# [Extended Live Demo] Automate a Viral Newsletter Using n8n + Real-Time Web Data

## Metadata

- **Channel:** n8n
- **YouTube:** https://www.youtube.com/watch?v=SoDip-tvZ0Y
- **Date:** 22.08.2025
- **Duration:** 1:15:52
- **Views:** 5,086

## Description

Join us for a free, live demo with our partner n8n, where we’ll show you how to automate a viral newsletter using Bright Data’s verified node in n8n’s powerful workflows.

The challenge prompt: Build an unstoppable workflow that shows how real-time data can supercharge what AI agents can do.
- 5 chances to win
- Exclusive badges for participants and winners
- Submissions due: August 31, 2025
- Up to $5,000 in prizes

👉 Join the challenge here: https://lnkd.in/dBPMMTRr

## Contents

### [0:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y) Segment 1 (00:00 - 05:00)

All right, welcome everyone. I think we've already got 440 viewers connected and live. My name is Angel. I work here at n8n as a staff developer advocate. Once again, pleasure to meet you all on the line. I also have Rafael with Bright Data and Peter as well with DEV. So, I wanted to give them a moment to introduce themselves. I'll start with you, Peter. Hey, how's it going, man? — Everyone doing very well. Excited to be here with you, Angel, Rafael. Look forward to going through this. I'm one of the co-founders here at DEV. So over to you, Rafael. — Hey guys, nice to meet everybody. I'm Rafael, developer relations at Bright Data. I'm here to explain a little more about what we do, and as Angel goes through the build, I'm going to show why Bright Data is a powerful add-on to all of this. — Awesome. I'm excited as well. I've had a chance to work with the teams at Bright Data and DEV for the past couple of weeks, and I'm really excited to be doing this live build with everyone here. Our goal today, just to give you an idea, and I think most of you are already familiar with this: me and Rafael are going to be building out a live newsletter generator using n8n, and we're going to be using the Bright Data node to do some of our Reddit grabbing and manipulation. So, we're going to dive right into it. I'm going to start by sharing my screen. Unless — do you guys have anything else you'd like to share before we get started? — I'll just share quickly, and Angel and Rafael will touch on this, but we have a really fun challenge going on right now on dev.to. So, if you are building with Bright Data, if you're building with n8n, we really encourage you to come on board, come to DEV, build something awesome, show it off, earn a DEV badge, and up to $5,000 in total prizes will be distributed. So, it's a really cool opportunity to build.
I'm personally very excited about this because we know and love both platforms, n8n and Bright Data. We make heavy use of them here at DEV for our internal needs. So really excited for this live stream and to see what everyone builds. — Awesome. All right, let's dive right into it then. I'm going to go ahead and share my screen here. We're going to start with a blank canvas. All right, let's see if we can see this here. I'm going to pop our little faces here at the bottom. There we are. I think everybody can see us now. So, let's dive right into it. My goal today is to give you an idea of what a live build looks like using n8n, from beginning to end. We're going to use a mix of AI, a mix of Bright Data's node, and a little bit of publishing know-how on the dev.to side. To that end, we've been looking online, and one of the things we really like is this. I believe it's Morning Brew, or is that what it's called, Peter? I believe it's called Morning Brew if I'm not mistaken. — Yeah, there's tons of personalized digests: Morning Brew, Axios, on and on down the line. But this will set us up well. — Exactly. So, what I'm trying to do here is take the daily posts from Reddit, specifically the n8n subreddit. I want to pull those posts, feed them to an AI agent using the comments as context, and then from there, send myself an email summary and publish it to dev.to as well. So, let's go ahead and dive right in. And feel free to ask questions in the — I'm glad you like the cat, Florian — feel free to ask questions in the chat. We'll do our best to highlight them, and we'll pause if we need to dive into why we're doing some of the things that we're doing. So, the very first step when building out is a trigger.
So when I'm working through on the dev side, the first thing I do is usually start with a manual trigger. At the end of this, we're going to replace it with a schedule trigger, because we want this to run every day, but for now, my goal is just to be able to trigger it manually, get this thing running, and iterate on it as I build. Vibe coding, if you want to look at it that way. So let's go ahead and add the manual trigger node. And what I'm trying to do is go to reddit.com/r/n8n. There we go. And what I want to do is summarize everything that's in here. Now, one of the best ways to do this, in my opinion, is via RSS. This may not be widely known, but if you put .rss at the end of your subreddit URL, you actually get an RSS feed. RSS stands for Really Simple Syndication. It allows us to programmatically get information. Now, it's not enough information, but it's a good starting point. Having worked with the Bright Data node, one of the things that I've learned is that it helps to have

### [5:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=300s) Segment 2 (05:00 - 10:00)

the links that I want to scrape. So, let's go ahead and start with the RSS feed and see what we get. I'm going to go ahead and paste my link here and hit execute. And cool, there we go. So, I like to start with the JSON view. That gives me something that's a little bit easier for me to understand. Here, as you can see, we get the title, we get the URL link, we get when it was published, the author, and a little bit of the content and a content snippet. So, as you can see, this is a great starting point. We could probably do most of what we need with this, but — I see John Fred saying, "Use the dark mode." Yes, I should do that. I have it set to change automatically, but unfortunately it is a little bit heavy on the eye. So, I'll switch that here in just a moment. We could use the content from the post, but I think a lot of the benefit of Reddit is in the comments. So, what I really want to do is feed not just the Reddit post, but also the comments into our AI agent, to summarize not just what is being posted, but what is being said about what is being posted. For now, as you can see, we have pulled out 26 Reddit posts. So, this is great. This is a good starting point. What I'm going to do next is standardize some of these outputs. To be able to work within the Bright Data node, I need to get a list of URLs that I want to get more information on, and I need to format those in a way that works for Bright Data. And one thing that is bugging me: I am going to switch to dark mode. So let's do dev.to hackathon. And let's go ahead and change that up real quick. So here we go. There we go. Very important. Ah, much better. — Much better. — All right. Good. So we have our data here. This is good. We have what we need.
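As a rough sketch of what the RSS step is doing under the hood: Reddit serves an Atom feed when you append `.rss` to a subreddit URL, and n8n's RSS node parses out title, link, author, and so on. The snippet below parses an inline sample feed (the entry contents are made up for illustration) so it runs without network access.

```python
# Minimal sketch of what the RSS Feed Read step does with a subreddit's
# ".rss" URL. Reddit serves Atom, so entries live in the Atom namespace.
# The sample feed below is inline and illustrative, so no network is needed.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

sample_feed = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>n8n</title>
  <entry>
    <title>Built my first AI agent workflow</title>
    <link href="https://www.reddit.com/r/n8n/comments/abc123/built_my_first_ai_agent_workflow/"/>
    <updated>2025-08-20T12:00:00+00:00</updated>
  </entry>
</feed>"""

def parse_entries(xml_text: str) -> list[dict]:
    """Return a title/link pair for each feed entry."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": entry.find(f"{ATOM}title").text,
            "link": entry.find(f"{ATOM}link").attrib["href"],
        }
        for entry in root.findall(f"{ATOM}entry")
    ]

posts = parse_entries(sample_feed)
print(posts)
```

On stream this produced 26 such items, one per post on the subreddit's front page.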
Now, instead of going into the set field, let's open up the Bright Data node real quick. So, here's Bright Data. As you can see, we have this little check mark. This is an official node maintained by Bright Data. So, if you're working within n8n Cloud, or if you're using the self-hosted version of n8n, you should see this. You may need to install it; as you can see, I've already done that. You'll have a little install button if you're using n8n Cloud. And what we're going to be doing here is a batch get data. We're going to use web scraper as our resource, and we're going to trigger a collection by URL. So let me pull — while you're doing that, let me jump in a little bit and tell you about Bright Data, because I saw some people asking about it. Bright Data is a data collection enabling platform, right? We have all kinds of tools, proxies, and infrastructure to collect data at scale, on a mass scale. And we created a node for you guys to easily use our tools without having to code anything. Just drag and drop. And it has a lot of capabilities; we're going to be using some of them. Right now we're only going to be using a pre-built collector for Reddit comments, from what I understood. Right? — Reddit posts. Yeah. — Posts. Okay. So, what this will do is actually trigger a collector, a scraper, to open up a browser, go to this URL, scroll through it, and collect all the data that is there. — Perfect. Yep. And just to build on what you're saying, Rafael: we have a couple of ways that we could approach this, right? One of the ways is we could pass one of these URLs in at a time. Since I've already triggered this, just to give you an idea: here's our link. In theory, I could just come in here. Here's my URL. I'm just going to cut it out here, and we could paste it here. But this is going to create a problem for us.
One of the really cool things about Bright Data is that I can send in links in bulk. But because of how I've done this right here — and something that I like to educate new n8n users on is, when you see this number right here, this 26, whatever is flowing from this node to this other node, we're going to trigger it 26 different times. Now, normally that would be okay if we got back the data we needed right away, but in this particular case, this endpoint for Bright Data is going to take a few moments to actually run. It's going to receive the URL and say, "Hey, I got your request. Here's an ID for whenever this job is complete, and we're good to go." What we could do instead is send all 26 URLs at once as a single

### [10:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=600s) Segment 3 (10:00 - 15:00)

request to be able to get it all in one go. And that's what we're going to do. So, let's go ahead and do that. I'm going to disconnect this step right here for just a moment. And I'm going to use one of my favorite nodes here: the set node, or the edit fields node, I should say. In here, I'm going to click and drag the link. Now, we're starting to build our batch JSON object. One of the things we noticed, if we go back into the Bright Data node, is that url is what Bright Data uses to tell me, hey, this is a link. So, we need to match that. I'm going to go back to my edit fields node and name the field url so that it matches that same format. If I hit execute, you can see we already have the beginning of what we need here. We have an array object — that's what these little brackets represent — and then we have JSON objects within it. But we're still firing them as individual array objects. So what I need to do is flatten this down into a single array object. And actually, if my mentor were here, he would smack my hand here, so I'm going to rename my nodes as I'm going along, to help with the building, the vibe coding process. Here I'm going to do extract URL, just so that we know what's happening on the canvas. And actually, let's make it extract URLs, plural. Okay, cool. Now, we're going to use the aggregate node, and we're going to join them all together. I'm going to click and drag url into our fields to aggregate. And what you're going to notice, if we go ahead and fire this off, is that now we have them all as a single object. So here, if we look, we started out with 26 individual array objects. Now we have one array object that contains all 26 URLs within it. And what we're going to do is pass this right along to the Bright Data node.
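The extract-and-aggregate step can be sketched in a few lines: each n8n item carries one link, and the Bright Data node's batch input wants a single array of `{"url": ...}` objects, serialized as a JSON string. The field names and sample data below mirror what was shown on stream but are illustrative.

```python
# Sketch of the Extract URLs + Aggregate steps: rename each item's "link"
# field to "url" (matching the Bright Data node's expected key), flatten the
# items into one array, then JSON-stringify the batch payload.
import json

# A few items as they come out of the RSS node (trimmed from 26 to 3)
rss_items = [
    {"title": "Post A", "link": "https://www.reddit.com/r/n8n/comments/a/"},
    {"title": "Post B", "link": "https://www.reddit.com/r/n8n/comments/b/"},
    {"title": "Post C", "link": "https://www.reddit.com/r/n8n/comments/c/"},
]

# Edit Fields step: keep only the link, renamed to "url"
extracted = [{"url": item["link"]} for item in rss_items]

# Aggregate step: one item whose "data" field holds the whole array
batch = {"data": extracted}

# Serialize, since the node takes the batch as a JSON string
payload = json.dumps(batch["data"])
print(payload)
```

One request with this payload replaces 26 separate trigger calls, which is exactly the problem the aggregate node solves here.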
So, I'm going to go ahead and connect that in, and pass that into our URLs. So, I'm gonna go ahead — Also — Yeah, go ahead. — Angel, I want to also add why it's better this way than running 26 API calls: because we have a smart system that's going to retry anything that failed. When you're sending a batch, it's going to process it all. It's going to make sure everything is done correctly. If something fails, it's going to retry it, and so on. So you will get all the data fully, and it actually usually has an even higher success rate than individually firing them off. — Exactly. Correct. So that's my goal here: I want to be able to get all of them in one go. Now, I'm also going to do a little bit of transformation here. Currently we have the array object. This actually may work, although it does not have the url key, so we're going to have to play with it here in just a moment. But let me just confirm it looks correct. Yep, that looks right. And then I'm going to JSON-stringify it. There we go. — Now it looks good. — Yeah, that looks a little bit better. All right. So, I'm just going to confirm. Yeah, comma-separated strings. Excellent. Okay. Cool, cool. So, we're going to test this. I'm going to save — get into the habit of saving. And here, let's rename this. Let's call this aggregate URLs to single object. There we go. Let's go ahead and save again. And let's see if we get a successful execution. We may or may not, depending on the format. So there we go. Excellent. Oh — invalid input provided. The issue here is that we don't have url and then the URL itself; we just have all the URLs together. So we need to reformat this. Let's go ahead and do that live. Let's step through it. This is the format that we want. We want to be able to see it in this exact format: url, then the URL.
So, I believe if we change this from individual fields to, I believe, data — let's see if that changes it up. There we go. This is what we're looking for. So, now let's go back in here. Let's click and drag data in here. And then let's go ahead and change this to JSON string. That looks correct. All right, let's try this once more. There we go. This means success. So, awesome. As you can see, what we've gotten back is not what we would expect. If I was just playing with this off the

### [15:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=900s) Segment 4 (15:00 - 20:00)

top of my head, I'd be like, what is this? A snapshot ID? Like, what do I do with this? So, this is like a receipt. You're at the store, you've been given a number, you're told, hey, get in line, and once your order is called, you'll be good to go. — It's like McDonald's. — Yeah. Exactly. You get it. We've all had that experience. — Yeah. Exactly. And you can actually check what the status is. You can check what's going on there. There are a few API calls that we can use in order to check things. — Exactly. So, you've hit the nail right on the head. How do we do this in n8n, right? We need a way to constantly check: are we done? Are we done? So, that's what we're going to do here. First thing I'm going to do, so as not to overwhelm Bright Data's servers — although I'm sure they can handle it — is go ahead and pin this. So, every time I run this flow, this node is not going to fire again. It's just going to use that same snapshot over and over as my output. It's already fired the request into Bright Data's servers, so it won't do it over and over again. So, what we want to do is create a loop. I believe they're called for loops in programming, but we're going to loop over checking the status. So, here is our loop node, and this is one of my favorite nodes: the No Operation node. It does nothing. It literally does nothing, but it works great for commenting your workflow. I love it for commenting. So, I'm going to call this check snapshot — to see, or, for success. Want to keep them short. There we go. Check snap. So what we're going to do is keep looping over and over here. As Rafael mentioned, if we go into Bright Data, we do have a way to check the status of the batch. So right here: a batch extraction, monitor. This is perfect. So, if we think of n8n as working with puzzle pieces, this is the other puzzle piece right here.
So, what I'm going to do is wire up to this node right here, so that data, that snapshot ID, gets passed into the Bright Data node. I'm just going to go ahead and run. Hold on, I've got to clear. There we go. Oh, it won't fire until I turn this off. So, let's turn that off. Let's go ahead and rerun. There we go. Perfect. All right. Cool. So, this is what I was looking for. What we want here is to just click and drag this bad boy into snapshot ID. Now, this has taken so long because of all my talking that by now it's probably done. So, we're going to go ahead and save, run this, and look — status is ready. Now, this isn't always the case. So, we're going to use status is ready as our if check to exit our loop. We're going to add an if right here, and we're going to go ahead and connect it. Let's see. We're going to rename this check if batch ready. There we go. And we're going to do status is equal to, and then lowercase ready. So in this case, it's going to be true. It is ready. It's ready to go. But that's not always the case. If it's not the case, it's going to go down this false path, and we're going to add a 5-second wait. We're just going to tell it, hey, slow down for just a little bit. Calm down. Let's wait five seconds and try again. So, wait five seconds. There we go. We're going to go ahead and hit save. Get into the habit of hitting that save button like it owes you money. We're going to hit clear, and I'm going to run it one more time to make sure we go out the true path here. Perfect. That's what we want. But now, let's see this in real time. I'm going to unpin this data, so we're starting fresh again. We're going to send another execution to the Bright Data node to see how long it takes to process those 26 Reddit URLs. So, I'm going to go ahead and just fire it off from here.
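The loop being wired up here — check the status, exit on "ready", otherwise wait five seconds and check again, with a safeguard against looping forever — can be sketched as plain code. `check_status` below is a stand-in for the Bright Data monitor call (here it fakes a job that finishes on the third poll), and the function and variable names are illustrative, not part of any real API.

```python
# Sketch of the polling loop on the canvas: the If node's true branch exits
# when status == "ready"; the false branch waits and loops. max_polls plays
# the role of n8n's infinite-loop safeguard.
import time

_responses = iter(["building", "building", "ready"])

def check_status(snapshot_id: str) -> str:
    """Stand-in for the 'monitor batch' call; a real one would hit the API."""
    return next(_responses)

def wait_for_snapshot(snapshot_id: str, delay: float = 5.0,
                      max_polls: int = 60) -> str:
    for attempt in range(1, max_polls + 1):
        status = check_status(snapshot_id)
        if status == "ready":      # If node, true branch: exit the loop
            return f"ready after {attempt} polls"
        time.sleep(delay)          # Wait node, false branch: pause, then retry
    raise TimeoutError("snapshot never became ready")

result = wait_for_snapshot("s_abc123", delay=0.01)
print(result)
```

On stream the real job reported ready after roughly 20 seconds, i.e. a handful of trips around this loop.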
Cross your fingers. There we go. All right. So, it failed the check. So, now we're waiting 5 seconds. And while it's doing that, let me just tell you a little more about why the Bright Data node is important. Because everybody's thinking, oh, I can just send an HTTP request and get the HTML back, or I can use some other way of extracting the data. Guys, if anybody has ever tried to scrape anything online — everybody's seen captchas, anti-bot systems, "click here if you're a human," and all of that, right? Bright Data removes all the headache of getting blocked. The node will literally solve captchas if needed. It will make sure that it looks like a human being, and it's going to go and get the data for you. Now, we're only talking, of course, about publicly available data. So, don't expect it to go behind a login and extract something. But as long as the data is publicly

### [20:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=1200s) Segment 5 (20:00 - 25:00)

available, this node will get you the data no matter what. — Exactly. And that's what I've seen. Now, those of you that have been paying attention will have noticed I made a mistake. It ran one time and it quit. So, let's talk a little bit about looping. I've worked with a lot of low-code/no-code tools in the past, and one of the biggest headaches to deal with is looping. Typically, with a low-code/no-code tool, you have a loop node or a loop block, and you have to tell it how many times you want it to loop. So in this particular — yes, thank you, Liam, you beat me to it. That's where I was going. n8n has put safeguards into place to avoid infinite loops. We do not want this to run infinitely, because if we do, it could knock down the server. By default, n8n will run the loop as many times as there are items. So, as you can see, it says one item is flowing into the loop; it's only going to loop one time. This is really useful because it allows you to control the number of loops that are going to be run inside of it. Now, in this case, we don't want that. We want the opposite of that. We want it to run an infinite number of times. So, as Liam called out there, if we open up the loop branch and hit add option, you'll notice this reset. What this does is reset that number that flows in, and it will keep looping and looping until you have an exit. And here is our exit point. This will exit the loop gracefully when it's time. So, let's go ahead and save. I've turned on that reset option. Let's try this once more. I'm going to go ahead and execute the workflow. There we go. As you can see, it failed the check the first time. It's gone out the false branch. We're waiting 5 seconds. Checking the snapshot again. Excellent. So, we're already on loop two. We're already successful. So, now we're just waiting. We're waiting a little bit longer.
So, now we're at about 15 seconds for the Bright Data job. All right, that was fast. Only 20 seconds in order to get a status of ready. Now that we have that, I'm going to go ahead and pin this again. I don't want to keep using up my Bright Data credits if I can avoid it. Okay, now that the batch is complete, we can actually get the data itself from Bright Data. So, I'm going to hit the little plus sign here and search for Bright Data once more. And here we're going to do the download snapshot. Ah, here it is: download snapshot content. So, here we go. This is going to be the first step we take outside of that if loop. What we need is the snapshot ID once again. So, I'm just going to drag it into here. Excellent. And now we can rerun it. It's not going to loop, because it's already done. This job is done running. So, I'm going to clear and rerun once again. There we go. We went straight here. All right. Let's see what we're getting. So, we're pulling out data. Excellent. Oh — building, try again in 30 seconds. Okay, so it looks like it's still running. Let's see if the status was something else. We got zero errors. Snapshot ID. Let's see what we got. Status is building. Let's check the actual loop over items. Do these two match? Huh. Yes, they do. Let's try. — Let me explain a little bit, Angel. What happens is that when the batch is ready, we also give an option on how to export, what the export format is. So if you just have the batch ready, we got the data, we got the HTML, but now we want to convert it into a CSV or JSON. This is where the building status comes in, because you can request different formats. I'm not sure that is in the node. Do we have an option to select the format of the download content? Can you open up the node again? — Yeah, which one? — The Bright Data one. — Yeah. — There we go. — Just want to see if format, right?
So, you can choose CSV or JSON. That's the build time, right? We need to convert whatever we collected into a JSON format or a CSV format. By default, when the collection is finished, it's just in HTML format. So, then we need to parse it and convert it into what is required. This also might take a few seconds depending on how big the data is, because sometimes you want to work with a CSV or JSON. — Correct. And just to answer — I'm seeing some questions in the chat — there are different ways of doing the polling. I like to do it this way just because it's very visual and it makes it easy for people to see what's happening. And then regarding the question of why the loop is failing two to three times and then succeeding: to answer that, it's not really failing. It's just that the status says it's not done. I did call it a fail in a way, because it doesn't have the data quite yet. But what's happening is it's checking the output to see if the status is ready or not. And we'll test it again here in a moment to see if we

### [25:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=1500s) Segment 6 (25:00 - 30:00)

see that once again, as far as the delay in getting the data. And we could always go into settings and, instead of stopping the workflow, use an error output. So now we have this nifty little error output to make it loop again, if we don't get the output we're looking for or an error of some kind. In this case it actually wouldn't generate an error, so we wouldn't do it exactly that way, but what we could do is another if check afterwards. We'll get to that in a moment. I'm going to continue. Let me go ahead and do stop workflow for now. There we go. Is there a callback mode for Bright Data? That might be a question for Rafael. — Callback mode. What does that mean? To send the request and get the data back? Just wait for it to sit there and come back? — Correct. I think that's what he's saying. Liam, could you elaborate there? — Yeah, a little bit. — Yeah, we'll wait for him to do that. In the meantime, I'm going to continue moving along here. I'm just reading the comments, making sure we're not missing anything. That's great. All right, so as we can see, we did get the data back that we needed. So, I'm going to go ahead and pin this for now. Excellent. And what I'm going to do is take a look at what I've got. This is my process when I'm working: I'm just testing. It's almost like a science experiment, seeing what information I'm getting as my input and getting as my output. Ah, I see. So Liam is asking: is there a way to set up a predetermined webhook for whenever a job is ready, to shoot data back into n8n once it's complete? So having two workflows: one that sends the collection in, another that receives a call from a webhook once the batch is complete. Rafael? — Yeah, that's definitely doable. Of course, when you trigger the API collection, you can select for it to actually be sent to a webhook.
So instead of sitting there and waiting, have it sent to the webhook. Once the webhook receives it, continue, of course. — Very cool. I love it. — As a matter of fact, it supports a lot of different options: you can send it to an S3 bucket, your Google Cloud Storage. We support all the storages that are out there. — Beautiful. — I'm sure whatever you guys use. — Beautiful. All right. Excellent. I love it. So, as you can see, I'm getting a lot more data than I was getting through the RSS feed. I have my title, which we were getting in the RSS feed. We have our description, but now we have the number of comments, we have the community name, we have the number of upvotes — very useful — any photos or videos, we have tags, we have related posts, but more importantly, we have our comments. This is what I'm looking for. I want the conversation to be part of the AI's context. This is why we went through all this trouble instead of just using the RSS feed and passing that into an AI agent: the RSS feed doesn't give us enough data, and that's why we have Bright Data here to give us more context, because context is what makes our AI agent smarter. To that end, we still have a little ways to go, because this is too much information. We don't want to feed all this data, all this JSON, into our AI agent, because it's going to be a lot of tokens, and tokens cost money. So, let's go ahead and clean up our output to just the stuff that's important. Again, one of my favorite nodes here: the edit fields node. And what we're going to do is extract essential data. There we go. Cool. So, I'm going to start by getting the title. I care about that. Description — let's go ahead and pull that. Then the comments. I'm going to do it as an array. So, where are we? Comments. Right here. Boom. Is it an array? Yep, it's an array object. Excellent. We're going to do the post ID.
We may not need this, but it's always good to have. Photos. Date posted — I think it's right at the beginning. There it is. Post ID. So, I'm going to click and drag that in there. And the URL. Nothing is nicer than having a newsletter with actual links. Now, the last two things are a little bit more context so that the AI can sort: number of upvotes and number of comments. Where are you? I think I passed it. And I like the schema view sometimes for this as well. — Yeah, it's much easier to find. — Yeah, there it is. Found it right away with that view. Perfect. All right, cool. So, here we go. We've got, I think, what we need, right? So if I execute this, it's a little bit cleaner, a little bit less data, a little bit more context.
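The "extract essential data" step boils down to a whitelist over each post object: keep only the fields the AI agent needs, drop everything else, because every extra key costs tokens. The field names below mirror what the Reddit collector returned on stream, but the exact keys and sample values are assumptions for illustration.

```python
# Sketch of the Edit Fields whitelist: keep only the fields that matter for
# the newsletter agent; drop the rest of the scraped JSON. Field names here
# are illustrative approximations of the collector's output.
sample_post = {
    "title": "Built my first AI agent workflow",
    "description": "Sharing what I learned...",
    "comments": [{"comment": "Nice work!", "replies": []}],
    "post_id": "abc123",
    "photos": [],
    "date_posted": "2025-08-20T12:00:00Z",
    "url": "https://www.reddit.com/r/n8n/comments/abc123/",
    "num_upvotes": 42,
    "num_comments": 1,
    "related_posts": [],      # dropped below, along with other extras
    "community_name": "n8n",  # also dropped
}

KEEP = ["title", "description", "comments", "post_id", "photos",
        "date_posted", "url", "num_upvotes", "num_comments"]

essential = {key: sample_post[key] for key in KEEP}
print(sorted(essential))
```

The upvote and comment counts survive the cut precisely because they give the agent a cheap engagement signal for sorting.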

### [30:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=1800s) Segment 7 (30:00 - 35:00)

So our next step is to refine this even more. As you've noticed, we have 26 items being output here, right? What we want to do is get to a point where we have one item being output, with all of these 26 items pushed together inside of it. But before we do that, we need to convert these 26 objects — with one, two, three, four, five, six, seven fields within each — into 26 items with one field within them. And I'm going to explain why here in just a moment. So I'm going to use my lovely set node once again, and I'm going to call this reduce inputs, or reduce items, or, no — reduce objects to one. Okay. We're going to have one field here. We're going to call it post data, for lack of a more creative term. What we're doing is preparing the data for our AI agent. That's the goal here: we want to make sure that when we get it into our AI agent, we have it in a format that the AI agent can read, can manage, can process. So let's take a look. I'm going to give it a little bit of context. I'm going to come in here, type in title, and click and drag the title. There we go. First one. There we go. Nice little space. I'll hit enter, and I'm going to do the description. Hopefully I spelled that correctly. There we go. I'm going to add that in there as well. Now, after the description, we're going to give it the URL. Let's find that URL. Let's collapse our comments. There we go. We have our URL. Next up, we're going to do our upvotes. I'm creating one little package with all the information that is stored separately as different JSON fields here. So, let's go to upvotes, number of upvotes, and then comments, number of comments. That'll give the AI agent some context as to how much engagement is happening on a given post. Next up, we're going to actually put in the comments.
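The "reduce objects to one" step packs each post's separate fields into a single labelled text block, which is the shape an AI agent prompt digests most easily. A minimal sketch, with made-up sample values:

```python
# Sketch of the post-data Set node: collapse a post's fields into one
# labelled text field, ready to be concatenated into the agent's prompt.
post = {
    "title": "Built my first AI agent workflow",
    "description": "Sharing what I learned...",
    "url": "https://www.reddit.com/r/n8n/comments/abc123/",
    "num_upvotes": 42,
    "num_comments": 2,
}

post_data = (
    f"title: {post['title']}\n"
    f"description: {post['description']}\n"
    f"url: {post['url']}\n"
    f"upvotes: {post['num_upvotes']}\n"
    f"comment count: {post['num_comments']}\n"
)
print(post_data)
```

Once every post is one such string, the later aggregate step can join all 26 into a single item for the agent.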
Now, this is where it's going to get a little complicated, a little crazy, and this is where I have a little bit of experience. And actually, let's change this to comment count, so it's not confused. And then what we're going to do is comments: we're going to start by clicking and dragging the comments here, and we're going to talk through what's happening. I'm going to open this little box right here, and you're going to notice this [object Object]. It's saying, hey, I've got a JSON object in here, and this is what is going to be output. Now, I could get around this pretty easily just by appending .toJsonString(), and it will give me the JSON. But again, that's not something that I want to do. I want it a little bit cleaner than even that. So, I'm familiar with something called JMESPath. JMESPath is a language for manipulating JSON. I'm going to pull up this website, jmespath.org. It's one of my favorite websites for manipulating JSON. And what I'm going to do is pass this array in here and give you guys a quick crash course on JMESPath. There we go. So, I did the at symbol to structure my JSON a little bit more cleanly. There we go. Perfect. Now, what I like about JMESPath is that I can pull out array objects very easily. The reason I want to do that is because I want to get all the comments that are coming in. In this case, we only have one, so it's not really that big of a deal. But on some of these posts that have 10, 15, 20 comments, I want all of them. And if I was to just click and drag the comments in here, I would get everything within the comments, like the URL. I don't care about the comment URL. I don't really care about the name of the person commenting, or their URL, or the date of the comment. All of that is not really relevant to what I'm looking for. I want to isolate the output.
So with JMESPath, what I'm able to do is things like... there we go. If I do that, it gives me — yeah, it's easy to see — just the comment. But it's not only going to give me the first comment like in n8n, where if you click and drag, it'll

### [35:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=2100s) Segment 8 (35:00 - 40:00)

give you usually the first item in the index, which will be the first comment. What I want is to get all of them. All right, excellent. So I want to pull all of these, plus all of the replies. What I might do is come in here and find a post with more than one comment, just so that I can temporarily get an array we can actually see. Let's take a look here: comments, number of votes, comments... see, this one's got multiple comments, so let's use this one. One moment. Actually, an easier way will just be to filter on the number of comments. Here we go: is greater than... let's see if we have ones with more than two comments. Excellent, 12 of them do. I'm going to delete this later; it's just a way for me to get an example object. So there we go, this one is much bigger. Let's copy it over to jmespath.org. I'll do the @ symbol again just to clean it up. JSON is so much easier to read when it's formatted correctly. — I agree. — Yeah. All right, so back to our example. If I do brackets and "comment" — there we go, now we're cooking with gas. As you can see, we have all of the comments. Now, this is great, but how nice would it be if we also had any replies, right? If I scroll down, some of these... ah, here we go, some of these also have replies. So, for those of you not familiar with JMESPath, what I like to do is just use ChatGPT. I might go into ChatGPT and say, "Can you help me generate a JMESPath using n8n?" — or we'll leave n8n out of it for now; I'll show it the structure, but we'll just say: "Generate a JMESPath expression to extract all comment text and the replies as strings. Here is a sample." You always want to give it a sample of what you're trying to edit. "Here's a sample of the data."
"It's comments and replies from Reddit." — Angel, while you're doing this, I want to address a question that I saw. — Yeah, please. — "Wouldn't a Reddit node return the same level of JSON information that Bright Data did here?" — And it's a good question. Bright Data is not just limited to Reddit, right? Bright Data is for everything. Here we're just using Reddit to demonstrate its capabilities versus a Reddit node. In general, you can do this with any website: X, Facebook, anything else — you can use our node to access data from those channels. — I agree. I think Reddit is not the best example; LinkedIn might be a better one. LinkedIn's API is a pain, so using Bright Data for something like LinkedIn would also make a lot of sense. And if we go into the dropdown, for example, you can get a better idea. If we go back to the collection, you can see just how useful Bright Data is. For example, Amazon. I used to have an Amazon store, and I know how difficult it is to get access to Amazon's product database or reviews. That one is really hard; I know they limit some of the output. So again, Reddit is just the example we're using today because it's publicly accessible, but you can use Bright Data for more than just that. But again, publicly available. — Public data, guys. It's very important, right? Us at Bright Data, we stress it: we only deal with publicly available data. — Exactly. All right, guys. So, ChatGPT on the first run took care of what we needed. Here's the JMESPath expression. Again, you don't need to be good at JMESPath; you just need to know it exists and that you can utilize it to create some really powerful outputs. This is the context I want to give the LLM. It reduces the number of tokens because we're getting rid of all the extraneous stuff, but we're keeping the meat and potatoes. I'm glad everyone likes it.
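The exact JMESPath expression ChatGPT produced isn't shown in full on screen, but what it does — flattening each post's comments plus any nested replies into a flat list of strings — can be sketched in plain Python. The field names `comment`, `replies`, and `reply` are assumptions; the real Bright Data Reddit schema may differ.

```python
# Plain-Python equivalent of the JMESPath goal: collect every comment's
# text plus every reply's text into one flat list of strings, dropping
# the URLs, author names, and dates we don't care about.

def extract_comment_texts(comments: list[dict]) -> list[str]:
    texts = []
    for c in comments:
        texts.append(c["comment"])
        for r in c.get("replies") or []:  # replies may be missing or null
            texts.append(r["reply"])
    return texts

sample = [
    {"comment": "Great workflow!", "replies": [{"reply": "Agreed."}]},
    {"comment": "How do you handle rate limits?", "replies": None},
]
print(extract_comment_texts(sample))
# → ['Great workflow!', 'Agreed.', 'How do you handle rate limits?']
```

This is exactly the token-reduction idea from the transcript: keep the meat and potatoes (the text), drop the extraneous metadata.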
It seems like — yeah, this is being recorded. It will be up on our YouTube channel to watch at any time. So, again, we want to

### [40:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=2400s) Segment 9 (40:00 - 45:00)

reduce the number of tokens, and this is going to do it for us. Let's see how we can implement this expression now in n8n. I'm going to copy the expression, go back here, and open up our object in this little box. The array we want is the JSON comments. There we go. Now if we scroll down: [object Object] again — not what we want, but we're going to keep this, because it's going to be the first argument of our JMESPath call. So I'm going to type the dollar sign, then jmespath, open quote — we'll do a comma rather — then the brackets and the @ symbol, and close it, just to show you we still have our objects. Now, if I replace the @ symbol with our expression — we'll give it a moment — and then add toJsonString: boom, there we go. Now we have a much smaller output. Yes, it's still JSON, but the AI agent can parse this; it understands JSON, and you can pass it in. So every Reddit post is now going to be broken down like this for the AI agent: the title, the description, the URL, the upvotes, the comment count, and the comments themselves in their entirety, if they exist. We're almost there, you guys. Now I'm going to remove that If check; we don't really need it, I was just using it to find a post with a large number of comments. If we rerun up to here... ah, we have a problem. Because of that change, it couldn't run on everything: some of these have null comments, or no comments, right? So, ChatGPT to the rescue: we can tell it, hey, if it's null, don't run toJsonString. I'll go back and say: "That worked great. However, some inputs are null, which does not allow toJsonString. Can you help me fix my n8n expression?" There we go.
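The null problem above is easy to reproduce outside n8n. Here is a stdlib Python analogue of the guard that the fixed expression needs — `toJsonString()` is n8n's helper, and `json.dumps` plays its role in this sketch:

```python
# Sketch of the null-safety fix: only serialize the comments when there
# is actually something to serialize, instead of calling the serializer
# (toJsonString() in n8n, json.dumps here) on a null value.
import json

def comments_as_json(comments):
    return json.dumps(comments) if comments is not None else ""

print(comments_as_json([{"comment": "Nice!"}]))  # → [{"comment": "Nice!"}]
print(comments_as_json(None))                    # → (empty string)
```

The same idea in an n8n expression is typically an optional-chaining or ternary guard before the serializer call, which is what ChatGPT's suggested fixes amount to.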
I just remembered how we used to go to Stack Overflow for these answers. — I know. This is so much easier now. Wow. — So spoiled. So now if we go back, copy this, and replace this... now let's rerun. Oh: "no value, add a question mark to the variable." Hold on, let's see here. Oh, this one actually might work better. So we have a few options to try; let's see if any of these work better. It's the same output. The key here... ah, beautiful. — There we go. — Now we're cooking. So there are different ways to skin a cat, pardon the expression. ChatGPT realized there were other ways to do it, so it gave me multiple options, and I just tested each one to see which works and went from there. Now we are just about ready to pass into the AI agent. Oh, we've got 13 minutes, so let's move quick. We still have 26 items; that's still a problem. We need to reduce this down to one item. So we're going to use my favorite Aggregate node here, and we're going to click and drag "post data" into it. Execute, and boom: we now have one item that contains everything, so we can pass this into our AI agent. I'll add an AI Agent node here. We don't have the chat trigger, so we're going to define the prompt below. I'm going to click and drag this in, and because I don't want an array, I want joined strings, I'm just going to append .join(). And boom, that's it: it got rid of the array, and now we're good. Now, to make this even more delineated — I hate how it just lumps everything together — I'm going to come in here and pass "\n\n" to the join. There we go; now each post is followed by two little line breaks. The AI doesn't really need this, but it's more for me, so I can see where each post terminates. So, there we go, we've got that. The next step is my system message. We need it to generate an HTML output for my emails and potentially for the dev.to feed.
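The Aggregate-then-join step above reduces to one line once the per-post strings exist. A minimal sketch (the two sample strings are placeholders, not real workflow output):

```python
# Sketch of the Aggregate node + .join("\n\n") step: many per-post
# strings become one block of text for the AI agent, with a blank line
# marking where each post ends.
posts = [
    "title: Post one\nupvotes: 10",
    "title: Post two\nupvotes: 3",
]

# In n8n this is the Aggregate node followed by an expression like
# {{ $json.post_data.join("\n\n") }} in the agent's prompt field.
agent_input = "\n\n".join(posts)
print(agent_input)
```

Joining with `"\n\n"` rather than the default comma is what gives the visible post boundaries mentioned in the transcript.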
So, to help speed me up a little bit, I'm going to use ChatGPT for this. What I'm going to do is go back into a previous run here

### [45:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=2700s) Segment 10 (45:00 - 50:00)

and I'm going to switch to JSON. I have a page size of 10, so that'll be good — that'll reduce what we're getting. I'm going to do "copy selection" and say something like: "Great, that second option worked. Can you help me now create an AI system prompt to turn these Reddit posts into a Morning Brew-style email newsletter in HTML, in order to get a daily summary of all Reddit posts?" Again, we need to give it context: "Here is a sample of what is input into the agent." There we go. We're giving it just 10 out of the 26, because whether it's 10 or 26 doesn't matter — right now we're just having it create the system prompt. So there we go, it's thinking. It'll take a few moments, and then I'll copy the prompt it generates into our AI agent. Meanwhile, let me address John's question: "Why can't Bright Data access password-protected sites if I have credentials?" — It's not that we can't, it's that we don't. It's against the law, and we just don't. Technically it's not a problem — technically we could access anything we wanted to — but there's the law, and we do ethical scraping. We make sure that we follow the rules. — Exactly. Yeah, we want to be very cognizant of the different rules and regulations in different locations, so that is definitely very important. Any other questions while we're going through here? Excellent. All right, cool. Here it drops in. Nice. Here we go: "You are an editorial..." I find that using AI to generate your prompt by giving it a general idea is a great starting point, and from there you can trim it down or tweak it as needed. What I want is for it to include an example of the HTML. Let's see if it does that. Here we go: "Output one HTML..." Beautiful. Awesome. This is what I want. By doing this, it's going to keep the AI from generating a totally different random HTML format each time.
This puts the AI on rails and ensures the output is standard, which is key in these types of repeating processes. We don't want the AI to hallucinate random objects or random HTML elements; what we want is consistency. By giving it an example of what we want, we get that consistency back, and we reduce the amount of hallucination we get. So, we're almost there. Let's let it finish here. "Comment spotlight" — awesome, very cool, beautiful. All right, we're just about done. I'm going to copy this prompt, go back to our AI agent, and paste it in over this. So, we have quite the long prompt in here; that's okay. Now we choose our chat model. I've already connected OpenAI's. Beautiful. I think mini should work here; we can always use something a bit more powerful if it doesn't give us what we're looking for. So let's save and run it, and see if we get a decent output. While that's running, I'm going to connect my Gmail node: Send a message. There we go, and we'll send it to my work email. We'll give this a moment to finish, and in my other tab I'll pull up my email so I can show it live. Let's go ahead and save. With these large prompts and large inputs, sometimes it can take a few minutes to run, although mini is usually pretty fast. — I know. Typically... it could be stuck if it keeps running here. We'll refresh and try again. Let's see here: memory, executions. We've got five minutes. Come on, you can do it. We'll give it a moment; if that doesn't work, we'll refresh. — Is there a timeout on these nodes? — There should be; let's go into the settings. That's a great question. Max iterations, passthrough... I did notice some issues on my instance. So I guess there isn't a timeout option here, but we could add a fallback model. However, I do

### [50:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=3000s) Segment 11 (50:00 - 55:00)

think it ran. I just think there's something with my instance of n8n. Let's try running it one more time, and if not, we'll change the model. Yeah, it is quite a bit of time for it. That's surprising — I ran something similar yesterday and it didn't have any issues. So, let's change the model up. Anybody have any recommendations? We could do 4.1. — Maybe just try GPT-5, on the theory that they're giving all the processing power to that model right now. — Yeah. Should we use nano or 5? — Let's do 5. — Yeah, let's see. — For the purpose of running just one prompt, trying to save money right now. — Exactly. Let me turn this off too, to be sure. — Let's rerun it. We'll give it some time. The other thing is we might want to look at our executions and see if it actually did finish. — Oh. — Oh, no. That was prior to it, and I think this one is the one that... oh, let's see. We got an error: timed out. Yeah. Something's wrong on the OpenAI side. — Let's see if I can generate a Gemini key. Gemini unfortunately has trouble generating consistent styling; it's like a wild horse. — But yeah, hold on. Google AI Studio... — Let me generate an API key. — No, they're saying there's an issue with the OpenAI API. — Oh, no. Is it down? Of course it had to be. — Okay. All right. — That is the nice thing about n8n here: in theory, we can swap it out. So, let's disconnect OpenAI and get our good friend Gemini on here. Now, I'm going to pause the stream — I don't want to give away my API keys to everyone here — so bear with me, I'm going to stop sharing. There we go, I've generated an API key on my other screen. — Angel, while you're doing the API key safely, I just want to remind everyone watching that you've got a really cool opportunity to participate in the dev challenge brought to you by n8n and Bright Data.
So, if this is the kind of thing that's fun and exciting and motivating and gets your gears turning on things you can build with the Bright Data verified node, definitely head over to dev.to. The challenge launches officially tomorrow, and you'll be able to build some really cool stuff, win prizes and recognition, and see what everyone else builds as well. — Sweet, good to know. All right, let's see if I can share my screen now. There we go. Okay, cool. Let's see if we can get it to run. It's still giving me issues. Cancelled. Error. So let's refresh. I've got the Gemini model loaded; let's rerun here. We've got 2.5 Flash selected — I think that's their latest model. — Yeah, for LLMs it is, I believe. — Let's try to rerun it. Of course, this was working perfectly yesterday. — It's always like that. — I know, tell me about it. What I might do is — I do have a recording of it, so if it gives me trouble on this one, I'm just going to fire up the video and show you what it should look like, and I'll post in the description or the comments what the issue was. I think it might be related to either my instance or something with my account, but it should not take this long. All right, let's do that. While we're waiting, I'll pull up this video here. You can ignore my pretty face there in the corner, but this is how it works. It fires through, we have our loop... oops, went too far. There we go, and it gets sent out as an email. So, here we go: here's our Reddit briefing. It's still showing like it's firing here, but it's done here. This is the output I got from OpenAI: we've got our top stories, our community buzz, our quick hits,

### [55:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=3300s) Segment 12 (55:00 - 60:00)

and if I scroll through it — yeah, quick hits here. This helped me extract some ideas as to what I'm missing when going through the n8n subreddit, so it gives me ideas on where to focus future marketing efforts and things like that. The next step, once this ever finishes, is to click and drag the output into the HTML message field and we're good to go; I'd use some kind of daily-digest subject line here. A local model — yeah, actually, I should try a local model for the next one. But I hope this at least gives you an idea, and again, I think this is an issue with my local instance not wanting to play along. This is the overall flow I'd follow to build this out. I'm going to try one more time. Actually, let me see if I can add a fallback model. So, let's do this... and then there. And I think — so, here's the timeout: we've got a minute for the timeout. Should we reduce that to 30 seconds? There we go. Let's do one more run; I really want to show you guys the output here. Let's see if we can get it to run this time, and if not, I will post what I found. Mr. G — yeah, "share your Gemini, trust us with your API." We got you. Such wonderful commenters, I love it. Now, I know we're at time, everyone, so I don't want to spend too much longer here, but I'm going to share this workflow with the video itself. It'll probably take a few days for me to package it up and get it posted. I'll also include a note as to why the AI agent didn't want to run, and hopefully that'll help anyone else trying to build these out on your end. Does anyone have any questions before we start closing out? Raphael, Peter, any thoughts or comments on this build today? — I think it's pretty clear; everybody can repeat what was done here if they want to create something similar. I do just want to emphasize that Bright Data is not only for Reddit.
There are a lot of other domains you can explore, and you can build newsletters — if that's what you're interested in — from different sources. For example, one of the tools we didn't even touch on is Web Unlocker: whatever page you drop in there, it's going to extract the HTML. Then you can send that HTML to OpenAI, if you don't want to build a parser, and extract the data from it. There are so many things you can do with the Bright Data node involving publicly available data — whatever you guys can come up with. — That makes total sense. — Actually, that's why we're having this hackathon: we want to see what you're going to come up with. — Yeah. — We want to see the ideas. That's the whole point here, right? We want to see what you can come up with, what you can do with publicly available data, and what cool stuff you can create, because we're interested in it. — That would be awesome. Where can we find the dev challenge? Do we have the link for that? — Yeah, I dropped it in the YouTube chat, but it didn't seem to show up. Angel, if you go back to the dev.to homepage, you'll see it right there in the left-hand sidebar: dev challenges, then upcoming, and it'll officially be live tomorrow — the n8n and Bright Data challenge. The full instructions will be there. And yeah, I think you all are going to really impress us with what you build, and we can't wait to see what you come up with. — Excited for those prizes, man. You've got some good cash prizes, I think. — Right. Yeah, big thanks to the sponsors, of course — n8n and Bright Data — and it'll be exciting to see what everyone uses the verified node to come up with. — Awesome. All right, folks, that's it for us as far as time. It has been a pleasure working with you all today and showing you my methodology and my flow. In future posts, I hope to explain how to manage AI agents and keep them in line.
But I hope that you've learned something today, and I look forward to doing this again with you. Thank you, Raphael, thank you, Peter, for this opportunity. — Thanks, Angel. — It was amazing. — A pleasure. — Thank you, guys. — Take care, everyone. — Bye. — So, I wanted to take a moment to redeem myself, because during the live demo we hit an issue, and I found out that OpenAI was having an outage. I think this is a great time to show you the fallback model option for dealing with situations like that. It actually ended up working when I ran it in fallback mode; it just took a little longer than I expected. So, I want to add to the stream here on my own, to show you the rest of it so you have an idea of how everything works together. Let's clear the execution, and I'm going to fire it off one more time. Now, it may still fail on the OpenAI side, because it's still around the same time. So, let's see if it does. If it

### [1:00:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=3600s) Segment 13 (60:00 - 65:00)

does fail, it should fall back to Gemini, at which point it should work. So, let's give it a moment to run. All right. As you can see, OpenAI failed again — this is an issue on OpenAI's side; there's an outage going on currently — but the system has automatically switched to Gemini. So, let's see if we get an output now. And there we go, it worked. By having a fallback model, I've ensured that if there's an outage at OpenAI, for example, another model can take over. Let's take a look: I'll refresh my email inbox. Here we go: Daily Digest, Automation Brew. Excellent, this is exactly what I'm looking for. Top stories — excellent. We have our upvotes, our comments; I can actually click here and go to the Reddit thread. Excellent. This is what I was looking for. Unfortunately, sometimes there are outages on the side of third-party providers like OpenAI. The nice thing about n8n is that you don't have to rely on one provider: if something fails, you can go with a different model. So, I've gone ahead and mapped the input here — I've clicked and dragged the agent output into the email's HTML field. There is one issue here with Gemini's model: sometimes it'll generate this little comment at the beginning, just to label what the output is, and it's very difficult to get it not to do that. This doesn't happen with OpenAI; OpenAI does a much better job of it, when they're available. So that's pretty much it for the email. The next step might be to post this to dev.to — to post it to the feed using their API. I've generated an API key in order to do that, so let's see if we can do that as well. Let's pull up the API documentation: dev.to API documentation. Here we go. Excellent. So, here we can fetch articles; what we want is to create or publish a new article. Actually, this is not the documentation... there we go... no, this isn't it either.
All right. Oh, that was it. Okay, here we go. So here is the documentation. We want to publish an article, and we can actually make this really easy. What I can do is copy the curl example here — let's see if they have curl as an example. They don't, so that's okay; here's the schema: title, body. Excellent, and it's a POST. So, let's copy that and go back to n8n. To avoid having to regenerate this data, I'm going to pin it; that way I don't have to rerun the earlier nodes, and the output stays the same. Then I'll add an HTTP Request node. We're going to do a POST, we're going to send a body, we're going to use JSON, and I'll just paste that example schema in. There we go. Now, this body field is technically a string, and it wants markdown. What I could do is have the AI agent output two different things; in this case, though, we might actually be able to convert to markdown. There we go — I've split it off so the two branches run in parallel. I'm going to give the Markdown node the HTML and have it produce markdown; that will ensure it's in the right format. Let's run it up to here. Excellent, it's converted; we'll give that a moment to load. While that's loading, I'll copy the API URL. Let's see here: POST /articles. We need the actual base URL for their API, so let's see if we can get it from the docs. Let's go up to "publish article"... I know it's /articles... ah, there we go, this is what I'm looking for. Very cool. All right, I'm going to paste this in, and we'll call this node "Publish to dev.to." For authentication, we're going to use generic, then custom, and I've already loaded my API key there. I did notice from the documentation that we do need a content type.

### [1:05:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=3900s) Segment 14 (65:00 - 70:00)

So, we're going to add a header: Content-Type, with the value application/json, just to let the server know what it's getting. Now, here is our markdown content. I'm going to delete the placeholder string, drag in the markdown, and copy it here. We'll set published to true. We have our title; for this we'll do "Daily Reddit Digest: n8n summary for..." and then we'll use $now. Let's format it into something pretty. There we go. I like it the other way around, so let's do month, day, and year. There we go: for 8/19/2025. Perfect. And let's make sure — I'm worried that this new line here is going to break it and may not allow it to go through. So, let's see here. Oh, we have a bunch of extra characters here. Let's see if this works; if it doesn't, we'll apply some formatting, some regex, to clean it up. Let's make sure everything else looks okay: series — we'll remove that; main image — I think these aren't required. Let's find out in real time. Description: "A daily digest of n8n Reddit posts for today." And for tags, I'll just do n8n. I'll get rid of the organization ID. That looks good; let's hit save, clear, and try to run, and see if we get an error. Oh, we did get an error. The issue is that newline spacing — I just know that because I've worked with JSON enough to know this isn't allowed: these raw newlines break the formatting. So, what we'll do is pass this along to ChatGPT: "Can we use regex to clean up the markdown output of this expression? The output needs to be valid JSON, and I think there are some newlines breaking it." All right, let's let it think for a moment. Ah, this is even better. All right, let's try this... yes, that did fix it. Excellent. Very nice. Let's try this again.
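Outside n8n, the request this node makes can be sketched with the standard library alone. The `https://dev.to/api/articles` endpoint, `api-key` header, and `article` wrapper follow the public dev.to (Forem) API docs; `DEV_API_KEY` and the article values are placeholders, not the actual workflow output.

```python
# Sketch of the "Publish to dev.to" HTTP request, stdlib only.
import json
import urllib.request

DEV_API_KEY = "your-api-key-here"  # placeholder — never hard-code real keys

article = {
    "article": {
        "title": "Daily Reddit Digest: n8n summary for 8/19/2025",
        "body_markdown": "## Top stories\n\n- Example post...",
        "published": True,
        "description": "A daily digest of n8n Reddit posts for today.",
        "tags": ["n8n"],
    }
}

req = urllib.request.Request(
    "https://dev.to/api/articles",
    data=json.dumps(article).encode("utf-8"),  # json.dumps escapes newlines safely
    headers={"Content-Type": "application/json", "api-key": DEV_API_KEY},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually publish
```

Note that letting a JSON encoder build the body (rather than splicing markdown into a JSON template by hand) sidesteps the raw-newline problem the transcript runs into next.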
So, save and run. Oh, we're still running into an issue here. So, let's pass in this example: "Great, that helped, but I'm still getting an error regarding valid JSON. Here's the new output." All right. So, here's the fix it's given me; let's try it. So, here... there we go. Although I'm getting the new lines again, so I don't think that's going to work — it would undo it before sending. Let's try this one instead. Oh no, that gives us null. Let's try this one and see if it's any better. I think it was saying that I was double-escaping it here. Yep, valid expression, still incorrect. And let's try this; let's give it the expressions first. So, let's go into the markdown, a global escape pattern, and here I think we can unescape... treat as blocks... Let's see here. What I would do is — actually, I think it might be easier to just give it the nodes themselves: "Here is what is happening at the node level." I think the escaping

### [1:10:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=4200s) Segment 15 (70:00 - 75:00)

or the stringification is happening in the markdown conversion node. One of the benefits of using n8n is that you can just give ChatGPT the raw JSON of your nodes and have it get a better understanding of what you're trying to do; with that context, things get a little easier. So let's take a look. All right, it's giving me a suggestion; let's try it. Actually, that will not work unless we use toJsonString. There we go; let's get rid of the equals sign. Let's try this again. Hey, that actually worked! Excellent. So now, if we go to the dev.to website, let's see if we can see my post. Let's find my profile... there it is. Excellent. It doesn't look great — it needs a little more formatting, I think — but otherwise it worked, and we managed to get it on dev.to. So, I would definitely apply a little more formatting and maybe strip this out with regex if possible, but other than that, I'd call this a success. Once the formatting is cleaned up, we can make it better. So, let's try to clean up one more time: "That worked. Yeah, we were successful in posting to the API. However, the formatting is a bit rough, mostly since the new lines are gone or not working. Here is an example of the output." So, let's give it an example, give it some context, and see if we can get a better-formatted output. And in my post, do we have all those escape characters? No, we don't. Thankfully, it looks like it stripped them, so that's good. Let's see if we have an answer. All right, let's see if this works. Nope, that did not work — it output only the following. All right, let's try this. It looks like something is breaking the JavaScript; let's see if we can fix that. There we go, that's better. Now we're getting nothing. Let's delete this. There we go. Okay, let's try this. That looks better, so we'll go ahead and rerun it. Success! Excellent. Let's go back to my profile. There we go.
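The recurring failure in this whole stretch is raw newlines inside a JSON string literal. A short stdlib illustration of why they break the request body, and why serializing through a proper JSON encoder (the role `toJsonString()` plays in the n8n expression) fixes it:

```python
import json

markdown = "## Top stories\n\n- First post\n- Second post"

# Splicing markdown into a JSON template by hand leaves literal newlines
# inside the string value, which is invalid JSON — this is what kept
# breaking the request:
broken = '{"body_markdown": "' + markdown + '"}'
try:
    json.loads(broken)
except json.JSONDecodeError:
    print("raw newlines -> invalid JSON")

# A real encoder escapes them as \n, producing a valid body:
valid = json.dumps({"body_markdown": markdown})
assert json.loads(valid)["body_markdown"] == markdown
print("encoded body is valid JSON")
```

This is why the regex patches kept half-working: they treated a serialization problem as a text-cleanup problem, whereas encoding the whole body once solves it outright.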
Is this the one, or do we have a new one? Let's go to the URL... still the same one. Interesting. Still getting — okay, that worked, but the output is still formatted strangely, without new lines. "Here's the format." Let's go back into my dashboard and delete... there we go, no posts published. It's still generating a response. Let's see if this fixes it. Sometimes it's just about having the persistence to get it to work. Let's see. So, we're just going to drop this in, backspace out... All right, let's see if this is any better. Oh, it doesn't like my title. Okay, so let's change it. All right, let's try a slightly tweaked one. Now, let's run it again. Cool, it accepted that. There it is. Ah, success.

### [1:15:00](https://www.youtube.com/watch?v=SoDip-tvZ0Y&t=4500s) Segment 16 (75:00 - 75:00)

Success. Excellent. So, here we go: this is a lot better. We have delineation between the sections, and the links. It's not perfect, but for now, I'll take it. So, I hope this has been helpful. Just to reiterate what we did today: we can now turn on the Schedule Trigger and run this every day, maybe at midnight. It will generate an email and publish it to dev.to, letting us automate gathering information from Reddit and having it in a nice summarized format. I hope this has been helpful, and I hope you'll use this workflow — I'm going to attach it to the video once it's been uploaded to our template gallery. And feel free — I'd love to see what you end up building with this. Thanks, and have a great one.

---
*Source: https://ekstraktznaniy.ru/video/15311*