# Community Hangout March 7, 2024: Introduction to AI & Community Demos

## Metadata

- **Channel:** n8n
- **YouTube:** https://www.youtube.com/watch?v=eZacuxrhCuo
- **Date:** 11.03.2024
- **Duration:** 57:43
- **Views:** 2,639
- **Source:** https://ekstraktznaniy.ru/video/15682

## Description

In our second hangout of 2024, we’re focusing on all things AI! n8n team member Oleg will give a high-level overview of different use cases, and the elements and tools of AI, and how to use them in n8n.

Next, we’ll have two contributions by the community: Oskar and Derek Cheung will give demos of workflows that involve AI in new and interesting ways.

Product Manager Niklas will also give us an update on the n8n Creator Hub.

We’ll wrap up with a round of questions and answers.

Chapters:

0:00 Welcome
0:27 Agenda
1:44 Niklas - n8n Creator Hub
9:14 Oleg - Introduction to AI: Web Search and Receipt Processing Using AI
28:21 Oskar - How to Structure Data with n8n Nodes
37:19 Derek - Autonomous AI SEC 10-K Analysis
47:59 Questions & Answers

## Transcript

### Welcome [0:00]

So, welcome to the March hangout of the n8n community! I am really excited to have so many people here, and we also have speakers here today: we have contributions from the n8n team as well as two rather well-known people from the n8n community, and it's really great to see them in person here and have a conversation with everyone. And so, what we will be doing

### Agenda [0:27]

today is: we will start off with our product manager Niklas, who's going to show you the state of our Creator Hub and explain the concept behind it. Then Oleg from the n8n team is going to give us an introduction to LLMs and show us a use case of how to build workflows with them. And then we have Oskar and Derek, who are going to present their own projects here as well, and I'm going to be chatting with them, so if there are obvious questions I'll just have a chat with them and move forward. As I said, if you have questions, drop them in the chat and we will get back to you after the presentations. All right, with that said, Niklas, are you good to go? I'm good to go, yeah. So I'll stop my screen share now and it's up to you. Okay, can you see my screen? Yep. Okay, perfect. Yeah, thanks for

### Niklas - n8n Creator Hub [1:44]

the introduction, Bart. As mentioned, I'd like to talk about the Creator Program, which is a new program that we started recently; I want to quickly introduce it to you and also give you a sneak peek of things that are still to come.

First off, what is the Creator Program? It's pretty simple: it allows you to submit templates of the best use cases and best automations that you have. (Yeah, we'll mute everyone. Nick, please continue. You're also muted now, Nick. Okay, Bart also muted me. Okay, so let's restart.) Your templates will then be part of our marketplace where everyone can see them, so you gain a lot of visibility, currently around 20,000 users a month or more. This will help you, for example, show off your skills in n8n; maybe you're working in the automation space and want to show your expertise in n8n or in automation in general. It will allow you to build your portfolio as well, since you have a profile there where users can see all your templates. And it also allows you to earn money, ultimately, via for example the affiliate program that is linked with the Creator Program, which is currently around $420 per year on average for every conversion that you create with your template.

So, how do you become a creator? It's actually not so difficult: you just visit creators.n8n.io, sign up there, create a new account, and from there on you can submit your templates. Every template will be reviewed, so if it doesn't meet the quality bar yet, we will come back to you and give you tips on how to improve it and make it easier for users, of course with the goal of getting every valuable template published to the marketplace. Once your templates are published, you will have a few possibilities to gain visibility, the best of them being getting featured. That means your template will be visible in our featured section in the template marketplace, and we will also start posting your templates on our socials, as you can see here. I will show the featured section later on in what's about to come, and here on the right you can also see an example from Lucas, where we posted one of his amazing templates.

Additionally, you can also level up as a creator. Basically, once you've created a template that got approved to the marketplace, you will count as a verified creator, which comes with a verified badge that is visible everywhere: on your templates, on your profile, and also in Discord. You will also get access to the exclusive creator Discord channel. We are of course very interested to hear what you think of the program, if you still have any pains with it, if there are bugs; it's also a more direct link to other creators, but also to us, the n8n product team.

Then, on to earning money with templates. While earning fame with templates is very good, it's also nice to earn money just by building amazing automations with n8n. You can earn money in two ways. In the future we will have paid templates, so you can sell your templates via external parties, but they will still be shown in n8n. And there's also the affiliate program I already mentioned: we actually share 50% of the money that we make from any template. So basically, once a new user signs up via your template, we'll be sharing 50% of the revenue from this user for 12 months with you. As mentioned, on average that's $420; well, 50% of that, so around $210. And that counts for everything: it counts when you share your template on social media, but also if you just create a template that performs very well on SEO, or if someone comes to the n8n site where your workflow is featured, for example, and converts by signing up with n8n. This also counts toward your affiliate money.

Perfect. Quickly showing an outlook of what's about to come. As I mentioned, the program is still very new and we're working on a lot of different things; one of the biggest is the redesign of our template library, and I quickly wanted to give a sneak peek of how that's going to look. So this is going to be the new design. Maybe some of you know the template page already; I think it's becoming much more beautiful. We will have different sections where your templates can shine, for example something like popular AI templates. Your face will be very visible in here, to help you gain visibility, and you will also be able to get featured in one of these sections to get even more visibility for your stuff; we will also highlight certain creators on this page.

Then, yeah, we already received quite a few very cool templates, but of course we are very interested to learn about more super cool templates and cool ideas, and also to hear feedback from you, because we're basically building this together with you, always trying to make the most out of this program for every creator. So, Bart, you mentioned questions at the end; I'm not sure if we do questions now or at the end. Yeah, let's do it at the end, it's a little easier to organize for us, I think. Okay, then ignore this slide, and back to Bart. That's fine, yeah. Thanks, Nick, that was really great, and it's really nice to see how this page is coming together; I hope it will go live soon. Great. With that, we can switch to Oleg and dive into the world of LLMs and AI.

### Oleg - Introduction to AI: Web Search and Receipt Processing Using AI [9:14]

Yeah, hi everyone. So, I'm going to show you two workflows today as an introduction to AI in n8n, and maybe also to give you some ideas for templates you can build to get that template fame.

The first one is going to be pretty simple: it uses just a single chain with an output parser. What it does is listen on Gmail for receipts, basically for an image of a receipt; it logs those receipts in a Google Sheet and then sends an email reply that the receipt was processed. So if we go ahead and send some receipts to my email... sent. I'll go here; I'm in debugging mode, so this workflow is not active, so I start a listener so we can go from the beginning. You see that I received these receipts, there are two of them, and we can see the subject. Going forward, first there is an If node that checks that the binary data is not empty and that the subject contains "receipts". If it does, we go forward to extract those receipts and get them into an array. So now we have two receipts in an array, and finally we pass them to the LLM to parse them.

The prompt looks like this: "extract items and their prices from a receipt and return JSON". Here I'm sending the message with the message type "image (binary)", setting the data field name to "attachment", plus the item index, and keeping the image detail on "auto". I also have an output parser connected, which looks like this. What an output parser allows you to do is provide a JSON schema to the model; the model would then ideally follow that schema, and the model's response is validated against it. There's also an auto-fixing output parser: if your schema were more complicated, or if you were using a less powerful model, you could connect your original output parser to the auto-fixing output parser, which you then connect to a smarter model, and it will try to correct the output to match the schema. Let's just keep the structured output parser for now. Finally, I'm using the new Claude 3 model, Sonnet, which also has support for image vision. Let's execute it.

"While it's doing that: do you need to provide the receipts in some specified format, or do you need to train it on your receipts, or can you just send it any receipt you want?" You can send any receipt you want. So, we got some results; well, let me show you the receipts. The first one is for some clothes, and you can see that it's in Czech, so it's not even English. The second one is for some musical instruments, and it's also not good quality, so let's see how well it does. Let's take a look at the results. If we look at the logs, we can see that it parsed the second one: there is a total, a date, a shop name, and the individual items. On this receipt there was just a single item, but on the other receipt there were two items plus one "taška", which is a bag, and it was able to correctly parse all three items.

Then we can continue to map the items, because we are interested in logging the individual items, and right now we have two runs, one per receipt. So we map them to items, and once that is done we get an array that looks like this; it follows the schema we asked it to follow, and we populate our Google Sheet with these items. Let's do that now. Okay, and finally we just reply to the message that this has been processed. And it added it to the Google Sheet. So this is just a very bare-bones example of how you could do vision in n8n with the models that support it, but a use case for this could be, for example, parsing an invoice or something else that is an image.

"So in your experience, how accurate is it? Because your images, as you say, were not high quality, right? They were just quick snapshots of some damaged-looking paper." The output from the Claude 3 models is really good; they're very accurate and go very in depth describing what is in the picture. The outputs from GPT-4 Vision lately don't feel so good, they are not so in depth, so I definitely prefer Claude for this task. I also played around with LLaVA a bit, which is an open-source model that can do vision, and that also has decent results, but it's not yet supported in n8n. "Do you have an indication of cost for this operation?" That I'm not sure; I'm not sure how many tokens that was, I could check. "I was just curious: is this 10 cents or a euro? What are you paying for this?" It's going to be closer to 10 cents. The input tokens were 1,600 and the output tokens were just 130, and for the Sonnet model they charge about $3 per million, so it's a fraction of a dollar. "This is very impressive. In my previous job we used services to do this for us, and you just built it yourself, that's really cool." Yeah, three steps.

Okay, so let's go to something more complicated. This one is going to be a web search, and it's sort of inspired by Perplexity: Perplexity allows you to ask questions, then it goes to the web for you and basically does the research, providing you with aggregated answers to your question. How this works is that we have a chat trigger, so we're going to chat with it. Then we have a chain that does the rephrasing, because you might ask a question that is not very well suited for a Google search, so we have this prompt to rephrase the original question in a Google-friendly way. We again do some output parsing just to get the rephrased sentence format. For this one we're using GPT-4 Turbo Preview, but we could probably use anything for this simple step. And then we're using Brave Search to... well, maybe let me run it first. What could be the question... It's going to take a while because it's doing a lot of steps; we can watch it on the canvas as it's working. So, we're using Brave Search to search for that rephrased version. Still loading. "So you're first rewriting the question in order to get better results, right?" Yeah, to focus more on the specific keywords rather than passing a sentence or two to the search engine, which probably wouldn't produce as good results. "That's the trick." Yes. This model is very powerful, but especially if you pass it a lot of context, it can take a while.

So here would be the response. It mentions the sources; I'm not going to read through all of it, but I would assume they're decent, and here are the individual websites where this info is coming from. Here we can see in the log all the context that we passed to it, which it made that answer from. If we go back to how this is constructed: we use Brave Search to search for the rephrased question, in this case "beginner guitar learning tutorials". So we search the web for it, we then extract the relevant fields, meaning title and URL, pass them forward to the HTTP Request node, and we visit each of these URLs to get the HTML content. I also have this If node here to check if there's an error, because my HTTP Request node is set up to always output data, even in cases where it fails; for example, you might be doing this often and get blocked by Cloudflare or something like that, or maybe it's a JavaScript-only page, so it throws some sort of error. So we check if it has errors; in this case it was all good, there's not a single error. So we pass it to the HTML node, which extracts the
content from the page. I'm setting a property name "content" here and getting the body. Maybe to make this more visible: what I've got by fetching this page is the whole HTML of the page, which has a lot of things that are not relevant for our model; you see scripts here, it could have links, and so on. So we first filter out all these selectors that we don't care about, and we also do some trimming and clean up the text, so we are left with just the pure content and some links. We also check whether it's sensible content, because as I mentioned, if you visit a page that is JavaScript-only, it might just show a warning that the page is not supported in browsers without JavaScript. So we also check that the content of the page is longer than 400 characters; it's a pretty arbitrary number, but it seems to work. And you see that in this one case it actually filtered out one result because the content was empty.

Now, not all of this content is going to be relevant for us, so what we're going to do is chunk this content and populate our vector store. Chunking the content means we split it using this recursive character text splitter into 1,000-character chunks with 100 characters of overlap. The chunks look like this, much smaller, and then we embed these chunks, converting each one to a vector, and populate our vector store with them. This is useful in a later step, which I'll show you in a moment. Right now I'm using just an in-memory vector store, which is hosted on the server where you're running n8n; if you restart n8n, it actually clears. For more sophisticated examples you might use Pinecone or Qdrant, stores like that. An interesting thing here is that in the default data loader we're also populating some metadata: we're setting the title and URL that we got from the "extract relevant fields" step earlier. This is so we can later send them to the model, so it can cite the individual sources. You can see here how the chunks look for this single page: there are 20 chunks, and there are nine runs because there are nine pages; each of them is going to have several chunks.

Okay, so now the vector store is populated. When the vector store is populated, it returns all the items it was populated with, which would mean this next step would run 114 times, which is not something we want at this point. So I'm just limiting it here to a single item, because we don't care about these items anyway, so we're fine with a single run. Now we can do the vector store search: we search for the user's original question, "how to learn guitar", not the rephrased one but what the user actually entered, and we retrieve the chunks. So we embed this prompt and retrieve the chunks from the vector store that are semantically closest to this question. If I switch to table view here, you see that I've got a document and a score, how close it is to the prompt; in this case it's 70%, which is pretty good, and if we go to the end it's actually 59%, which is still pretty good. So we have some very relevant content here, and I'm setting the limit to 30 chunks.

In the next step we prepare these chunks, doing some processing. I'm using a Code node for it, but you could probably also use a Set node; the Code node was quicker for me. Because I'm using the Claude 3 model, which responds very well to XML structure, I'm wrapping these chunks in an XML structure. You see we have a context chunk here; it has a URL source, which is coming from the metadata we populated earlier, and it has the actual content. We return it all, we get these context chunks and the original query, and then finally we can ask the LLM to provide us an answer. If I open this query, you would see: "given the user's query and the top results from similarity search, provide a detailed answer to the query using the information from relevant results; if no relevant results are found, respond with 'no relevant results found'", plus some instructions on how it should format the response. This gives us output like this, which makes sense, but it took a long time; you see that it took 32 seconds. So we might want to play around with different models; I have some of them here.

Sorry, we have about two or three minutes left. I think that's just about right. So if I disconnect Claude here: you might have heard about Groq, which does super fast inference. Currently they only support two models, Llama 2 70B and Mixtral, so we're going to use Mixtral. You might have noticed that this is actually an OpenAI node, but we're not going to use OpenAI, we're using Groq: a lot of LLM providers have the same API interface as OpenAI, so you can actually use the OpenAI node by just changing the base URL. That's what we're doing here: we're changing the base URL to api.groq.com/openai/v1, setting some temperature and maximum tokens, and setting the model to Mixtral. Now if we connect this and execute... okay, it's still not instant, but you see it was much quicker: 7 seconds compared to 35. How is the response? It's all right. It repeats some of the sources, which we've told it explicitly not to do, so that's not great, but maybe it could be improved with some prompt engineering. "I think your first model returned some duplicate sources as well." Okay, really? Yeah, you're right. Oh well, so I can't blame that one. Here we also see the tokens: there were almost 11,000 in the prompt and about a thousand in the completion. And yeah, that's about it. You can also play around with some open-source or local models, like with Ollama, which I have running locally, but I don't think we have time for that; it usually takes a lot of time on my MacBook, especially for such a large context. And that was it, thank you.

Very cool. Besides the AI stuff, I learned a couple of nice tricks here: your HTML extraction did a couple of things I didn't know about, and limiting to one result is something I never thought about. Very impressive, thank you. With that, please keep your questions for Oleg for the end of this hangout, and we are switching to Oskar now. Yes, hello everyone, let me just share my screen. All right, could you please confirm that you see it? Yep, looks great. All right, thanks.
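The chunking step Oleg describes, splitting page text into 1,000-character pieces with 100 characters of overlap before embedding, can be sketched in a few lines. This is a simplified fixed-size character splitter, not LangChain's actual recursive character text splitter (which also tries to split on separators like paragraphs and sentences first):

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks, each sharing `overlap` characters
    with its neighbor so context isn't cut off at chunk boundaries."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    step = chunk_size - overlap  # advance less than chunk_size to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # last chunk reached the end of the text
    return chunks
```

Each chunk would then be embedded and upserted into the vector store together with its metadata (title and URL), as shown in the demo.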

### Oskar - How to Structure Data with n8n Nodes [28:21]

So, I'd like to tell you a bit about data structuring with n8n nodes. This is something that Oleg also covered in his short demo, but let's dive into it, and I will actually use OpenAI here, so maybe that will be something you find interesting. Basically, I will try to cover it with a use case: in this example we are going to parse resume documents. Here I have an example document from John Doe, who is a software developer, and we are going to extract some of the data, education, employment history and so on, into a structured form, and then we are going to process it in an n8n workflow.

What we are going to build here is a very small and easy chatbot that we will run in Telegram, which will be our interface. We'll use OpenAI for actually parsing and structuring the data, we'll of course use n8n to connect everything together and process this data, and at the last point we will generate a PDF. For this purpose I'm going to use Gotenberg. If you're not familiar with Gotenberg, it's a really cool piece of software; I also have a small tutorial about it on my YouTube channel, so you're very welcome to dive into it. What we are going to do is have this Telegram chatbot, upload this resume to it, and in the output we will also receive a PDF. What is the difference between the one we uploaded and the one we received? The PDF that we received is fully made from this structured data. In this case it doesn't look very impressive, because it's just plain HTML converted into a PDF, but we can select the data that we want to put into this document and convert it. This is one of the things we can do; we could of course also add this data to some kind of database in our workflow, but I will focus here only on converting it into another PDF, which may seem a bit useless, but it's only for the purpose of this presentation.

"So you receive resumes in very different formats, and it brings them all back to one format?" Yeah, that was actually the idea, because as far as I know there are some HR agencies and companies that struggle with exactly this, so that was the inspiration here.

All right, so here we have the workflow that is behind it. It may seem a bit complex at first, but actually it's quite easy. The very first part on the left, which is two conditional If nodes, is the authentication for the chatbot; here I just want to mention that the chatbot is only available for users with a specific chat ID, so this is a very simple version of authentication. Then we extract the data with a native node in n8n, Extract from PDF. It's worth mentioning that this node extracts data from readable PDFs; for things like scans, receipts and so on, we would need to connect some kind of OCR, but for the purpose of this presentation I think it's totally okay to use it. Then we have the heart of this workflow, which is the chain with the OpenAI model and the parsing subnodes; I will come back to this a bit later, in the next slide. Finally, we have a bunch of Code nodes which are only responsible for formatting the document and building the whole structure, and then the generation of this PDF. So when we run this workflow, we will have a parsed output out of this chain, and finally it should be generated in Gotenberg and delivered to the user.

That's what is going on behind here; let's focus now on this chain. First we extract the text from the resume, so as you can see here we have the full text from our document. The thing is that this text is actually quite a mess right now, so we need to give it structure. For this purpose we use this OpenAI chat model, and it has very specific settings. First, I give it a prompt to extract the data, and, quite importantly, I also mention in bold text that we want to receive unified JSON format. This is because as the response format, in the options of this subnode, we want to use JSON; the model will then do its best to return JSON. As far as I know this is available for now only in the GPT-4 Turbo Preview models. And of course we set the temperature to zero, because we don't want the model to be too creative at this point. So when we execute this chain, we should receive the structure of the document proposed by the model. It doesn't look good right now, because this is a totally stringified output, and what we need to do is simply run it through a JSON schema; this is also something that Oleg mentioned in his demo.

Here's an example: we have, say, the name of the person on this resume, and we need to create a JSON schema for this specific parameter. Of course I'm not going to do it by myself here: I simply copy the JSON proposed by the model and ask ChatGPT to create the schema for me, because this is simply easier and faster, and I can operate on code that is already written and just edit it; I don't need to write it from scratch. As you can see, it's quite extensive; we can edit and trim it however we want, to have a schema that fits our requirements. I'm going to copy it, and the last step here is to add the structured output parser, which has a default schema that I'm going to replace with my schema. When I execute this workflow now, I should receive in the output not a stringified version of this JSON but nicely parsed data. As you can see, right now we have a very clear table with data that we can do whatever we want with: pass it to a database, process it, make a PDF and send it back to the user; the possibilities here are actually very broad. So yes, this is how this small project looks. I will try to prepare this template for public view, so thank you very much, and I highly invite you to subscribe to my YouTube channel; I will try to upload something new very soon.

Thanks, Oskar, a wonderful presentation as well. I really enjoyed watching this and seeing you structure this data, and as you said, I think this is a really relevant use case for people who receive very different types of information, so the power of AI and n8n here is pretty amazing. This was great, thanks. We are getting a little short on time, so I'm quickly switching to our last guest today, Derek Cheung, who is going to show us a very different use of AI.
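The step Oskar walks through, turning the model's stringified JSON into parsed, validated data, can be sketched with the standard library alone. The field names below are assumptions based on the resume example in the demo, and this checks only required keys and their types rather than implementing full JSON Schema validation as the structured output parser does:

```python
import json

# Minimal "schema": required top-level fields and their expected types.
# Field names are illustrative, modeled on the resume example in the demo.
RESUME_SCHEMA = {
    "name": str,
    "education": list,
    "employment_history": list,
}

def parse_and_validate(raw: str, schema: dict) -> dict:
    """Parse a stringified JSON response and verify required keys and types."""
    data = json.loads(raw)  # raises an error if the model returned invalid JSON
    for key, expected_type in schema.items():
        if key not in data:
            raise KeyError(f"missing required field: {key}")
        if not isinstance(data[key], expected_type):
            raise TypeError(f"field {key!r} should be {expected_type.__name__}")
    return data

raw_output = '{"name": "John Doe", "education": [], "employment_history": []}'
resume = parse_and_validate(raw_output, RESUME_SCHEMA)
```

In the workflow itself this job is done by the structured output parser subnode, with the auto-fixing variant available as a fallback when the model's output doesn't match the schema.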

### Derek - Autonomous AI SEC 10-K Analysis [37:19]

thanks Bart — remind me not to go after Oleg and Oscar, their presentations are too good to follow, but I'll try. So, as Bart was saying, this is a slightly different use case. It's really an investor persona that I'm going to show, and I call it autonomous AI SEC 10-K analysis. First of all, what is an SEC 10-K? Every company publishes an annual report for investors. This one is from a company I invest in, Nvidia: 169 pages of PDF that describe their entire business and what they do. It's really useful for value investors, and the use case I'm going to show is how to use n8n as part of the investment process, using this idea of autonomous AI.

The use case is to automatically analyze the SEC 10-K to get a depth of insight into a company. What's really cool about it is that we're going to use the AI to figure out even what the right questions to ask are. The approach takes two personas. The mindset here is: think of it as having a virtual team of crew members who perform different things. For example, you have on your team a senior research analyst who uncovers insights into Nvidia, and you also have a tech content strategist who articulates that information after the analyst has finished analyzing the 10-K.

Here's the architecture. It's somewhat similar to what Oleg was showing earlier with respect to the vector store: I'm using n8n and its LangChain support to get a vector store, which I've populated with the SEC 10-K information. Then this is used in a webhook-type approach, where my crew of AI agents calls into n8n, and this serves as the way to answer the questions.

I'll show you ahead of time an example of the result it generates, just to give you a feeling for where we're going. This is actual output. Oleg was showing in his presentation that you want to check whether the model actually produces good output. The Claude models are really good, but for this one I'm actually using a much cheaper model — the Mixtral model, at about 27 cents per million tokens, so a very inexpensive model — and look at the results you can get. All from the SEC 10-K information, you get a depth of reporting where, as an Nvidia investor, I actually learned something new from working through this.

Let me show you my n8n workflow first. It's not a very complicated workflow — actually quite simple. There's an upserting part where you chunk the data and put it into a vector store; in this case I'm using a Supabase vector store. Oleg was showing the in-memory one; this one persists. All I'm doing here is retrieval Q&A — I did something kind of cool there, but I'll skip that for now. The idea is that I use this for Q&A over my SEC filing: I upload the SEC 10-K information from here, and then I have this Q&A I can run.

Here is the thing that drives it. I'm in a Replit — this is very simple Python code where I select the model I'm using (again, a very inexpensive one), and then I define the different personas, the different agents. The key thing here is the tools: I've specified the SEC tool here, and this tool is a simple call to the webhook. So I hook this up and have the senior research agent call it when it needs to, and the tech content strategist then summarizes the information from the researcher.

Then there are the tasks. The first task is done by the researcher: conduct a comprehensive analysis. I've given it some starting points, and what's cool about this is that the AI is able to fill in the blanks in terms of what questions to ask. On the right-hand side is the result of a run, but I'll run it right now to show you what to expect — I'm living a little dangerously doing a live demo; a recording might be a better approach.

You can see it's running, and the AI is actually figuring out what questions to ask — "what are the primary business models of Nvidia?" — and then it goes to n8n. It's actually getting this response from the n8n workflow, and I can show you it's real: it's running right now, it's calling in, and you can see the answers from here. It gets that information from n8n, runs through, generates the different kinds of questions, gets the results from the question answering, and then summarizes the report. It's really cool: it generates these questions, formulates them, gets the answers back from the vector store, and then puts the answers back together. That's the gist of the workflow, and end to end it takes maybe two or three minutes to run.

At the end it produces something like this. There's a little bit of randomness — sometimes the output is larger, sometimes smaller — but it's generally pretty good at calling the models and the tools.

The other thing I wanted to quickly show — one of the cool things I really like about this use case with autonomous AI and n8n — is that by connecting n8n workflows to the crew as tools, it opens up all the tools, workflows, and templates that are available in n8n, and these agents now have access to all those different tools for their different use cases. I think it's super powerful when you have an agent souped up with all kinds of tools powered by n8n. Anyway, that's a short summary of the use case.

Wow, Derek — I think that's the first time I've seen someone have an AI figure out the questions and then answer them as well; that's a really interesting approach. And I didn't know about CrewAI yet, so that's really interesting to see too. One piece of feedback right away: we saw your webhook URL for a short bit, so you may want to disable it or change it. — Oh yeah, thank you. — Brilliant. All right, thanks again. We're running a little out of time, but I think we can go over a bit for everyone who wants to stick around. I'm going to go back to sharing my screen and the questions we have from you — I hope it actually works — so I'll just start reading
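Derek's tool wiring — a crew agent calling an n8n webhook to query the vector store — can be sketched in plain Python. This is a minimal illustration, not Derek's actual code: the webhook URL and the request/response field names (`question`, `answer`) are assumptions, and in a real setup a framework like CrewAI would register `ask_sec_10k` as a tool on the researcher agent.

```python
import json
from urllib import request

# Hypothetical endpoint -- Derek's real n8n webhook URL is not shown in the demo.
N8N_WEBHOOK_URL = "https://example.com/webhook/sec-10k-qa"

def build_payload(question: str) -> dict:
    """Shape of the JSON body the n8n webhook trigger would receive (assumed)."""
    return {"question": question}

def extract_answer(response_body: dict) -> str:
    """Pull the answer text out of the webhook's JSON reply (assumed key)."""
    return response_body.get("answer", "")

def ask_sec_10k(question: str) -> str:
    """Tool function an agent could call: POST the agent's question to the
    n8n workflow, which runs retrieval Q&A over the 10-K vector store."""
    data = json.dumps(build_payload(question)).encode("utf-8")
    req = request.Request(
        N8N_WEBHOOK_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return extract_answer(json.loads(resp.read().decode("utf-8")))
```

The point of the pattern is that the agent side stays tiny: all the retrieval logic lives in the n8n workflow behind the webhook, so any other workflow or template can be exposed to the agents the same way.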

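On the upsert side, Derek's workflow chunks the 169-page 10-K before inserting it into the Supabase vector store; in n8n, a text-splitter node handles this internally. For intuition, a minimal fixed-size chunker with overlap might look like the sketch below — the sizes are illustrative, not the ones used in the demo.

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks with overlap, so a passage that
    straddles a chunk boundary still appears intact in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last chunk reached the end of the text
    return chunks
```

Each chunk is then embedded and upserted into the vector store, and the retrieval Q&A step later pulls back the chunks most similar to the agent's question.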
### Questions & Answers [47:59]

a question here. The first one is for Nick: are paid templates one-time purchases only, or can they be paid on a recurring basis? I'm not sure — Yuri, maybe you can expand on that question a bit. Are you asking whether you can be paid for your template work? Nick, does it make sense to you? — Yeah, I also responded in the thread a little bit. Basically, the option of paid templates will most likely be a one-time purchase only, but with affiliates, whenever you earn a conversion from your affiliate link you will get that money for 12 months. — That makes sense. All right, thanks.

Then a question by Kevin: does anyone currently use n8n in an enterprise setting? I know a lot of people do, but maybe in this group here. "We currently use a different integration platform, but it's limited, and I'm wondering if n8n is enterprise grade." Maybe that's a question to take offline — Louis, if you could collect Kevin's contact information, we can have a chat about that separately. — For sure. I'd also share that you can visit n8n.io/enterprise to learn more or speak to our sales team, but yeah, we can also chat offline.

Next question: can we also work with video and AI now, with the new Sora model? It's not public yet, but are there any services we can use for video? — I'm not sure if Runway has API access; I haven't delved in this direction much. One cool example I saw was somebody using GPT-4 Vision by just taking a few frames from a video — you don't need all 60 or even 24 frames per second; if you take one frame per second, that might be good enough to describe what's happening. — That's interesting. There's also another direction: I've worked before with an API-based video-editing service where you could send scenes and a cut list and have it return a customized video for you. It's not AI-based, but it's still very powerful for personalized marketing and the like, and really easy to use.

Another question from Yuri: how are the models defined? What if I want a custom model or another model provider, such as Groq? — You kind of touched on that already. — Groq is what I was using in that demo: you just overwrite the base URL, and if the provider supports the OpenAI API interface, it should just work. You also need to update the model name, and then it should work. — But this only works for services that use the OpenAI API format, right? — Yes; for fully custom models you would use something like Ollama or LocalAI, which again support this OpenAI interface. — Thank you. — There was also a question in chat about Hugging Face: we do have support for Hugging Face inference endpoints, so you can host whatever model you want, provide the endpoint and the authentication, and then run whatever you want on Hugging Face.

Thanks. Another question: can n8n parse complex JSON responses? "I work with some APIs that return complex structures and can't use query parameters to simplify them. Also, can I parse JSON based on conditions — navigate this array if the name is X?" Not sure who the question is for, but maybe Oleg. — Yeah, we can work with any JSON. One of n8n's strong points is its expressions: you can use any JavaScript inside expressions to map over objects and do conditional logic, and if you really need some heavy lifting you can use the Code node, which again gives you full access to JavaScript in n8n. — Yeah, the flexibility of n8n with conditions and different flows would make a lot of that pretty easy, I think.

Next question, about Oskar's demo: how are you parsing the content inside the XML tags from Claude? — There wasn't any parsing going on. I had these chunks and the URL for each chunk, and I just used string interpolation to populate the XML tags and pass them to Claude.

Yuri asks: any idea how I can save the history of the chat and bring it back later for the same user? — You just get that data, so you can store it anywhere you like, or you can use memory. Agents, for example, have access to that, and if you want to implement it on your own, there's a Memory Manager node which lets you interact with memory and then pass it to the chain. And if you want it for the users of your n8n instance, we have a very handy setting in the Chat Trigger which allows you to create a webhook or embed your chat, and then you can give access to it to the same people who have access to your n8n instance. — All right, thanks.

Let's see if there's more. Gavin again — Gavin has a lot of questions, which is great: does n8n have advanced error-handling workflows? For example, can I set the number of retries if an API call fails, or route it to a different step if a step fails? — I think we also answered that in chat with screenshots, but we allow you to do both. For example, if an error happens on a node and you want to handle it at the node level, there's a node setting that gives you a separate branch for errors: instead of just one output from the node, you get a success output and an error output. — And Gavin, if you want to learn more about error handling, we have two courses on our documentation site, and the second course covers error handling as well, so you can see how that works. Error handlers are just n8n workflows themselves, so you can make them as complex as you need and have them perform any task.

Jimmy asks: do n8n workflows always output data? "I developed a workflow as depicted" — I didn't see the screenshot here, but maybe you did — "but it has only been able to send a few submissions to Google Sheets, with lots of failed executions." Did you happen to see that workflow, Oleg? — No, I missed that one. — I saw the screenshot, and apparently the error is that Google Sheets was not available. Basically, whenever that happens — if Google Sheets or its API is not available — your executions will fail, obviously. One thing with these services is that you need to be mindful of rate limits. I'm not sure that was the case here, but if you hammer them with requests they will deny you for a bit, so it usually makes sense to read their documentation on rate limits and add a Wait node in your loop to slow things down a bit.

All right, there's a new question coming in live from Neo: there have been many concerns about LangChain's bad abstractions and lack of robustness — does n8n have plans for AI nodes beyond LangChain, such as Haystack, Semantic Kernel, Langroid, etc.? — Nothing is planned at this point, but we're trying to make LangChain work for n8n rather than the other way around, so hopefully we can improve some of these things.

All right, thanks — and I think that was the last question. After this I will disable the presentation and we can just chat; that was the formal part. Thanks for joining us — we had over 110 people here at some point, which is really impressive — and I'm really glad you took the time to learn about n8n and our AI support. We're going to do another hangout in early April; if you have an idea for a topic you would like to know more about, or if you'd like to speak here, please feel free to reach out to me and I'll work with you to get it set up. Thanks to all our speakers — you've been great, this has been super interesting. I'm going to disable the recording now, and then we can have our informal chats. Thanks!
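As a postscript to the JSON-parsing question above: the conditional array navigation asked about ("navigate this array if the name is X") is a one-liner inside an n8n expression or Code node. Here is the same idea in plain Python for illustration (the Code node answer in the session was about JavaScript); the field names come from the question, and the sample data is made up.

```python
def filter_by_name(items: list[dict], name: str) -> list[dict]:
    """Keep only the entries whose 'name' field matches -- the kind of
    conditional filtering the question asks about."""
    return [item for item in items if item.get("name") == name]

# Made-up API response with the nested structure described in the question.
api_response = {
    "results": [
        {"name": "X", "value": 1},
        {"name": "Y", "value": 2},
        {"name": "X", "value": 3},
    ]
}

matches = filter_by_name(api_response["results"], "X")
```

In an n8n expression the equivalent JavaScript would be a `.filter()` over the incoming items; the Code node is only needed when the logic gets heavier than a single expression comfortably holds.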
