# AI LangChain Workshop: Audio Log

## Metadata

- **Channel:** n8n
- **YouTube:** https://www.youtube.com/watch?v=j3YDPMQkY_0
- **Date:** 03.11.2023
- **Duration:** 59:20
- **Views:** 2,053
- **Source:** https://ekstraktznaniy.ru/video/15692

## Description

The full audio recording from the AI LangChain workshop, which took place on October 13th (edited to remove pauses).

Be sure to check out https://n8n.io/langchain for more details

Version with video:
https://www.youtube.com/watch?v=6-4yruDaXdA

## Transcript

### Segment 1 (00:00 - 05:00) []

For the benefit of people who are listening after this has been recorded, who obviously might not have the visuals: you can always access the live demo visuals from the video. If you're listening on the audio, we'll try our best to describe what we're doing, but if you really want to see what we're doing here, have a look at the highlight video on our YouTube channel. So, everyone, welcome. This is our first ever virtual community workshop, and I am very excited to be hosting it for you as your new community manager. For those of you who don't know, this is Oleg. Oleg, would you like to introduce yourself?

Yeah, hi, thank you. I'm Oleg. I've been with n8n for over a year now, working mainly on the frontend, but for the past few weeks and months I've been focusing on this AI LangChain integration and learning more about LangChain myself, so it's been a cool journey.

Yeah, I may not have been with the company for very long, but I have definitely learned quite a lot in that period of time. I think my ability to absorb so much information has been pushed to its limits, and I'm inspired every step of the way. I know the team has been doing some impressive stuff, and the most impressive thing that has come out recently is AI LangChain, so let's have a bit of a discussion about that. For those of you who don't know what LangChain is: what n8n has done is create a series of features and workflow nodes that make it more accessible to you. Most people will probably be familiar with what AI is; more specifically, you have AI chatbots and AI tools that can process information for you and interpret it in a way that's usable. But Oleg, would you like to dive a little deeper into what we have been working on here?

Yeah, so we heard voices from the community about integrating LangChain, and we've seen people using LangChain a lot to build AI applications. Because we saw that it would integrate so well with all the data streams and options you have with n8n — how you can work with your data, get your data into n8n, process it, and then pass it into these AI workflows — we started building this set of nodes, and the features around it, to be able to create these AI workflows. There are several ways for you to get data into n8n: as I mentioned, we have Slack, email, getting it from a webhook, or just requesting it from somewhere. Then you can wrap it all up, pass it to an agent or chain, and make some decisions on top of it.

I know from playing with it myself — and many of the people here watching and listening have been involved in the alpha, and the feedback you've been providing us has been brilliant; it's really helped shape the direction we've taken with everything — but from experimenting myself, full disclosure: my coding and programming capability and my technical knowledge aren't high-end, but they don't need to be with this. From what I've seen, I can get to grips with it very easily, and I can see the full potential of what I can use AI LangChain for, and specifically these nodes. As I mentioned just before, I'm working on a database with my own information — my own data that I've made on vegetables, of all things — and I found that being able to sync that in, or give an AI language model access to it, is really beneficial. Having more control over the data that's available to the AI has really shaped and improved my workflows, and I think there's massive potential when it comes to using your own personal data, isn't there?

100%. It's super interesting how you can use these language models as a sort of backend and let them make decisions in your app that you would otherwise have to program for. You don't have to handle every single use case; you can just give the language model an input, and based on your prompt it will figure out what you want it to do and give you output that makes sense. This is definitely where your data plays a role. It also decreases the amount of hallucinations the model makes if you give it some data and ask it to only work on top of that data, or to only provide answers based on that data. That's another good aspect. And you mentioned that you can start very quickly — we're going to see it in a bit, I think — where you can literally go into the templates, click on a template, provide your credentials, and that's it: you have a chatbot that you can connect your data to.

Exactly, and it's funny you mention the templates. For those of you who haven't had the time to read over what we're planning for the workshop today: after we've had this little chat, we're

### Segment 2 (05:00 - 10:00) [5:00]

going to go straight into the live demos, then we'll have a Q&A session, and then just before the end we have a special announcement that involves the templates. I'm personally really excited — I don't think anybody would want to miss that, I promise you really don't. The templates, whether you're a complete novice or an intermediate, are really beneficial as a starting point, specifically when it comes to AI LangChain. We've created a number of AI LangChain-specific templates for people to start from, edit, or integrate into an existing workflow, and they're brilliant starting points regardless of whether it's AI LangChain or anything else. I've seen a few of them come from the community as well — quite a few use cases from the community that have been so inspirational that we've said: that's great; not only is that person using it in the way we intended, everyone should have access to the way they're using it. So templates have definitely come from you guys in the community.

One of the other things — it's really hard to talk about it without showing everybody straight away, and I'll probably pick up on it when we actually go through the demo — but one of the things there's been a lot of hype around is that not only can you make a chatbot, which is the most common one, you can also get it to process data and reformat it, and it slots so well into existing workflows. So theoretically, if you have a multi-level workflow that you spent ages building, and you want to use AI to skip a few steps, you can slot it in and adapt existing workflows quite easily, from what I've seen. I know that you've had a few that you've been tinkering with, haven't you?

Oh yeah, and as you said, you can easily integrate this AI into those workflows, but you can also use those workflows in your agent, and sort of create yourself this little assistant out of whatever workflows you already have in n8n, by giving them the option to be called from this agent. It's really accessible too, in terms of the amount of integrations we already had in n8n — they've now almost been given superpowers by how well these two different kinds of nodes work together. I've seen some of the stuff that people have been doing with Slack, some of the stuff they've been doing with their internal bots, and again, it's really hard to talk about it without showing you the nodes. I think I'm just eager to show you the demo — it's a little bit too impressive to just talk about. And because I know how many workflows and demos we have, and because I know you have questions as well, maybe we should just kick off with the workflow demos now. What would be your first one?

All righty. So we're going to create three workflows today. One of them is going to be a main workflow with an agent that we'll communicate with via chat, and it's going to have a memory. Then we're going to create two workflows to give this agent some tools — things it can do besides answering questions. Let's get right into it, because as you said, it might take some time. I'm here on a new template, but that's actually not where I'm going to start. I'm going to go into Templates and click on Advanced AI. As mentioned, there's a bunch of templates that you can use as a base for building your workflow, whatever your use case might be, but I'm specifically interested in the Slack chatbot powered by AI. I click Use workflow, and it gives me these nodes already set up, along with the sticky notes that explain what these nodes do. I just need to set up the credentials, so let me do that now. The first one is the Chat OpenAI node, which is the language model node that connects to an agent. I'll set the OpenAI credentials, select my model — gpt-3.5-turbo-16k in this case — and just leave the default sampling temperature for now. I also need to set up the webhook to listen for the Slack messages, and I've already set up the credentials in Slack. When you're setting up a new webhook in Slack, you have to go through a setup step where you respond to a webhook challenge — Slack sends a challenge and you respond back with it. It's very simple to do, but so as not to lose time now, I've already done that. Just to say, at the top there you see there's a Docs link. If you're unfamiliar with n8n: every step of the way, if you're ever confused about anything — for example, how to set up your credentials — you can click on the docs at any point in any node, and it will guide you. I know our internal documentation creator, Deborah, has done
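The webhook challenge step mentioned above is Slack's Events API URL verification: when you register a request URL, Slack POSTs a JSON body with a `challenge` field and expects it echoed back. A minimal sketch of that handshake, assuming an illustrative handler function (not n8n's actual implementation):

```python
import json

def handle_slack_event(body: str) -> dict:
    """Respond to Slack's Events API callbacks.

    On URL verification, Slack expects the `challenge` value echoed back;
    any other event type is passed through for normal processing.
    """
    event = json.loads(body)
    if event.get("type") == "url_verification":
        # Echo the challenge so Slack accepts the webhook URL.
        return {"challenge": event["challenge"]}
    # Normal event callback: hand off the inner event for processing.
    return {"ok": True, "event": event.get("event", {})}

verification = json.dumps({"type": "url_verification", "challenge": "abc123"})
print(handle_slack_event(verification))  # {'challenge': 'abc123'}
```

Once the URL is verified, Slack delivers message events to the same endpoint as `event_callback` payloads, which is what the rest of the workflow consumes.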

### Segment 3 (10:00 - 15:00) [10:00]

a very thorough and excellent job of making this accessible to everybody, regardless of your coding ability. So, a little tidbit there.

To explain what I'm doing here in the webhook: we're just waiting on this address for a call, and that call is going to come from Slack. My path is gilfoyle-webhook, because that's the name of the bot I'm using for testing, and this is the URL. If you're not familiar with n8n, you have two URLs in the webhook node. You have the test URL, which you use when you're debugging your workflow — you click on Listen for test event, and you can see that it's the test webhook URL that gets used. Let's fire an event at that. All right, it wouldn't accept the GET request, because it's only set for POST events here, but trust me, it works — we'll see it in a bit. So this is the test URL, and once you set your workflow to active here, your app — or Slack — would have to use the production URL that you can see here. Let's stick with the test URL, because that's what we have set up.

So we have the webhook ready. We can test that it's actually doing something by sending a message to Gilfoyle here. Where did it go? All right, we say "hello", and you see that the execution — the listening — stopped. We got an output here, and we got the data about the message: some headers, and this is the body, which contains the message — you can see here, text: "hello" — along with a bunch of other information that we're going to use throughout the workflow to build this Slack bot.

Next step: we filter out the non-user messages, because Slack will fire a webhook request for your bot's messages too, and we have to make sure we don't process those. We only care about what the user says to the agent; we don't want it to talk back to itself. Then we have our agent. It's a bit spicy: we tell it that it's Gilfoyle from the Silicon Valley TV show, to amplify its bluntness and cynicism, and so on.

Yeah, you can give it a personality. It's a bit aggressive. Okay, yeah — it's always a useful tool to be able to set the attitude of your agent.

100%, and it shows you the capabilities of these models — what they can do, how they can act as other things. Just before we go on, I want to highlight that this is the first time, from what I've seen, that we've had a node you feed other nodes into. Usually everything is square nodes, but with this AI agent you can feed different things into it to make it more effective. I'll let you continue, but I wanted to highlight that it's completely different from what you've seen before, and there's a reason for it.

You're right. These connections coming in here from the bottom are what we call configuration nodes. They provide some functionality to this root node. We see that this root node, the agent, is utilizing a model — that one is required, so we connected Chat OpenAI. With a conversational agent you can only use chat models: you see that when I click here on the model connection, there are only chat models, which makes sense, because you're going to be chatting with it. Exactly, yeah. Then we have a memory, which is not required — your agent would run completely fine without it, it just wouldn't be able to remember things — and we'll go into the setup of the memory in a bit. We have some tools that we give to the agent: here we have Wikipedia and SerpAPI. We won't be needing SerpAPI, so we'll just keep Wikipedia for now. And finally there's an output parser, which allows you to either parse the output into a list, or provide a schema — a JSON schema of what you expect the JSON output of this node to be. But this is fine for now.

Let's check how the memory is set up, because that's an important aspect of how this Slack bot is going to work. If I open the memory, this is its configuration, and we can click here on Mapping. We have two modes in the input panel: Mapping and Debugging. Debugging shows you the current execution's input to this node, while with Mapping you can map execution data from the previous nodes — sort of from the parent of your root node. So we pick the webhook node, and we're going to set a session key. For memory we're using Window Buffer Memory, which stores your conversation based on the session key that you provide. The session key includes the workflow ID plus whatever you write here. We want this agent to be able to remember our user, so
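The bot-message filter described in this segment is there to stop the agent from answering its own replies in a loop. A minimal sketch of that guard, assuming illustrative field names from Slack message events and a placeholder bot user ID:

```python
BOT_USER_ID = "U0BOT"  # the app's own user ID (illustrative value)

def is_user_message(event: dict) -> bool:
    """True only for plain messages typed by a human user.

    Slack fires webhook events for the bot's own replies too; without
    this guard the agent would keep responding to itself.
    """
    if event.get("bot_id"):               # posted by any bot integration
        return False
    if event.get("user") == BOT_USER_ID:  # posted by our own app
        return False
    return event.get("type") == "message"

events = [
    {"type": "message", "user": "U123", "text": "hello"},
    {"type": "message", "user": BOT_USER_ID, "text": "What do you want?"},
    {"type": "message", "bot_id": "B999", "text": "automated notice"},
]
print([e["text"] for e in events if is_user_message(e)])  # ['hello']
```

In the demo this check is an n8n Filter/IF step right after the webhook; the sketch just makes the predicate explicit.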

### Segment 4 (15:00 - 20:00) [15:00]

we're going to create the session key based on the user ID. We could also use the thread ID, if we wanted it to have a memory only within a thread, or we could combine any of those, or any other attributes — but let's use the user for now. We see that the Slack response — this is the webhook here — gives us the user, which is what we're going to use. Let me just clear this to make it empty. You see here that I clicked Expression to be able to write expressions, and I'll just drag user from the left panel to here. We get undefined, because we haven't executed it yet, but it's going to be there. We also configure the context window length here, which is how many messages of history it's going to store — how much it's going to remember. So that's done.

Just to stay on that a little bit, to clarify for those of you who might not be fully aware of what we're going through: essentially, the memory gives the agent the ability to recall what was previously said or discussed, up to however many messages you determine. If you didn't connect the memory, then if you asked "what is my name?", and then asked again "hey, what is my name?", it wouldn't remember.

You can run it without a memory, but I wouldn't. Also, you don't have to use Window Buffer Memory; we have a number of memory options you can attach. But note that if you do use Window Buffer Memory and the server restarts, it won't remember anything — so keep that in mind.

Yeah. So that's the memory set up. We also give it Wikipedia as one of the tools it can use. In a bit we're going to add more tools, but let's just keep Wikipedia for now, for testing. Then we have a Slack node that is going to respond to the message, so let's see how that is set up. Again, I need to set up my credentials, and here we can see that it's getting the user. We need to fill in which user it's going to send the message to, so we do that by selecting Send to user, by ID. Again we need to provide the ID, which we can drag in, same as in the previous step — so that would be this thing, drag it in, we've got it. And we want to give it a message: the message would be the output of the agent. And that's it — we basically just changed the credentials to use our Slack credentials, and now it should work. If we click Execute workflow, that starts the webhook test listener, so now I should be ready to say "hello". And you see that it triggered the event and provided the response here. Very polite.

Now, this is important to test: if I said something again now, nothing would happen, because that webhook is not listening anymore. In test mode — when you click Execute workflow and the workflow isn't active — it only listens once, then it terminates the listener. You'd have to click Execute workflow again to get any response. Okay, so that seems to be working.

I think it would be important now to show the debug information from the AI, because that's something really cool that we've implemented here — having the ability to see the AI debugging is incredible.

That's a good reminder to show you the AI debugging. I've opened this agent node, and it has two tabs. It has an Output tab, which is what we're sending to Slack to send back to the user — here we're using a JSON output, and you see these are now green, so you're able to see what exactly they resolve to. Here we have our message and the user ID. And in the agent we also have Logs: when I click on Logs, you can see the logs for these configuration nodes, and what their inputs and outputs were. It starts with the Window Buffer Memory, where we inserted the new message — "hello, how are you" — that was the initial message, and we see there's no output, so there are no messages in that memory yet. That was passed to Chat OpenAI, the language model. Here we can see the input: this is the system prompt that explains to the agent what it is and what it can do, and at the bottom you see the user's input, "hello, how are you". This is a simplified view — we do some parsing on that output to show you this view — but you can also see the raw JSON of what exactly is happening. This would be the object that we get; you can view the log as well. So theoretically, if you wanted to test whether your setup
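The memory behaviour described in this segment — a session key scoping the history, a window of the last k messages, and everything lost on restart — can be sketched with plain Python data structures (the class and key format here are illustrative, not n8n's or LangChain's actual implementation):

```python
from collections import defaultdict, deque

class WindowBufferMemory:
    """Keeps only the last `k` messages per session, in process memory.

    The session key scopes the history (here workflow ID + Slack user
    ID), and because everything lives in a plain dict, a server restart
    wipes it -- the caveat mentioned in the workshop.
    """
    def __init__(self, k: int = 5):
        self.k = k
        self.sessions = defaultdict(lambda: deque(maxlen=self.k))

    @staticmethod
    def session_key(workflow_id: str, user_id: str) -> str:
        return f"{workflow_id}:{user_id}"

    def add(self, key: str, role: str, text: str) -> None:
        self.sessions[key].append((role, text))

    def history(self, key: str) -> list:
        return list(self.sessions[key])

memory = WindowBufferMemory(k=2)
key = WindowBufferMemory.session_key("wf-42", "U123")
memory.add(key, "user", "hello")
memory.add(key, "ai", "hi there")
memory.add(key, "user", "what is my name?")
print(memory.history(key))  # oldest message dropped: only last 2 remain
```

Keying by thread ID instead of (or as well as) user ID, as discussed above, is just a different `session_key` composition.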

### Segment 5 (20:00 - 25:00) [20:00]

is correct, you could temporarily add a manual chat directly in the canvas.

100%, let's do that quickly. But for that we'll have to disconnect the Slack node, or just disable it for now, because it won't know where to send Slack messages. Let's add the chat — Manual Chat Trigger is what it's called — and I connect it to my agent. The only thing is that I'll have to change the text, so we copy it from here, remove it, and I'll change the expression to use the JSON input for the text, because that's what the chat is going to send. Now that I have a chat connected — oh, I also need to change the Window Buffer Memory session key, because we no longer have that webhook data while we're doing the debugging. Have I just deleted it? I'm so sorry. It's all fine, I'll just copy it, it'll be super quick. And now we can open the chat. So we say "hello" — oh, it executed the workflow — oops, here it's waiting for the webhook, so we have to disable the webhook too — and yeah, here's the response again. You can see the same log, but just for this specific response. You see that the user input here is just a single message again, because I changed the session key, so it doesn't have access to that same memory anymore. And this is a quick way you can debug your workflows. But let me remove that for now.

I think that's really useful — as we mentioned before, you can just insert it into an existing workflow. The manual chat trigger is really useful in those instances. I know on my workflows — probably on everybody else's too — it's saved me a lot of time.

So I've restored the session key for the Slack user, I'm going to enable this, and yeah, now we have the agent responding to our Slack messages. Well, let's actually confirm that, because we just made some changes — to make sure we don't get surprised later. Do you have any question that you would like answered from Wikipedia?

Yes: how old is the King of the Netherlands? Now, can somebody from the chat answer it quicker than our agent? Yeah, because it just failed, so that's unfair. Oh, I didn't change the input back — of course. Okay, it should work now. There you go, there's your answer. So it works, which is great. But let's give it some more tools it can use, to make it more useful.

There are some questions coming in — and I was going to say that just this setup alone has so much potential. For a number of businesses that I've been speaking with, for example, it has the capacity to transform your support internally, not just externally — not just for customers. You can connect your own data. We're all familiar with AI chatbots for customers, but think about internally: if you have a Slack chatbot just like you've set up, connected to a knowledge database, think about the amount of time internal users are going to save trying to find answers to questions. Think about production plans for the entire year — save people having to dig through that data to find out whether X and Y have done this part of the development cycle. You could theoretically ask the chatbot, if you had it connected to that data. It's going to save so much time.

Yeah, and I think it's interesting — it's probably a topic for another session, but for sure — you can also do training on top of your data. You can train these chatbots, and that's also something n8n integrates super well with, because you can just fetch your data wherever it's coming from — say, Slack messages that you want to train on — process it somehow, and fire a request to OpenAI with your modified training data. You could maybe even use a chain to, you know, expand that data, because you might only have a small data set and want to increase it by using GPT-4 to expand it. All of this is possible and super accessible.

Yeah, shall we continue? Oh yeah, for sure. I was going to say, theoretically you could have a Slack agent that evolves with the development of the company, so stuff you might have forgotten about ages ago — it will recall it for you. Yeah, it's great.

So let's give it one more tool, and that tool is going to be a Workflow Tool. What the Workflow Tool does is allow you to call another workflow, and to provide that functionality to an agent. You explain to the agent how this workflow is used and what query it expects, and it decides at which point the tool should or should not be called. I'm just going to name this "summarize_slack_thread", and we're going to describe it: "Call this tool to get a summary of the specified Slack thread. If the response contains Done, it was summarized successfully." Right now we don't have that workflow, so I'll leave it empty for now, but there is some information
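The workflow-as-tool idea in this segment — a sub-workflow exposed to the agent under a name and a natural-language description — can be sketched as a small data structure. Everything here is illustrative (the dataclass, the stub function, the input values), not n8n's or LangChain's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WorkflowTool:
    """A sub-workflow exposed to an agent as a callable tool.

    The description is all the agent sees when deciding whether to
    call it -- like the "summarize_slack_thread" tool set up above.
    """
    name: str
    description: str
    run: Callable[[dict], str]

def summarize_thread(inputs: dict) -> str:
    # Stand-in for the real sub-workflow, which would fetch and
    # summarize the thread identified by channel + thread_ts.
    return f"Done: summarized {inputs['channel']}/{inputs['thread_ts']}"

tool = WorkflowTool(
    name="summarize_slack_thread",
    description=("Call this tool to get a summary of the specified Slack "
                 "thread. If the response contains Done, it succeeded."),
    run=summarize_thread,
)

result = tool.run({"channel": "C42", "thread_ts": "1699.0001"})
print("Done" in result)  # the agent checks for this success marker
```

The "Done" convention is worth noting: giving the agent an unambiguous success marker in the tool's output makes it easy for the agent to report back accurately.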

### Segment 6 (25:00 - 30:00) [25:00]

that we can already fill in. Again I switch to Mapping here and go to the webhook. We're interested in the channel — we always have to pass the channel for the summary — so we drag and drop it from here and set it to channel. Another important value is the timestamp of the thread — thread_ts, or the message ts, I'm not sure which one it is yet, but basically the thread's ts.

I see a number of people — just because I'm paying attention to the chat — mentioning hallucinations. I think one of the things to point out here is that by providing your own data, you give it the potential to reduce the amount of hallucinations, because you're telling it what to look at specifically, and it has what you're looking for, rather than generalizing and hoping for the best.

Exactly, and you're also asking it to provide an answer based on that context, and you can put some guard rails around that to really check whether the data it provides is correct. Because you can chain multiple of these — be it agents or chains — it's very easy to control what the final information is. So here I've set up the workflow values that we're going to pass to this tool: the channel, and this thread timestamp. You can see here that I've used thread_ts — or json.body.event.ts — because... oh, sorry, that's not it, it's under body... let me fix that.

So while you're fixing this, just to elaborate a little more: the nodes that have been produced for AI LangChain generally have three uses. One, which we've just shown you, is the chatbot — the chat agent. Another is the chain itself, where you can provide it with a source of data, chain it, and set a number of parameters to essentially do a job for you that would normally take you ages to figure out, or ages to build multiple workflows for. And that's what we're trying to do now.

So we have that set up; I'll just change the name to summarize_slack_thread, and yeah, we can go ahead and start actually building this workflow. I open a new page, add a workflow, and the first thing it's going to do is accept inputs — it's going to be triggered when called by another workflow, and we're going to expect the parameters that we set up there, mainly channel and the timestamp. Let's just mock them for now: this is channel, and this is the timestamp. This is why you were trying to remember what the channel name is. Oh, you've got it. Okay, what happened? Oh, I misclicked, as always. Yeah, don't worry, I do that all the time.

So essentially what you're setting up is: whenever that tool is called in the other workflow, it will trigger the actions that we're now instructing this workflow to do. Yes — whenever that tool is called, this workflow will be called with the inputs for that tool. So we're just mocking for now: we've got channel and the thread. What we have to do is get all the messages from this thread — get the whole conversation to summarize — then filter the messages so we don't include the bot messages, in case there's already a summary in there, and then pass that to the chain to summarize. Let's do that. For that, we're actually going to need one more thing, and that is the bot user ID, so we know what to filter for. So we're going to add bot_user: I copy this expression here, and what we're looking for is authorizations[0]... let me copy it from here.

For those who are listening and can't see the screen: we're essentially setting up the next stage of the workflow. He's currently looking through the mapping panel to identify which part of the incoming information the workflow needs to ignore. It's not here, is it? Gilfoyle — isn't that his name? No, it has a number — they don't give you names, unfortunately. That would be useful. It would be.

Yeah, it's not bot user, it's json.authorizations[0].user_id. Right, so here — and that matches. So this is our bot user. We're also going to set that as a mock here, and yeah, now we can start building. As I said, the first step is that we need to get this conversation, so we use our Slack node, and we're interested in Channel — we want to get a thread of messages posted to a channel, because
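The values dug out of the event payload in this segment — the channel, the thread timestamp, and the bot's own user ID under `authorizations[0].user_id` — can be sketched as one extraction helper. The function and the trimmed mock payload are illustrative; the field paths follow Slack's Events API payload shape:

```python
def extract_routing_info(body: dict) -> dict:
    """Pull the values the summarize sub-workflow needs from a Slack event.

    The bot's own user ID lives under authorizations[0].user_id, and
    thread_ts falls back to ts for messages that start a new thread.
    """
    event = body["event"]
    return {
        "channel": event["channel"],
        "thread_ts": event.get("thread_ts", event["ts"]),
        "bot_user": body["authorizations"][0]["user_id"],
    }

mock_body = {  # trimmed example payload, not a full Slack event
    "authorizations": [{"user_id": "U0BOT"}],
    "event": {"channel": "C42", "ts": "1699.0001", "text": "summarize this"},
}
print(extract_routing_info(mock_body))
```

Mocking the extracted values, as done in the demo, lets the sub-workflow be built and tested without firing a real Slack event each time.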

### Segment 7 (30:00 - 35:00) [30:00]

that's where it's going the yeah there where message is Select our credential we get channel replies that's what we want so now we have to select the channel I click here by ID because that's uh that's what I know and I drag it here that's populated now it's asking me for message Tim stamp this TS which is what identifies the bread put it here and return all limit 50 we'll just leave it for 50 with and try executing it and we got that message uh which is great because that works but there's nothing to summarize here so we uh we need to use different chat or a different message uh let's do that quickly so we're just interested in listening for test event in the web hook and I'm going to call it now you have to uh more complicated have you prepped yeah I'll have this conversation about bread versus uh C so bread versus croissant yes it's w that sounds like and we're going to ask for to summarize this placee some and by the way this is one of the beauty of using these agents I think because you can just say something like some theet please and it will know what to do like you don't have to program for specific string to be included in your message let that to uh the agents to figure it out but anyway we got this response here I'm just going to pin it so that it's so I don't have to execute this test uh again from slack but I can just always have access to that data and uh we're going to copy the channel right here and prds which is this one we're going to set it here in our mock so RS is the channel the B user is still the same whoops Jesus I didn't save it oh no Cal sin not forgetting to save your workflow save than thank you very much uh I have to execute it so this data gets updated here in slack I'm really interested and now we have five messages from that one Fred yeah I've been pretty busy I was GNA say with myself you've been having a really in-depth debate about bre quest to yourself impressive you're not kidding so yeah those are the messages uh right now 
we're in schema view so this is an example of one of those messages look like so what we get is uh text we get used there's thread and we also get blogs those we won't be using so we'll just stick to uh text user I think that's it that's all what we need uh you can also switch to table view if you want to like go over these uh these messages or go over your data uh but what we said is that uh we want to filter out all messages from Gil foil because let's add one more message and make sure that it's there so we have in it be really interested to see what it comes up with is it going to be bread Quant as somebody living in Germany I suspect it would be bread but I don't know oh we tried to access the tool which it doesn't have access to yet because you haven't given it the workflow ID correct yes yeah exactly so it's saying it send the message but it didn't send the message interesting oh because yeah we have it set up to send message to the user but what we need to set up is send message to channnel list and again if at any point you are setting up these nodes uh and are unsure on why the errors are occurring because again na n has been quite good in if your workflow fails it will tell you where uh and where the issue is and you can go to that node you can click on the documents and it will help troubleshoot with you by giving you a step by step on how it should be set up so don't feel intimidated I promise it's very much approachable and very in intuitive when you get into it so what I've just done here is I set it to send message to channel by ID Direct in my ID channel ID the message text is still the same and I've added an option to reply to a message so it imply replies in that threat I dragged the thread time stamp and uh I didn't check in the reply to thread so now we can execute it execut it you see it went through MH and uh the message is now sent uh we didn't have to start the VB hook again because now it already remembers the data from the previous 
execution, so now when I'm in the node and I click Execute Node, it's just going to execute that single node. Yeah, and we have the response here, that's cool. Great, so now let's switch back to what we were doing, which is setting up this summarization. So now if I execute this Slack summarization node again, we have eight items, because some of them are new ones and bot messages, like for example this one. And we see this is

### Segment 8 (35:00 - 40:00) [35:00]

the... we see, sorry, we see an ID here of the user who sent the message, and we see that it matches our bot user, so we have to filter it out so it doesn't summarize its own messages. We can do that very quickly using a Filter node. We add a condition here; that condition is going to be String, I believe, because that ID is a string, and we say the user of the message does not equal the bot user that's coming in as input here. We run it, and we see it discarded one item, this one, which is the bot message, and kept the other seven messages. All right, so that gives us the messages we're interested in. Now we just need to join all these messages into one object, because right now there are seven items, so if I added a chain to summarize it, it would run for each of those items, and we want it to run just once. So we merge them using the Item Lists node, combining all item data into a single list. I execute that, and you see the response is one object, called data, which is an array of seven items. Great, now we can add our chain, a basic LLM chain, and I'm going to insert my prompt here: "You are a summarization engine. Based on the messages below, provide a short title, description and concise summary. If context requires it, mention sender IDs; they will later be replaced with actual names. Provide your summary below." So here we are iterating over the messages. You see it's red, so we have to fix that, because the data is now under `json.data`, and now we can map over it. So we're mapping over these messages and only including the relevant information for the LLM: it doesn't need to know about all the channel IDs and so on, it's only interested in the sender and the message, and that's what it's going to summarize. So that's done. We can execute it... we can't, because we didn't connect the model. I was going to say, it hasn't got a brain yet, we need to give it a
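The Filter and Item Lists steps described above can be sketched as plain JavaScript. This is an illustrative stand-in for what those nodes do, not n8n's internals; the message shape and `botUserId` value are assumptions:

```javascript
// Drop messages sent by the bot itself, then collect the rest into a
// single object so a summarization chain runs only once, not per item.
function prepareMessagesForSummary(messages, botUserId) {
  // Filter node: keep only messages whose user is not the bot.
  const humanMessages = messages.filter((m) => m.user !== botUserId);
  // Keep only what the LLM needs: sender and text.
  const data = humanMessages.map((m) => ({ user: m.user, text: m.text }));
  // Item Lists node ("all item data"): one object wrapping everything.
  return { data };
}

// Example with placeholder user IDs:
const sampleMessages = [
  { user: 'U111', text: 'Bread is better.' },
  { user: 'UBOT', text: 'Summary: ...' },
  { user: 'U222', text: 'Croissants, surely.' },
];
const merged = prepareMessagesForSummary(sampleMessages, 'UBOT');
// merged.data now holds the two human messages only
```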
brain. It does. So I've added the model. There are some configuration options you can set; the defaults would be fine, but let's just set the sampling temperature to zero so it doesn't make stuff up. If we expect long threads, it would also be a good idea to use a larger model with a larger context window, so GPT-3.5 Turbo 16k; let's do that too. And now we can run the chain and see what we get. There it is, okay, it responded with the summary. We can see how it happened; here it's pretty simple, because it's just a chain with an input that it sends to the LLM, but this is the output, and you see that we asked it to provide the properties title, description and summary, and it provided them, but as free-form text. So now we would have to parse it. What we can do instead is connect an output parser here, which tells the LLM what format we expect, what structure it should output. So, Structured Output Parser: here we need to provide a JSON schema of the output that we want. Let's say we want a title, which is a string, and a summary, which is also a string, pretty straightforward, and we want it wrapped in an object. Now it's connected; let's execute again and see how it changes. And now you see we got a JSON response with title and summary, so we can already use it further in our app, and it's parsed the way we want. If we take a look at the logs, you see there are two additional steps. One step was getting the format instructions from the Structured Output Parser node, the text that gets pasted into the LLM prompt to let it know how it should respond; you see the JSON schema instance here. And later, in the back end, the response from the LLM is validated against that schema. If you want even better control, or better reliability, of this output parsing, what you can do is connect an auto-fixing
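The schema given to the Structured Output Parser might look roughly like the sketch below, along with a minimal check mimicking the back-end validation step described above. This is an illustrative simplification; the node uses full JSON Schema validation:

```javascript
// Illustrative JSON Schema: an object with a string title and summary,
// like the one configured for the Structured Output Parser in the demo.
const summarySchema = {
  type: 'object',
  properties: {
    title: { type: 'string' },
    summary: { type: 'string' },
  },
  required: ['title', 'summary'],
};

// Minimal validation: every required key exists with the declared type.
function matchesSchema(output, schema) {
  if (typeof output !== 'object' || output === null) return false;
  return schema.required.every(
    (key) => typeof output[key] === schema.properties[key].type
  );
}
```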
output parser. This is a node that again accepts an output parser, but it also has a model connection, so you can connect a model to it, and now it can correct itself: if the message from the LLM doesn't adhere to the JSON schema we're asking it to output data in, it would tell the LLM, using this model, to try to self-correct, providing an example of
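The auto-fixing idea above can be sketched as a parse-then-retry loop. `callModel` is a stand-in for whatever LLM call the node makes, not a real n8n or LangChain API:

```javascript
// If parsing the model's raw output fails, ask the model to correct
// its own output against the schema, then parse the corrected version.
async function parseWithAutoFix(raw, parse, callModel) {
  try {
    return parse(raw);
  } catch (err) {
    const fixed = await callModel(
      `The following output did not match the required JSON schema ` +
      `(${err.message}). Return corrected JSON only:\n${raw}`
    );
    return parse(fixed); // a second failure propagates to the caller
  }
}
```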

### Segment 9 (40:00 - 45:00) [40:00]

a good schema, and because it knows exactly where it made the mistake, it would try to correct it. But we don't need it for now; it was fine with our structure. Any questions? I was going to say, it's good to know that you can not only use AI to answer questions, but it can also fix its own answers based on us just giving it a few parameters. And what you've just created has a lot of use-case potential. I've seen people using stuff like this to scrape web data, summarize it and have it fed into a feed they can check on a daily basis. For me, for example, I can already see how I would use this to draw from many different points on a daily basis, to summarize things like news, or summarize the meeting notes from across the entire week, or summarize data specifically for me to put into a report on a weekly basis. And the best thing, because of the workflows, is that you can even go one step further and tell it what to do with the data after it's done all that. What's also interesting is that this gives you another layer of prompting: we didn't set it here, but we can also set a description on these fields, and we can say, for instance, "a great summary written in the tone of a drunken pirate", and it should adhere to that description. Now I want to see this; let's see what it does. I want to learn from a pirate what's better, bread or croissant. Well, it failed terribly. Wow, okay, maybe that's a good thing; how well informed can a pirate be on a croissant anyway? Yeah, it's not paying that much attention to this summary description, but it is able to figure out the operations around the data very well. That was a fair attempt, but if you really want the pirate summary, you can definitely include it in the main prompt instead. I already know people will not just do a pirate; I've already
seen somebody post a question in the chat, "can you make a stream bot that takes in..." and I already know this means we're probably going to get celebrity AI bots built with stuff like this, based on whatever we've told them to be. I look forward to seeing what people come up with. The last thing we need to do here, and then we're done, is outputting this data. The "summarize selected thread" tool here expects a single `response` object, or whatever you specify your response property as; here you see `response`. So we have to make sure our workflow adheres to that, and we do that by adding a Set (Edit Fields) node. We add a field called `response`, and we use an expression again: the title, a new line, then the summary, like this. That's what we output, and we set it to keep only the set fields, so it keeps just this response. We have a title and we have a summary, and you can format it further to make it a bit more aesthetically pleasing. Incredible, and there are so many options you can format it with; there's the Code node and whatnot, but this is the simplest way. I know we're running out of time and I want people to have the opportunity to ask questions; did you have any other demos to show, or was it just these two? One more, yeah. Okay, I'm going to run through it really quickly. So let's just set this workflow ID here. I mean, the chances of this working on the first try are practically zero, but I have faith in you, I believe you, I know you can do this. Let's remove the Wikipedia tool to make sure it's always using this summarize tool; that should be good to go. We execute it, and here we go, we're going to find out: ultimately, is it bread, is it croissant? Oh, it got an output, that's promising, and it sent something... no way, it actually worked. "They're both beloved types of bread; croissants are known for their complex texture and flavor..." Wow, so it's on the fence, no definitive answer, and the response is very concise, but you can
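The Set step described above, building one `response` string from the parsed output, could be sketched like this. The field names mirror the title/summary schema from the demo; the exact expression syntax in the node is n8n's, so this is only an illustrative equivalent:

```javascript
// Build the single response string the tool expects: the title,
// a newline, then the summary (roughly what the Set node's
// expression produces from the parsed LLM output).
function buildResponse(output) {
  return `*${output.title}*\n${output.summary}`;
}
```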
definitely ask it to produce something larger. Okay, that's one tool. Another tool that we're going to connect, and it could be a workflow on its own, is one that adds this data, but I already have it set up, so I'm just going to go through it quickly; that's going to be faster than adding these nodes individually. So what we have here is the following workflow, and it does two things. First, it inserts data into the vector store, so when I click execute, it's going to get all

### Segment 10 (45:00 - 50:00) [45:00]

the files in a Google Drive folder; the folder is called "frontend docs", and I've put some PDFs from our knowledge base in there so I can ask questions on top of them. So I got the file IDs and the names. Next I need to download these files to get them as binary data into n8n, so we run this for each of them, and I get an output here in a moment... okay, four items. So I got these files, and what's cool is that you can view them and see that they're legit. Now that I have them downloaded, I want to insert them into my Pinecone index. I have a Pinecone index set up called "vector data", I'm going to use the namespace "frontend-docs-125", and I'm always going to clear the namespace before inserting new data, to make sure nothing ends up in there multiple times. I've connected the binary input loader, which takes care of parsing this data: I tell it where the binary data resides and how it can access it, and I can tell it to split the pages of the PDFs; maybe let's turn that off. Here all my data is PDF, so I don't have to decide whether to use different loaders, but that would be the way to switch loaders: if some of your data were EPUB or text, you would write an expression that checks the file name for the extension, or checks the MIME type, so you can choose the loader programmatically, dynamically. But I know my files are PDFs, and I don't want to split them, exactly, because they're pretty small files. I also have a token splitter, which is going to split these files into chunks; let's make the chunks a bit larger so there aren't too many of them, a chunk size of 3,000 with an overlap of zero. And I have an embedding: we're going to use the OpenAI embeddings to convert these chunks into a mathematical representation of the data, which then gets put into the vector store that we'll later search. And that's all we need, so now if I execute it
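What a token splitter does can be sketched very roughly as follows. Real splitters count model tokens; splitting on whitespace here is a deliberate simplification, and the function name is illustrative:

```javascript
// Break text into chunks of at most `chunkSize` tokens, with `overlap`
// tokens shared between neighbouring chunks (overlap 0 in the demo).
// "Tokens" here are whitespace-separated words, a simplification of
// real model tokenization.
function splitByTokens(text, chunkSize = 3000, overlap = 0) {
  const tokens = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  const step = chunkSize - overlap;
  for (let i = 0; i < tokens.length; i += step) {
    chunks.push(tokens.slice(i, i + chunkSize).join(' '));
    if (i + chunkSize >= tokens.length) break; // last chunk reached
  }
  return chunks;
}
```

A small document under the chunk size passes through as a single chunk, which is why the demo's four small PDFs only produced five items: just one file exceeded the limit.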
you see that this data is getting chunked here, it's getting embedded, and it's finally being sent to the vector store. You see that here we have four items, but the output is five items, and that's because some of it got chunked: some of it was over that token chunk-size limit, which resulted in split docs. And here's the page content, if you're interested in what's going in there; you always have these logs, so you can inspect exactly what got chunked, and even the embeddings, if you're into that. So now we have it in the vector store. This is a tool, so it has an Execute Workflow trigger; we're not going to use it for now, so let's disconnect it. Now we want to ask questions on top of that data, and that can be done using the Retrieval QA chain. All these chains you can find here under the advanced AI chains; there are some other nodes too, and the one for the vector store insert we already have here. So what's happening here is: we've connected the vector store retriever, and we tell it to retrieve the matching documents based on the embedding of the prompt, the query. I'm going to ask "how can I test workflow demo mode", because the data we just uploaded contains some of our testing guidelines. I have the chat model connected, and I have the Pinecone Load node, which is using that index... oh, it's not, actually, because I changed the name, so good thing we noticed. I'm going to update the Pinecone namespace to "frontend-docs-125", which is the same one we just used, and now it should be ready to answer that question. So if I execute, you see again that it's doing all this: it retrieved the data, passed it to the LLM, and here's the response, and it makes perfect sense. Here we can see the logs, so again this is the input and output, what it passed to the LLM. It tells you exactly, in Markdown, how to test demo workflows, which is something I have to look up all the time. We also wrap it again
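Conceptually, the vector store retriever embeds the query and returns the documents whose stored embeddings are most similar to it, typically by cosine similarity. A toy sketch of that idea, with tiny hand-made vectors standing in for real OpenAI embeddings and Pinecone's index:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the topK documents most similar to the query embedding.
function retrieve(queryEmbedding, docs, topK = 2) {
  return docs
    .map((d) => ({ ...d, score: cosine(queryEmbedding, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```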
and return it as a `response`, to make sure it's available to use as a tool. Now I've connected it, I save it, and I'll just copy the workflow ID, let's copy that. I'll also create a new tool, a workflow tool, but this one is going to be named "answer knowledge base question": "Call this tool to get an answer based on the company knowledge base. The input should be a concise question." And the workflow ID... that's basically correct, yeah, `response`. So that gives it access to the knowledge base, and now hopefully, if we save this and click execute, we should be able to ask it a question... nice, so it did exactly

### Segment 11 (50:00 - 55:00) [50:00]

what you said, pretty much, yeah, and that's it for the Slack bot, pretty much. So, just by using three different chained workflows, you've been able to create a Slack chat agent that uses custom data you've defined for it; you've then been able to source another data point for it to pull from, and it gives a summarized response based on the parameters you've set; and finally, and hopefully this answers the question from the person who was asking about Pinecone, you've been able to use vectors to enhance and narrow down the field of data it can pull from even further. That's powerful, I'm not going to lie. I can already see that people will utilize this far beyond anything we could ever imagine, and I cannot wait to see the content that's going to come from the community, and the workflows some of you are already coming up with. Just with the limited amount of exposure you've had through the alpha, the content you've been producing is great. But more importantly, if there is something you think is missing or should be enhanced, or you have any feedback that you think will help improve this even more, don't hesitate to let us know. You're in the Discord, you can post on the forums, and we're listening, we're always there. I know you can already see Jan lurking, and Jon lurking; there are quite a few of us lurking and paying attention to the feedback you have. So yeah, feel free to tinker with this and let us know if there's anything you think we can improve or that's missing, and I know we'll jump all over that. Those demos were very informative; thank you for showing them to us, Oleg, and I hope you are prepared for the next stage, which is answering everyone's questions. As you can all suspect, there are going to be a lot of questions, and a number of them have already been provided to us beforehand, so we've had a bit more time to actually go ahead and
answer some of those questions for you beforehand. If we don't get round to answering your question today, don't fret: leave your question regardless, and we will respond to it; no doubt Jan and Jon are already doing that, but we will get round to answering your questions either after the workshop today, or when we find the right person internally who knows the answer but might not be available right now. For now, I'm going to ask Oleg some of the questions that have been submitted already. If you have a question and you want to ask it directly, at the bottom of your screen you'll have a hand icon; feel free to click it, and I'll invite one or two of you to come and ask us directly here. So, the first question submitted to us was: how do you upsert data with particular metadata? Is that something we can do? That's unfortunately not something you can do right now with the vector store insert nodes that we have, but I've already started looking into how we can implement it in n8n, because it definitely seems like a very handy feature to be able to annotate your data even further, so I think that will be coming very soon. Like I said earlier about the feedback everyone's been giving, the sparks of inspiration that you're providing are really helping direct where the development will go, and this is a perfect example. Another example is your next question, which is: can you build AI voice translator bots? For example, can you send a voice message in French and translate it into English? Yeah, that was a very interesting question, and I got a chance to create a very quick workflow to demonstrate how that could be done with n8n. It's called "translate voice", and you see it's extremely simple. We use the ElevenLabs API to generate the French audio, so we give it some text that is in French, we execute this... oh, wow, this is
where you find out it doesn't work. We've got a bit of API lag... okay, now it gives you the data, which is the audio. What's cool again... wonderful. You probably can't hear it, can you? No, we can't hear it, but it's in French, trust me. French, yeah. Well, you're going to see in a minute, because now I'm going to use the Whisper API from OpenAI to transcribe that audio file, to convert it back to French text. So we made an API call for nothing, pretty much, because we converted text to audio and audio back to text, but it still matches, which is great. Then we can pass it to a translate chain that looks something like this; it's very simple: "translate to English", and we pass it the text. It's executed, and the output is English. Then we send it to ElevenLabs again to create an English audio file. So this would be one way you could implement it. Your audio can come from whatever place you want, or whatever place we support or that has webhook support, so it could be
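The translate-voice pipeline above can be sketched as a composition of three steps. The three function parameters here are placeholders for the HTTP-request and chain nodes in the demo (Whisper transcription, the translate chain, ElevenLabs text-to-speech), not real client libraries:

```javascript
// Audio in one language in, audio in another language out:
// transcribe -> translate -> synthesize. Each step is injected so the
// sketch stays independent of any particular API client.
async function translateVoice(
  sourceAudio,
  { whisperTranscribe, translateChain, elevenLabsTts }
) {
  const sourceText = await whisperTranscribe(sourceAudio); // audio -> French text
  const targetText = await translateChain(sourceText);     // "Translate to English: ..."
  return elevenLabsTts(targetText);                        // English text -> audio
}
```

Swapping the trigger at the front (Telegram, WhatsApp, email) is all it takes to change where the audio comes from, which is the point made next.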

### Segment 12 (55:00 - 59:00) [55:00]

Telegram, it could be WhatsApp, email, it could be, I don't know, a YouTube video transcript, whatever. So here we have an output, and now it's in English. Incredible. I wish I could hear it, but I'm taking your word for it; it's just the way Discord takes our audio, we can only have one input source. But that in itself, I think, would be utilized tremendously as a template. No doubt someone will create something like this and it will become a template; there's a lot of potential for using it for many different things. For example, I would use it in Discord to allow people to talk to one another across different languages, rather than requiring them to speak in individual channels. So just off the top of my head I can see how beneficial that would be. I think Home Assistant is a great example of a use for this, something I plan to do eventually, because you can have these small boards that connect to Wi-Fi; they just need to accept audio and output audio, and all the brains could be just a workflow that talks to OpenAI and generates audio. That could be a very powerful use case. Are you ready for your next question? Yes, please. The next question that came up was: will there be a separate branch, or do we intend to integrate this with the core n8n Docker image? So eventually, once it's out of beta, we're going to integrate it back into n8n, so it would come with the standard n8n release. The package is probably going to be moved out of that repository so it lives in its own repository, but it would be integrated into the main n8n. And just to clarify on that point: if you need more information, specifically about which Docker image you should be using right now to access this, we have a website, n8n.io/langchain, exactly. All of the information you could possibly need about how you can connect to this, how you
can get onto it, is available there. It's very important that you brought up the Docker image situation, because right now you need to use a specific one. Yeah, it's called ai-beta, I believe. So that's one way to run it, if you want to run it locally or self-host it. But we also have a cloud version, so if you go from here and click Get Started, you'll be able to create an n8n instance on beta that has the LangChain integration, so you can also run it on cloud if you don't want to host it yourself. This would be a great way, and it's also much simpler to set up credentials on cloud; let me show it quickly, because I think that's super cool. If I go here and click New Credentials, you see OAuth2; I just click Connect My Account, and because we're using an n8n domain, everything is already set up, so it's just a matter of clicking two buttons. Yeah, so the landing page is there, available, yeah, 100%. If there's anything you've been left confused about, or you want to get involved right now, head over to that website and it will give you everything you need to know to get up and running, if you aren't already; I'm assuming quite a few of you already are. Next question... let me just find the right one, where was it, I've gone a little bit too far. Are there going to be any tutorials on custom tool sub-nodes? I think what they may be referring to is the workflow or code tool. There's already some information about how to use that node in the template, so that would be a good place to start, and there are also docs. Let me open that template. I'm not sure which one is meant here, the Code tool or the custom LangChain Code node, because those are two separate things. One of them allows you to connect a JavaScript function and use it as a tool; here you can see that we're selecting colors, so this would be a tool to select a color, and that's one example. But another,
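The "select a color" Code tool mentioned above might look roughly like this as a plain JavaScript function. This is an illustrative sketch, not the template's actual code; the color list and matching logic are assumptions:

```javascript
// A toy tool function an agent could call: return the colour named in
// the query if one is recognized, otherwise pick one at random.
function selectColor(query) {
  const colors = ['red', 'green', 'blue'];
  const match = colors.find((c) => query.toLowerCase().includes(c));
  return match ?? colors[Math.floor(Math.random() * colors.length)];
}
```

The value of wrapping a function like this as a tool is that the agent decides when to call it and what query to pass, rather than you hard-coding the dispatch logic.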
one much more powerful example is the custom L chain code note I have to open it from here all right yeah um this one is not available on cloud so that's a sell hosted only uh so I'll W be able to show it now because you you're executing arbitrary JavaScript so
