Community Meetup May 7th, 2021: Series A, Product updates, Lightning talks

Table of contents (7 segments)

<Untitled Chapter 1>

Probably the question is always: what will change, is it good, is it bad? I can definitely say what will not change: we are going to stay focused on building a great community, a great product, and a great team. The investors know that; after all, they know that's what is important, it's why they invested, and because we already have all of that, they want to build on top of it. The money we got now allows us to execute on exactly that.

What will change, obviously, is that with a lot more money we can now grow our team a bit faster. This will, for example, also speed up development of the product generally, as there are more people to work on it. Twelve million dollars is obviously a lot of money, so it's a very strong signal to the market, and it gives people much more confidence in n8n generally. That helps us mainly in two important areas. One is hiring: it allows us not just to hire more people, it also increases the whole pool we can hire from, because people are a little bit scared of starting at a seed-stage company when they already have a great job. It's much easier when they know there's twelve million dollars there and the company is going to stay around for a long time; the money itself provides a lot of stability and security for them, which grows the pool we can hire from. And with a bigger pool to hire from, you can also hire much better talent, because we can be much more picky.

Another thing that is important for us is monetization, which is obviously very important for n8n in the long term: n8n Embed and n8n Cloud. This is actually very similar to how it works with employees: companies want to know, when they build on top of a product, that the company is going to stay around for a long time. And that is exactly what's possible now: with the money, they know we can hire a great team, the product is going to evolve much faster, and so on. So generally it's all super exciting, and I think it's a great signal for the whole community in general.

As mentioned before, we are growing our team substantially right now. So if anybody knows anyone, or is, a Head of Organic Growth, or would be great as Head of Engineering, a technical recruiter, or a developer, please apply or send them our way. And obviously thanks for listening, and thanks a lot for being part of the n8n community; without all of you this would obviously not have been possible. So thank you, everybody.

Thanks, Jan. Now we open up the room for some questions. Folks, if you have any questions for Jan, please feel free to send them in the chat and we will answer them. While we wait for that: Jan, just curious to know, how has this whole phase been? What's something you're exceptionally excited about?

The thing is, like I said, it's such a strong signal. We saw this with our seed round: before, people were sometimes simply scared of using a product from such a young company. This is very strong validation and it opens up so many doors for us. It's super exciting to see what kind of talent we're getting, and also, on the monetization side, so many companies have so much more trust in n8n now; it makes it so much easier to get people to use
n8n Embed and n8n Cloud. So I think that's the most exciting thing: being able to execute a lot of things that weren't possible before, just because we have this strong signal from the market and from our investors. And it's super exciting to have these new investors on board, who have many years of experience building great, amazing companies; I think there's so much we can learn from them, and it's obviously going to help in the long term.

Cool. Are there any more questions for Jan? What we'll also do is, if questions come up later, we can answer them during the office hours. So while we wait, our next speaker is Ricardo. When Ricardo was young he wanted to become a sniper, and now what he does is snipe repetitive tasks using n8n. Ricardo, take it away.

Okay, and I'm going to explain this sniping thing at the end. Let me share my screen with you... okay, "host disabled participant screen sharing"... you should have access now. Okay, let me see. Can you see my screen? Yes? Okay, awesome.

Thank you for being here today. I'm going to be talking about my n8n journey. I'll start by giving you a little background about me: I'm a software developer from Venezuela, and I've been living in the States for five years, specifically in Orlando, Florida. During my professional career I started out working with PHP, then did a little bit of Java, and I've been working with Node.js APIs for the last four or five years. I'm currently working as a back-end developer at n8n; however, I've done a bit of everything, from the docs to a bit of the core, developing nodes, and helping people in the community.

Okay, so let's start: how everything began. I had this habit of checking Product Hunt every morning to see what cool products are out there, and I remember that on October 8th of 2019 I came across n8n. I thought it was really cool; I downloaded the project real quick, spun it up, and something clicked in my mind. I really liked it: it was open source, self-hostable, and it was Node.js, which, as I mentioned, I had experience with.

To understand exactly why n8n clicked, I have to explain what I was working on at the time. I had created a company with a couple of friends called Clip. I could describe it as a barbershop marketplace: if you needed to get a haircut, you could go to the app, look for barbers around you, look at the work they'd done and the reviews people had given them, and even pay from the app; you prepaid your appointment, and the service provider had another mobile app where they could manage their schedule. Keep in mind that I was the only back-end person, the only developer, and of my other two friends, one was doing sales and the other was doing marketing.

While working on Clip, most of my time went into an API, the one you can see right now. We had more than 100 endpoints, so you can imagine that was taking most of my time, me being the only developer. But then my friends did this course on Udemy, "from zero to hero", and they thought they were email marketers now. So they wanted to send a lot of emails, like every day, and the way that worked is they would tell me to pull the data from the database and they would give me a template, and then I would literally go to the database, query the data, export a CSV, go to Node.js, import that file, iterate over it, and then call the Mandrill REST API. (In case you don't know, Mandrill is Mailchimp's service for sending transactional emails.)
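For anyone who hasn't touched Mandrill: the manual routine Ricardo describes maps to roughly the following Node.js sketch. The table, template, and merge-field names here are hypothetical, but the send-template endpoint and payload shape are Mandrill's real transactional API.

```javascript
// Hypothetical sketch of the manual job: query Postgres, then send one
// templated email per row through Mandrill's REST API.
const { Client } = require('pg');

async function sendCampaign() {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();
  // "customers" and its columns are made-up names for illustration
  const { rows } = await db.query('SELECT email, first_name FROM customers');
  await db.end();

  for (const row of rows) {
    // Mandrill's "send using a template" endpoint
    await fetch('https://mandrillapp.com/api/1.0/messages/send-template.json', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        key: process.env.MANDRILL_API_KEY,
        template_name: 'weekly-promo', // hypothetical template
        template_content: [],
        message: {
          to: [{ email: row.email }],
          merge_vars: [{
            rcpt: row.email,
            vars: [{ name: 'FNAME', content: row.first_name }],
          }],
        },
      }),
    });
  }
}

sendCampaign().catch(console.error);
```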
Well, that started okay; I was doing it just a couple of times, but then they wanted to do it every day, and I didn't have time for that. That's why, when I found n8n, I thought: okay, let me try to solve this small issue; maybe I can save a couple of hours here.

So then I came up with this "brilliant" solution, this "really complex" workflow. I don't think you're going to find anything complex in it; believe it or not, this was the first workflow I wanted to use n8n for. My idea was: I'm going to use the Postgres node, which was available at the time, do the query there every time I need to pull data (which is pretty fast, because the data was simple), and then just map that data to the template on Mandrill using expressions. That was fine until reality hit me in the face and I realized there was no Mandrill node out there. So I tried to use the HTTP Request node, which was available at the time, but the Mandrill API is pretty tedious: I would have had to input all the template variables manually. I didn't want to do that; I wanted the node to populate those fields for me automatically.

And that's how we get to my first contribution, which was of course the Mandrill node. I did it on October 26th of 2019. If you look here, you might think: oh, he sent it in October and it got merged, it went perfectly. I thought my node was perfect, but you can see what really happened here: this is Jan sending me these small fixes, a couple of small things I had missed. Keep in mind that at this point in time there was, I think, no documentation on how to develop nodes, or at least not like we have today, so what I did was just go into the repo and start looking at other nodes, using them as a reference. That didn't go so well; there was back and forth with Jan until I kind of figured it out, and even at the end I still had to make some changes before it got merged.

I guess I could have stopped there, because I didn't need another node at the time, but I enjoyed that process a lot. I mean, when you add your node to the editor UI and start playing with it... I don't know why, I just fell in love with that, and I wanted to keep doing it. So I started looking at the community and creating nodes for it: the community would benefit, and I would still keep having fun.

So, what kept me going, always developing nodes? The fun of developing, as I mentioned already. The positive feedback I was getting from the community: people were always thankful for the nodes I was creating, and with every message I read I would just get, you know, on fire, and I wanted to keep going and develop more nodes. And lastly, Jan's help: he was always pretty responsive, and without that I probably would have stopped and left the project. At that time we were chatting a lot, through email and in the community.
After doing that for a couple of months, one day I said: okay, let me check how many nodes I have created. That takes us here, to what I call my first milestone. I never set that goal; I just started to count and realized I had done more than 50. I told that to Jan, and he posted it in the community. I never thought I would hit that mark, to be honest; I had no clue I had gone that far. That was, as you can see there, a bit over three months after the first contribution.

At this point in time I was having some talks with Jan about working for n8n. To be honest, I wasn't completely sure I was going to be able to, because Jan didn't want to build a remote team and I wasn't able to move to Berlin, but, well, we worked it out. I can tell you that out of all the jobs I have had, this is the one I have enjoyed the most: I get to do what I like, working with an amazing team of really smart people, and to this day everything has been great.

In case you're interested in contributing to n8n: I guess people are not completely sure about this. They regularly think they can only contribute by developing nodes, but there's way more than that. You can work on the docs; you'll probably soon be able to help us with translations (I know one of my co-workers, Ben, is working on that); you can create how-to videos; you can share in the forum the experiences you're having with n8n; if you know how to do something and someone is asking about it in the community, you can help that person; and you can always let us know your feedback about the project.

I guess that's it. In case you didn't believe the sniping thing, this is me training to become the best sniper I can be. Thank you; if you have any questions, let me know. By the way, if you want to find me in the community, this is my username, and if you see this picture, that's me.

Cool, thanks a lot, Ricardo. Folks, if you have any questions for Ricardo, please put them in the chat; we have a small Q&A session coming up soon. Thanks a lot, Ricardo, you're an inspiration to all of us. And next up is Max, or should I say DJ Max: before Max was a product designer, he was actually a techno producer. So Max, mic is all yours.

Thanks a lot for that, Tanay. And if I may just say: Ricardo, great presentation. We have almost an internal joke at n8n that you can't give Ricardo an idea for a node, because he'll just build it during the conversation. This has happened to me a few times: I'd say in the morning, "hey Ricardo, this could be a cool idea for a node", maybe without even really having thought about it too much, and by the end of the day he might have already submitted the v1.

To that point, something I forgot to say: I challenge you to develop just one node. That's not possible; when you develop one, you want to do another one, and then another one. You cannot stop. I challenge you to develop just one. Challenge accepted!

Okay, thank you. So what we're going to do today is a short little update on what's happening in product here at n8n. Now that the team's growing, this is something we want to start doing a lot more. We've been in the community forums a lot more, and I just want to say off the bat: thank you, everyone, for your contributions to helping us guide how we build features and how we
think about what we're doing next. I have gotten some amazing responses in DMs; I mean, we're talking about Excel documents for user management, laying out how certain users see this feature, really helping us understand exactly what your needs are so we can nail these features from v1. I really appreciate that energy; please do keep it coming.

One thing we're working on right now, essentially putting on the finishing touches, and that we're really excited about, is a workflow tagging feature. Right now, if you're a power user of n8n and you have a lot of workflows, there is really no way to categorize them. This started off as a workflow-organization feature, and after talking with a lot of our community members the conversation really became: do we roll out with folders or tagging? We had an overwhelming response that tagging is the preferred choice, because it allows you to do a lot more sophisticated filtering and organizing of your workflows. So we listened, and currently we're polishing up the feature. Here's a little sneak peek of what it's going to look like: in a nutshell, you'll be able to create tags, assign them to workflows, and then, when you're looking at your list of workflows, filter down by them.

This is just the v1 of the feature. We're definitely channeling the minimum-viable-product approach, so we can get features out to you sooner and iterate on them. How we see this feature evolving: when we start thinking about things like dashboards, having a tagging system gives you an information architecture you'll be able to tap into when you want to generate certain views. In future you might want to populate a view of all workflows that errored and had a specific tag; with this in place, we'll be able to do a lot of those sorts of things. So this is an important foundation for a lot of, I think, very cool things in future, and it should also really help organize your workflows if you're getting beyond having just a few. We're working on getting it ready, and we'll be testing and releasing it, hopefully very soon, to all of you. As with any of these features, once it's out, please do reach out to us in the community forums; I'm on there almost every day reading comments and posts to understand, whatever we put out, how we're iterating on it, how we're learning from you, and how we're making it better week on week.

The next thing we'll be putting out is a refactor of our nodes panel. This all started with the realization that something we do every week is put out nodes; this is one of our core activities, and this time next year we hope to have many more nodes. We hope Ricardo's speech is inspiring all of you to go build extra nodes, and Ricardo as well. So we're going to have a lot of nodes to deal with, and discoverability is going to become more of a problem as we have more of them. One of the big things we wanted to add here is node categories: if you're looking at the repo itself, you'll notice nodes now have a JSON file, which we're calling the node codex, with a bunch of metadata we can now tap into in the nodes panel. So we'll be rolling out with node categories, and the search functionality is also going to be improved. We're going to be able to... okay, I'm going to wait for Max to join back in.
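While we wait, some context on the codex files Max mentioned: each node in the repo now ships a small JSON metadata file next to its implementation. A hypothetical example of roughly what one looks like; the exact fields and URLs in the real files may differ:

```json
{
  "node": "n8n-nodes-base.mandrill",
  "nodeVersion": "1.0",
  "codexVersion": "1.0",
  "categories": ["Communication"],
  "alias": ["email", "transactional"],
  "resources": {
    "primaryDocumentation": [
      { "url": "https://docs.n8n.io/nodes/n8n-nodes-base.mandrill/" }
    ]
  }
}
```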
So while Max rejoins, maybe we could take any questions we might have for Ricardo or Max up to this point. I see a couple of things coming in. Jan, maybe you could answer this one: is the tag feature development being done in some open, public branch on GitHub?

Currently... I'm not sure if it's already available; I think there should already be a work-in-progress PR, so you should be able to see it and check it out, though I'm not sure if it's the latest stage. But normally all the stuff should always be in a branch on GitHub, so you should be able to see it; I'm 99.9% sure it's there. Cool, we can maybe drop it in our Discord server later on. Yeah, I'll also look in parallel and post it into the chat window. Perfect.

So, while we wait (I'm not sure if Max is restarting his computer), I also see a message from MCNaveen, who says: dark mode, please!

Okay, next of our speakers is Miguel; I see Miguel is already here. You probably know him as one of the most active community members in n8n, and he has been programming since 1984, which is when he was nine years old, on a Sinclair Spectrum 16K. The stage is all yours, Miguel.

Hello, hi people. Can you hear me well? Yes? Okay, perfect. Sorry about the delay; I had several problems here with my computer, and finally I took my partner's computer and decided to do the video here. I'm not really comfortable with Apple (okay, I hate Apple), but I will try to manage with this computer for this meeting. Let me try to explain what I've been doing.

I love n8n, because it makes my life very easy when I try to integrate any kind of service or do any kind of task in our systems, on our servers. In this case, as you know, I developed a node, also with the help of Ricardo, that integrates uProc, which was my side-hustle project at the beginning and is now my main business. What I will do today is share a different way to scrape data from the internet. If any of you have tried to scrape data, you know there are several solutions out there: you can do it manually, that's an option, or you can use other software like Scrapy; I don't know if you know it, but it's a very common solution for scraping purposes. In this case I decided to see whether it was possible with n8n, because it has a lot of features that enable it: HTTP Request nodes, reading files, writing files, and things like that. So I decided to prove to myself whether it was possible or not. Obviously, we are always talking about small amounts of data, not huge numbers of addresses, because n8n is not made for that; but for a small number of pages it's perfectly suitable to achieve what you want: scraping data from several pages.

I will share my screen, this one. Let me know if you can see it... yes? Okay, perfect, let's start. As you see, it's "scraping like a boss". The purpose is to scrape a directory, a web page directory with lots of links, and instead of going "okay, first page, next page" and so on, we will do it in an automated way with n8n. So I created a local workflow to replace Scrapy and scrape static websites
with pagination: instead of paginating through every page by hand, we do it, or I did it, with n8n.

Well, some of you know me: I'm the founder of uProc, as well as an n8n community supporter and the creator of the uProc node, though this talk is more personal than that. What we usually do at uProc is create tools related to data, simplifying access to data. Instead of you having to find and process the data yourself on the internet, we locate that data for people and prepare it to be presented to the engineers, so they can explore it instead of doing the scraping themselves, the copy-paste stuff and all the other regular chores; we do that ourselves and then expose the data to the end users. We collect several types of data: we can work with persons, companies, products, and other things like financial data. We believe that the internet is a big data source; that's the main premise behind uProc: believing in the internet as a data source.

But what can you get on the internet? You usually find multiple data sources publicly available, but they are not all regularly updated. This means that sometimes they contain outdated data, sometimes unreliable data; it depends. And obviously they come in multiple formats: CSV, HTML, JSON, it depends. So what we usually try to do is locate the right data source and transform it.

WHAT TOOLS DID WE NEED TO CREATE?

What we have done is use n8n to extract the required data to create two tools. One of them is getting financial data by SWIFT code. As you may know, a SWIFT code is a financial identifier that allows you to detect which bank is behind a given code. An IBAN, by contrast, is like your account number, but in an international format, and the SWIFT code is what identifies the bank behind your IBAN account. So we created two tools: one for getting financial data by SWIFT code, meaning bank information such as which branch is behind it, the bank name, and other fields like zip codes; and another that, given an IBAN account number, looks up which SWIFT code it corresponds to. This makes it possible to serve both tools from just one data source.
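As a concrete illustration of why an IBAN-to-bank lookup is possible at all: the bank identifier sits at a fixed position inside the IBAN. A small sketch for German IBANs (the actual tool would then look this bank code up in a dataset like the scraped one to find the SWIFT/BIC):

```javascript
// Illustrative only: German IBANs embed an 8-digit bank code (BLZ)
// right after the country code and check digits.
function parseGermanIban(iban) {
  const compact = iban.replace(/\s+/g, '').toUpperCase();
  if (!compact.startsWith('DE') || compact.length !== 22) {
    throw new Error('not a German IBAN');
  }
  return {
    country: compact.slice(0, 2),    // "DE"
    checkDigits: compact.slice(2, 4),
    bankCode: compact.slice(4, 12),  // the bank behind the account
    account: compact.slice(12),
  };
}

// The well-known sample IBAN resolves to bank code "37040044":
console.log(parseGermanIban('DE89 3704 0044 0532 0130 00').bankCode);
```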

WHAT WERE OUR PAIN POINTS?

I understand that perhaps some of you don't know what working with this kind of data means, but I will try to explain it in a clear way. Basically, our main pain points are related to databases, because this data goes stale quickly. You can create your own database, but if the data is updated regularly at the source and you don't update your database, you will quickly be out of date, meaning you won't return the right data at the right time. So what we had to do is build the integration for the scraping, and classically that means coding. If you use, for instance, Scrapy, software made for this kind of task, you have to define which HTML or CSS selectors to apply to extract data from the various HTML pages. That means multiple data sources are hard to maintain: we work with a lot of data sources, and we want to reduce the amount of code for future extractions. And there are several kinds of web pages; one of the most common is a page that lists more pages, and you usually want to navigate all of them. That is what we fixed with this workflow.

How did we fix it? First, we decided that the most important thing is finding a good data source; by this I mean an updated, reliable data source. What we did is use this website, which you can access if you want SWIFT codes, and which has a consistent format. Look here: you see a list of countries, lots of countries, and if you want to scrape all of them, you have to go in and click on every country; then you get a page like this, the list of SWIFT codes available in Albania. In this case there is only one page, so that's not a problem, but if you go to another country with a lot of SWIFT codes, you'll find pagination here. So the goal is to walk through those pages and extract all the data available. Obviously, then, the first key point is the data source: it's the most important thing.
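To make that pain point concrete, this is roughly what hand-coded selector scraping looks like in Node.js with cheerio (a stand-in here for a Scrapy spider; the URL and selectors are made up). Every markup change on the target site means editing code like this, for every source you maintain:

```javascript
// Hand-coded scraping: every extraction is a hard-wired CSS selector.
const cheerio = require('cheerio');

async function scrapeCountryList(url) {
  const html = await (await fetch(url)).text();
  const $ = cheerio.load(html);
  // Hypothetical selector for the directory's country links
  return $('table.country-list a')
    .map((_, el) => ({ name: $(el).text().trim(), href: $(el).attr('href') }))
    .get();
}
```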

SOLUTION

Second, instead of coding, instead of using an existing scraping solution, we decided to use a workflow builder like n8n to drive the extraction: doing it visually rather than manually, in code. And obviously we tried to reduce the coding: with Scrapy you'd be required to do some coding, but with n8n, coding is not the main thing you have to do. You can create a complex workflow in a visual way, with just a bit of code to accomplish what you want.

KEY POINTS

As I said, you have to start with the list of countries, and here's what we did. There are several ways to scrape. People who regularly scrape data on the internet, if they use a solution like Scrapy, usually start with a directory, the webpage they're interested in, and from there the Scrapy program itself keeps track of all the visited pages, to avoid visiting them again in the future. What I did is follow the same rule, but using n8n instead of Scrapy.

The second step, important if you want to make a lot of requests, is IP rotation. It's very important because it helps you avoid getting your IP addresses blocked. For IP rotation I recommend one solution, Scrapoxy (scrapoxy.io), which rotates all the IP addresses for you.

I have published the workflow, this Scrapy-replacing scraping solution, on n8n.io; you can access it there, copy it into your n8n instance, and test it. I hope it works for you. But okay, I will share the workflow so you can understand better what I do. Let me zoom in. Basically, the first step is to get the list of countries, and after that I have to prepare the countries. By "prepare the countries" I mean: every country has a link, so I take those links and use them later inside the page loop to request every page, one page at a time. Let me run this and check... okay, I will stop here, because it's enough to understand what happens; we'll look at some output from the different nodes.

Look here. What I have done, basically: I get the HTML code of the first page, this page, and using an HTML Extract node I pull out all the countries; not only the country name but also the link to the country's page. Later I split all the countries with the SplitInBatches node, and after that I use uProc, because I need to translate the country name to an ISO code, which I need in order to identify the country by a short code. In this case, as you see, Albania is identified as AL, but that's only for internal purposes.

Then I have the Albania page, and here is where you can find some magic; you can check it later if you want. Some people ask me: how do you do the pagination across several pages? What I use is n8n's workflow static data. I work with global variables; this way I'm able to save state. Suppose I'm paginating: I'm at the first page and then I click through to the second page, but I want to know which page comes next, so I have to save the next page and update the static variable so I know which page should be visited next. The first time through, I start with the country and no page is defined, meaning it's the first one; but if there is a page defined, that means there is a further page to be visited.

I also prepare several things here, and one of them I mentioned before: when you scrape something, the goal is to make as few requests against the server as possible, to not overload it. This is important: you have to be polite when scraping.
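The pagination trick relies on n8n's workflow static data. A minimal sketch of how it might look inside a Function node, assuming a hypothetical nextPage field produced by the HTML Extract node (note that static data only persists across production executions, not manual test runs):

```javascript
// Inside an n8n Function node: remember the next page between loop passes.
const staticData = getWorkflowStaticData('global');

// "nextPage" is empty when the page has no next button
const nextPage = items[0].json.nextPage;

if (nextPage) {
  staticData.nextPage = nextPage; // visit this URL on the next pass
} else {
  delete staticData.nextPage;     // country finished, move to the next one
}

return items;
```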
Additionally, you can apply a timeout, or a Wait node, to avoid firing a lot of requests at the same time against the server.

In this case, I also experimented with n8n's ability to read and write files on the n8n instance: every time, I check whether there is a local file named after the webpage I want to visit (the name tweaked a bit, with special characters removed and things like that). What I'm doing here is asking, in the workflow, whether there is a previously saved copy of that URL. If there is no previous copy, I request the URL and save it to this path; later I read the file and extract the data from it. As you see, this is the first page, and I get a lot of bank names, a lot of SWIFT codes, a lot of cities and branches, all of that, to present there.

Then I prepare the documents: I take data from several previous nodes and shape the output to be inserted into MongoDB, so at the end several documents are created in this collection. And there's the second part of the magic: I recover the global variable and detect whether there are more pages. It's a value extracted by the HTML Extract node, this one, here: there is a next-button value that will be empty if no more pages are available. At the end, as you see, there are a lot of pages, a lot of entries, and all of them get inserted. That's the workflow and how it works; it has no more secrets. I recommend you check it out; the workflow is published in the n8n workflows directory, so just import it and try it yourself.

Okay, a couple more slides. What can you do in the future to optimize it? Because, as you just saw, if I have a lot of pages this will probably be really slow. Again, you could launch all the queries against the server at once, but that is not polite, so we avoid that kind of request. And we are using multiple IPs: when I launched this (this is the Scrapoxy panel) and all the requests were done, look here: you see several requests going through the proxy to the internet. So instead of coming from the n8n instance's IP address, the requests go out through the IP addresses of those instances running on Amazon.
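Miguel builds this cache with the file read/write capabilities of the instance; expressed as plain Node.js, the idea is roughly the following (the cache directory and file-naming scheme are assumptions):

```javascript
// Cache each fetched page on disk so re-runs don't hammer the server.
const fs = require('fs/promises');

async function fetchWithCache(url) {
  // Derive a safe file name from the URL (strip special characters)
  const cachePath = '/tmp/scrape-cache/' + url.replace(/[^a-z0-9]/gi, '_') + '.html';
  try {
    return await fs.readFile(cachePath, 'utf8'); // previously saved copy
  } catch {
    const html = await (await fetch(url)).text();
    await fs.mkdir('/tmp/scrape-cache', { recursive: true });
    await fs.writeFile(cachePath, html);
    // Be polite: pause before the caller fires the next request
    await new Promise((resolve) => setTimeout(resolve, 1000));
    return html;
  }
}
```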

OPTIMIZATIONS

Now, more focused on optimizing the workflow itself: I recommend using multiple environments. Working in a development environment, where you can save all the workflow executions, is not the same as working in a production environment, where you probably don't want to save the execution history of every workflow, because that consumes a lot of disk space and read/write access. So, second tip: remember not to save workflow execution history in production environments. Also, running it from the terminal would probably be faster; I haven't tested it, but probably. And perhaps using screen could be nice; I didn't test that either, but it could be useful too.
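n8n exposes execution-history saving through environment variables, so the production setup Miguel suggests can be sketched like this (values are illustrative; check the n8n docs for the full list):

```
# Illustrative production settings: keep failures, skip the rest
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
```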

FINAL THOUGHTS

Okay, final thoughts. You have to be open-minded: if you have a problem, or want to fix anything that worries you, I recommend using your imagination to decide how you want to fix it, and then trying to use n8n to accomplish it; usually you can do anything you can imagine, but you have to test it first. I absolutely recommend using the internet as a data source. It depends on the source, obviously, but you will find lots of data out there; it depends on what you're looking for, and on whether the data is public or not. And obviously, using no-code solutions makes everything easier: instead of coding everything, you write only the minimum required code. That's why I love n8n: I avoid coding a lot; I only code what I need. In this case, I only coded the pagination part. That's all. Obviously, if you do scraping...

Sorry, just a one-minute notice, we've got to jump to the next talk. Okay, sorry, I'm over time. Well: if you want to test uProc, you have a code there. And obviously, thank you to the n8n team and all the people making this happen. You can check the code there; thank you for the time, and I'm sorry about my timing.

Thank you so much, Miguel, this was really great. And would we be able to access the presentation as well, would you mind? Yes, sure, I can share it if you want; I have shared it with you on LinkedIn, I think. Oh, perfect, then I will share that in our Discord server as well. Thank you so much, Miguel.

Miguel, real quick, I have to say: in terms of complexity, the workflow that you shared is head to head with my first workflow. With your first workflow? Yes! You should try it; try it, you will love it, I think. Cool. If you like to scrape data it's perfect, because you can apply it to other similar pages. Awesome, thank you so much, Miguel.

All right, and next we have Brian. Something about Brian you might not know is that he worked at the Hansa recording studio in Berlin for a month, where he edited folk music; it seems like we have a lot of musical speakers in the meetup today. So thank you so much for joining in, Brian; ready when you are. Okay, Brian, I think I have to unmute you... yes, there we go.

Thanks a lot, Tanay. So, thanks for having me, and also congratulations to the n8n team on the Series A; you should all be super proud. Let me share my screen; I think I need some permissions for that as well... there you go. Great.

Cool. So I don't have too many slides. For those of you who aren't familiar with QuestDB, a quick intro (we already had a great session with Tanay and Harshil where we did a deep dive), but just in case you're not familiar: QuestDB is an open-source time-series database, built specifically for dealing with time-series data. And if you're not familiar with what time-series data is, it's basically just data where you're tracking how something changes over time. It's probably more common than you would think, if you're not familiar with the term or with why you'd need a specialized database for it. You can think of, for example, stock tickers or
financial data; that's where these databases first grew in popularity, because people were trying to track how stocks change over time, and existing databases aren't really optimized for such use cases. I mean, there's nothing stopping you from doing it, but the longer you use systems like that, the quicker you'll run into performance issues; if you're querying huge data sets, you're going to run into trouble. So that's where time-series databases come from, and it's not specific to financial data: time-series data is everywhere. Application data, IoT data, internet communication data; there's basically been an explosion in the types of data we're interested in collecting. That's why we have specialized tools that are really performant at both storing and querying it.

Actually, instead of going through slides, I decided we'd get hands-on with it; that's always the fun part, right? Basically, I thought we would start from scratch. I'm going to run n8n in Docker; I'm running all of this on my machine, so hopefully it doesn't fall over. We're running the n8n Docker image here, and we're starting QuestDB up. The first thing we get when we start QuestDB is a web console, this guy here. At the top we have a SQL editor, where we can write our SQL queries; on the left we have the schema explorer and tables, with nothing in there at the moment; and at the bottom is where our results are returned. Let me just enlarge it a bit so you can see. Great: we have a database running and we have n8n running, so now we need some data. One tool that I really like using for this is Telegraf, which can be used in a lot of different cases to take metrics from lots of different application sources, and what you can do in this case...

Brian, I think we are seeing only a white screen. Oh, let me stop the share. How about now? I still see a screen, but I think it's frozen; it's a blank page. Oh, okay, that's unfortunate; let me stop the share for a second. Sure.

While Brian debugs the screen share: Max, curious to hear a bit more about what's new with product... oh, it works! Is that any better? Let's see... I think I only see a black screen now. Oh, that's unfortunate. Let me try to restart it. Cool. Max, are you still here? Hey, Max? Okay, it seems we have lost Max. That's the usual curse of the demo, right? Yeah, exactly; and not even during a demo, unfortunately. Oh, that's a shame. "Not enough offerings for the demo gods," Chris says. That's true. Okay, I'm sorry about that. That's really fine; do you want to try logging back in? Yeah, I'll give that a shot. Meanwhile we'll invite Harshil in; and Jonathan says: maybe just kill Zoom. Yeah, I think that makes sense.

So, Harshil is also one of our speakers. Unfortunately Ben couldn't make it today (he has promised he'll jump on a live stream with us one of these days to share his story), and filling in for him is Harshil. Harshil mentioned that he used to be a good singer in his school days, so hopefully we get to listen to some melodies soon. Harshil, please take it away.

Awesome, thank you. Let me start by sharing my screen. Okay, can you
all see my screen? Awesome, yes. Okay. So, I used to be good at singing; I'm not anymore, but I like music, and I did some experiments with it and built a workflow, a tool that I use to control my Spotify player. My talk is titled "Let n8n play the music". I'll do a quick introduction, because most of you know me: my name is Harshil, I love to experiment with tech, I've started doing live streams on Twitch, and I've also created my own YouTube channel, so please do follow me over there.

Now, about this talk: I'm going to show a command-line tool that I built to control my Spotify player. Being a developer and a command-line nerd, I enjoy controlling my apps via the terminal. Let me quickly show you: this is my terminal on the left, and my Spotify music player. I'm just going to play some music now and hope that you all can hear it. Okay, so this is running; I can see Ricardo nodding, so I assume you can all hear the music. Now I'm going to run this command, "spotify pause", and it pauses my player; voilà. I built this whole thing using a few tools, with n8n connecting them. Let's now change the song and play the music: "spotify next". Okay, interesting; let's just try that again. So now we have another song, and we got the name of the song as well as the name of the artist. I'll pause the song now: "spotify pause". All right, back to my slides.

The workflow is very complex, let me tell you beforehand... no, I'm just kidding. It's a simple workflow; you can see it over here, it's hardly seven or eight nodes, not more than that. But let's look at what tools I used. I'm a JavaScript developer, so I decided to create an npm package to control my Spotify player. Then there was the question of how to connect my Spotify player with my npm package. I'll be honest with you: I try to avoid writing a lot of code whenever I have the option and just go the no-code, low-code path, and that's where n8n comes into the picture. Using n8n, I connect my command-line tool with my Spotify player. The last thing you need, of course, is a Spotify account.

Now let me explain what goes on behind the scenes. I run the command; it makes a request to n8n, which intercepts the request and tells Spotify to play the song, pause the song, or whatever command I ran; the Spotify player performs that action and produces the response we saw, the name of the song and the name of the artist; and then we display the message. So, in the diagram: we run the command; the orange box over here is n8n; n8n intercepts the request and tells Spotify what to do; the Spotify player does that action and sends some information back to the terminal; and the terminal displays it.

As for features, the tool has a very nice help command that lists all the commands I have integrated so far: you can play a song, pause the song, change the song (go to the next song in the queue or back to the previous one), and "spotify current" gives you information about the current song. All of this, again, is just a webhook call to n8n, which runs the process and then returns what gets displayed on the terminal.
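The CLI half of a setup like this can be surprisingly small. A hedged sketch of the idea, with a hypothetical webhook URL and response shape (Harshil's real package is the one on GitHub):

```javascript
#!/usr/bin/env node
// Hypothetical core of a CLI like Harshil's: forward the subcommand to an
// n8n Webhook node and print what the workflow sends back.
const command = process.argv[2] || 'current'; // e.g. pause | next | current

async function main() {
  const res = await fetch('https://n8n.example.com/webhook/spotify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ command }),
  });
  // With the Webhook node's Response Mode set to "Last Node", this is the
  // output of the final Set node: song name and artist.
  const { song, artist } = await res.json();
  console.log(`${song} by ${artist}`);
}

main().catch(console.error);
```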
Let me share the workflow again. The IF node over here just checks whether the command is "start music": if so, we go to the true branch, get the current ID of the song, and tell Spotify to play it; and in the Set node I'm just picking out the name of the song and the artist. If the command is anything other than "start music", I pass the operation dynamically in n8n: whatever the command is, it's fetched dynamically and the operation is recognized over here; then I get the ID, perform the operation, and a Set node over here, similar to the previous one, again just extracts the song and artist name. So yeah, that's it.

There are a few features I still want to work on. One is playing a song by name: let's say I want to hear a song by Ed Sheeran, maybe "Photograph", so I type "spotify play photograph", and it goes to n8n, finds that song, and plays it in my Spotify player. The other is getting the list of all my playlists, with the ability to select a specific playlist and play the songs from it. Those are the two features I still want to add, but if you have any other ideas, or if you'd like to try it out, you can check it out on GitHub; I'll share the link in the chat soon. It's open source, I have listed all the instructions on how to install it, and if you just want the workflow, you can get that as well.

Awesome, thank you so much, Harshil, this is amazing; I must say I've never tried dynamically passing an operation before, so that was quite great to see. Cool, so next up we have Brian again. Brian, are you here? Okay, I see Brian. Yes, I'm here; thanks, Tanay. Let me give it another shot. I think you need to enable screen sharing... yes, there you go. Great. Okay, so how does that look? Perfect, we see your screen.

Okay. So this time I killed all of my Docker stuff, just in case that was the culprit. If you want to try it out, that's how to get it up and running: it's just two commands to get n8n and QuestDB running. Then it depends on how you're sending data into QuestDB: port 9000 is for the web console; 8812 is for Postgres, which is what the QuestDB node in n8n uses; and 9009 is for InfluxDB line protocol, which is specialized for very high throughput.

What I can do is show you the workflow. Obviously, because I killed all of my Docker stuff, it isn't running right now, but I can at least talk you through what it's doing. It's not quite as epic as Miguel's workflow, but: what we're using here at the top is the n8n QuestDB node, which talks Postgres, and right there we can write some SQL to directly query the database. In this case, the Telegraf agent (let me just enlarge this so you can read the text a bit better), the Telegraf agent collects system metrics. Typically you might have a hundred machines, or maybe a thousand, each with this client sitting there sending metrics to some database, some server. I like using it a lot for demos, because it picks up your localhost's system
info, and you can send that directly to QuestDB; it's very quick to set up, and you get dynamic, real data rather than trying to mock something up. If you start the Telegraf agent, it creates a table called "cpu", and here you can just get the most recent 10 records. That's one of the benefits of being able to use QuestDB here: you have SQL, so you don't need to learn some query language that takes a long time and is maybe a little bit esoteric. SQL has been around for about 50 years; pretty much every question you have is going to be answered on Stack Overflow if you're not sure how to do something. There's a ton of really good resources, and that's why it's a great language for asking questions about what your data looks like.

Then, on the left, we had a cron job scheduling the whole workflow; I think this one ran every minute, but it might be more useful to do it every hour. What it does is check our table for CPU metrics, make a CSV file from the results, and upload it to S3. You could use this for backups, if you want S3 as a place to store hourly reports.

But here's what I think is probably most interesting: collecting information is not really the hard part, right? If you're collecting information from a hundred different sources, or maybe it's financial data, ticker data, getting the info is the easy part; trying to make sense of it is the tricky part. That's where you're trying to get rid of the noise and extract some useful insights. So I thought this part was interesting: here you're able to use the REST endpoint to also send SQL, and we're selecting machines, grouped, where the system CPU usage is over a certain amount. This would be for cases where you want to find outliers: say, your top three misbehaving devices, or something using a lot more system resources than it's supposed to. Quite often in IoT situations you want your top five devices using the most data, or the top three most active, or the ones you haven't heard from in a while. Being able to write queries like this to generate reports of the top five troublesome devices, the ones you need to take a look at, is quite useful. This example runs per minute, but maybe you'd want an hourly or daily report. And like I said, the query for making a top-three-devices-by-system-CPU-usage report is fairly simple; I think that would be good for an IT use case.
And I think there are plenty of ways, in between this query fetching all of the data, to use some other nodes for enrichment purposes as well. Say you have a lot of raw information going into a database, and somewhere else you have a spreadsheet or a table with data you want to associate with it, something like addresses, or geolocation that you don't want the devices computing themselves; you could use that as an enrichment step before saving to S3. So yeah, that was the workflow. Of course, like Miguel, I can share it, or drop a link to it in the Discord chat, and if anyone has questions about how it works, please ask away.

So, why should you use QuestDB in your workflows? If you're analyzing financial data, or working with IoT data, it totally makes sense, because it's the right tool for the job: a high-performance database. And that's something a lot of you can find out for yourselves; we have a publicly available demo with a 1.6-billion-row data set that you can query in a couple of milliseconds. When I say it's high performance, it's super fast. For data enrichment it would be a really good fit in some n8n workflows; if you want to write SQL, it totally makes sense; and if you really need high throughput, if you're sending from hundreds of different clients into one table, that's what you want to use. You have the option to use Postgres, which is what the native n8n node uses, but you can also use REST, like this workflow does.

And try out the demo: go to demo.questdb.io, and up at the top we have this example-queries selector, so you can try a lot of interesting ones. The "seasonality of weather" one is good; that data set is eight years of weather data, and we're querying 1.6 billion rows in one millisecond, with aggregates on top of it. There's quite a lot of fun stuff to play around with there, so I recommend you check it out. So yeah, that's it for me.
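For reference, the REST variant Brian demoed boils down to sending SQL to QuestDB's /exec HTTP endpoint. A small sketch against a local instance; the table and field names assume Telegraf's cpu measurement, and the query itself is illustrative:

```javascript
// Query QuestDB over REST: rank hosts by average system CPU usage.
const sql = `
  SELECT host, avg(usage_system) AS avg_cpu
  FROM cpu
  GROUP BY host
  ORDER BY avg_cpu DESC
  LIMIT 3
`;

async function topBusyHosts() {
  const url = 'http://localhost:9000/exec?query=' + encodeURIComponent(sql);
  const { columns, dataset } = await (await fetch(url)).json();
  console.log(columns.map((c) => c.name)); // ["host", "avg_cpu"]
  console.table(dataset);                  // top three rows
}

topBusyHosts().catch(console.error);
```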
Cool, thank you so much, Brian; it was great to see. We're nearing the end of our time, so before we start the office-hours section of the meetup, which will be Q&A, I'd like to extend a big thanks to our speakers for their lovely sessions and for braving all the technical issues we faced today, and thank you to all of you for taking the time to attend this month's meetup. It doesn't have to end here: let's continue the discussion on our Discord server, and in case you aren't a member of the server yet, here's a link in the chat so you can join. We're also holding our first-ever icebreaker on May 21 at 5:00 pm CEST, where you can meet other members of the n8n community during multiple five-minute one-on-one chats, discover what they're working on, and learn about their best practices; if you haven't already signed up, I'm going to drop a link to that in the chat as well. In case you're heading out, I wish you a great weekend; for those of you staying on, please send in your questions. And while you do that: Max, would you like to jump back in and maybe recap some of the things you were showing us before?

Well then, let's try this out: round two. Okay, so where we left off, just to recap: what we're working on right now is wrapping up workflow tagging and releasing it, which should help anyone who has more than a few workflows. Up next is the nodes panel. As we were saying, we want to be creating lots of nodes, so the discoverability problem for nodes is going to grow over time; hence node categories. And we're going to vastly improve the searchability of nodes: for each node you'll be able to search the description and alias terms. Right now, if someone types in "javascript" they won't find the Function node; with aliases, over time we can add all these other terms that might help you find a node. Since we're doing this refactor anyway, there are a few UX improvements too, and for new users, when a node you want doesn't exist, we'll show hints on what you might do instead: the HTTP Request node, a webhook, or requesting a new node from within there.

Beyond that, we've had some new team members join; most recently, David has joined as our new Head of Product, which means we've really doubled the bandwidth on our product and design team, and we're starting to have much more concrete conversations about a roadmap internally. We're still codifying it; there are definitely requests from the community for a public roadmap, and we hear you, but what we can show right now are some roadmap themes we're focusing on over the next couple of months.

One of the big ones is getting n8n to version 1.0. The big push there is ensuring we have a nice, stable 1.0 for mission-critical production workflows. Part of that, as you know: we're very conscious whenever we make a breaking change in n8n, because your workflows are important and they're solving real problems. But as we prep for 1.0,
Beyond that, we've had some new team members join; most recently, David has joined as our new Head of Product, which means we've really doubled the bandwidth we have on our product and design team, and we're starting to have much more concrete conversations about a roadmap internally. We're still codifying that. There are definitely requests from the community for a public roadmap, and we hear you, but what we can show right now are some roadmap themes we're focusing on over the next couple of months.

One of the big things is getting n8n to version 1.0. The big push there is ensuring we have a nice, stable 1.0 for mission-critical production workflows. As part of that, as you know, we're very conscious whenever we make a breaking change in n8n, because your workflows are important and they're solving real problems. But as we prep for 1.0, there are a few band-aids we'll likely be ripping off, and a lot of that comes down to consistency in the node experience. Certain nodes have different rules when it comes to using them, and that disparity between nodes, as we grow, basically puts a lot of load on users to figure out and track. So we're going to work on standardizing node experiences, with "experience" meaning the input, the output, and the expected result. That should help, because once we codify that, it's something we can also share with the community for your custom nodes, so there will be a lot more best practices coming out of it.

Another big feature we're working on right now, still in the research phase (and I've been talking with a lot of community members, so thank you very much), is user management. Our plan at this time is to lay out a rather in-depth roadmap so it can meet the needs of big teams and lots of other scenarios, and with that roadmap set, so we know where we're going, we'll roll out phases iteratively. The first user management release will be rather simple, speaking to the need of "I would like to separate users so they can have separate workflows and credentials, and be able to share them." From there we'll look at adding things like groups and workspaces and all the typical things you might see in user management functionality, to enable larger and larger teams to work together, or to silo various projects. On that note, in the community forum, if you type "user management" and hit enter, there's a user management and privileges thread where people are actively posting their requirements and needs. We check that all the time, so please do contribute, because while we're still in research, a lot of what we'll commit to in v1 will be the foundation, and all that insight will be very helpful as we plan the roadmap for the rest of the feature.

Another big push for us is making n8n more intuitive. This is obviously something that happens over time and that we'll be doing continually, but we've identified a few patterns where we see various fixes that would improve the experience, not just for new users (though there's definitely a focus there, so we can get more folks using n8n) but also for power users. If you've ever used an app very deeply, the smallest change can really affect that experience, so now that we have more bandwidth on product, we're starting to meet with a lot more users, understand how you're using it, and optimize for that.

So, to recap the short term: workflow tagging is coming out real soon; next up is the nodes panel refactor; and while those things are happening, as always with weekly releases, there are lots of small improvements and new nodes coming out. Once workflow tagging and the nodes panel are out, the really big thing we'll be tackling is user management.

The last thing I wanted to say is thank you so much for all the feedback, advice, and pain points we're getting on the community forum. As I said, I try to be in there every single day, and I'd like to say a big thank you to everyone who has already given me a lot of insight. When I message some users, I'm expecting maybe a sentence back, and I'm getting Excel documents and charts and all manner of very helpful resources that definitely take time to make, so we really appreciate it.
I read every single one. Sometimes we're even overwhelmed by the response, so if you don't get a reply back in the typical n8n fashion of a few hours, it's because we're spending time on each reply, but we will get back to you. If you have feature requests, or there are features we've discussed where you have specific requirements you'd like considered, please do pop into the community forum; the feature requests section is where you can find all the various things people are requesting or that we're proposing to build. I'd love to hear from you. So that's the update from product. Over the next months, something we're going to try to do is iterate on this format and figure out good vehicles for keeping the community in the loop on what we're doing in product, because the relationship between product, design, and the community is going to be very important so we build the right features for you. So just watch this space. It's very important to me personally that you feel you have a voice in the community and that you can see your input translate into a nice, usable feature that gets put out. Thanks again, everyone, for all your support so far and all your insights.

Awesome, thank you so much, Max. Folks, if you have any questions for Max, please feel free to drop them in the chat. I see a few questions already that were unanswered. For Max: will the user management feature be available only for cloud?

No, absolutely not, and I'd say as a general rule, none of these core features we're talking about will ever go out just for cloud. That's an important tenet from day one with n8n cloud: we don't want to create a premium n8n that's just pay-to-play. We want to ship a feature for core first and then put it on cloud, so on average there will probably even be a slight delay before a feature reaches cloud, because releasing it on core and getting feedback from our self-hosted community lets us improve and harden the feature before rolling it out. So user management functionality will absolutely be available to self-hosters, probably even a bit sooner than on cloud, because cloud has extra infrastructure we have to add it to.

Perfect, thank you, Max. Then I see a question for Harshil: how exactly are you getting the response into the CLI for your Spotify tool?

So in the Webhook node there's a parameter called Response Mode. If you click on that, you'll see two options: one is "On Received" and the other is "Last Node". I'm just taking a look at my workflow again to make sure. Yeah, so if you select the "Last Node" option, you can then choose what data to send: all entries, the first entry as JSON, or the first entry as binary data. Now, the reason I have the Set node in my workflow is that the "Last Node" setting only sends back the information that's in the last node. In my Set node I'm getting the song name as well as the name of the artist, so when I make the call, the workflow runs, the Set node picks up that information, and that information is sent back with the response. So instead of getting the default "Workflow got started" message, we get the name of the song as well as the artist. I hope that answers the question.
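As a rough sketch of what this looks like from the caller's side, the snippet below hits such a webhook and prints the result. The webhook path and the song/artist field names are hypothetical and depend on how the workflow's Set node is configured; n8n's default port 5678 is assumed.

```typescript
// Hedged sketch: calling an n8n webhook whose Response Mode is "Last Node".
// The webhook path and the `song`/`artist` field names are hypothetical;
// they depend on how the Set node in the workflow is configured.
async function nowPlaying(): Promise<void> {
  const res = await fetch("http://localhost:5678/webhook/spotify-now-playing");
  // With "Last Node" selected, the body is the output of the final node
  // (here, the Set node) instead of the default "Workflow got started" text.
  const data = await res.json();
  console.log(`Now playing: ${data.song} by ${data.artist}`);
}

nowPlaying().catch(console.error);
```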
Cool, thanks Harshil. Then I see a question for, sorry, for Ricardo: what is the shortest time you took to build a node? Ricardo, I'll have to unmute you. There you go. What is the shortest time you took to build a node?

Hard to say; like an hour, something like that.

Cool. What factors does it depend on, how long it takes?

To be honest, mostly the docs. I'd say 90 percent of the nodes we have just consume a REST API, so how good the docs are is a big factor. I've seen nodes that are very simple, I don't know, five or six parameters, but there's no documentation whatsoever, and that makes it really difficult. So yeah, I'd say the docs are the most important thing.

Makes sense. And then I see another question: what knowledge should one have to get started with creating nodes in n8n?

I'd say if you have some experience with any programming language and you know how a REST API works, you should be able to pull it off. In case you want some sort of guide, we have a tutorial on the website on how to create your first node, and if you follow that, you'll get a sense of what's needed to create one. If for some reason that doesn't answer your questions, you can always ask us on the community forum and we'll be happy to help.

Cool. Brian, I see a question coming in for you: can you use the Kafka node in n8n to insert data into QuestDB?

Oh, there we go, yeah, thank you. So we have support for Kafka using the JDBC connector, so yes. I haven't tried out the n8n Kafka node, but we have some users using Kafka for ingestion into QuestDB already.

Yeah, cool. And I have tried out the Kafka node, and you can ingest data into QuestDB using it, so that should be possible.

Then I'm definitely going to give it a shot.

Awesome. And I see one other question for Miguel, let me unmute him again. The question is: where exactly does Scrapoxy come in? Where do you integrate it, inside the HTTP Request node or somewhere else? I see there has already been a reply in the chat, but Miguel, do you want to elaborate a bit?

Yes, well, regarding Scrapoxy: you have to run it with Docker as well. I will share the URL, it's this one; there are several examples there. You can run it yourself, just like n8n, it's absolutely the same. It runs inside your Docker setup, and you configure it to launch several instances, for instance on Amazon, but other providers are supported too, like DigitalOcean, OVH, and so on. As I said, it's like a proxy with several machines behind it, and every machine has its own IP address, so this is great for scraping. Additionally, it rotates the machines automatically, meaning it powers machines on and off after some time, so all the IPs are rotated automatically. So this is nice.

And I saw another question from before, about dynamic pages. That's right: if you want to scrape dynamic pages, the workflow I showed earlier doesn't work for that; it only works for static web pages. So what do you need? I usually recommend running another Docker image, browserless.io, for this case, and you'll need something between n8n and the browserless server. The browserless server only understands the WebSocket (ws) protocol, and you'll have to interact with it using a library called, let me check the exact name, you'll work with JavaScript to interact with the browserless server. Okay, I think it's this: Puppeteer, there. So you will need an API or some minimal code to interact with Puppeteer from n8n, and then you'll be able to scrape dynamic websites that way. That's the easy way to do it, or you can find one of the several services that allow you to scrape dynamic web pages, which is the easiest solution, I think.
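For anyone who wants to try what Miguel describes, here is a minimal sketch. It assumes a browserless container running on its default port 3000 (for example via `docker run -p 3000:3000 browserless/chrome`); the target URL is a placeholder.

```typescript
// Minimal sketch: scraping a dynamic page through a browserless container.
// Assumes browserless is reachable at ws://localhost:3000; the target URL
// below is a placeholder.
import puppeteer from "puppeteer-core";

async function scrape(): Promise<void> {
  // Connect to the remote Chrome instance over the WebSocket (ws) protocol.
  const browser = await puppeteer.connect({
    browserWSEndpoint: "ws://localhost:3000",
  });
  const page = await browser.newPage();
  await page.goto("https://example.com", { waitUntil: "networkidle2" });
  // By this point the page's client-side JavaScript has run, so content
  // rendered dynamically is available to query.
  const title = await page.title();
  console.log(title);
  await browser.close();
}

scrape().catch(console.error);
```

In an n8n workflow, a Function node (or a small service next to n8n) could run code along these lines, which is the "minimal code between n8n and browserless" Miguel mentions.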
Cool, thanks a lot, Miguel, that's helpful. All right, since I see we have no other outstanding questions: thank you all very much for joining in. This was really amazing, and thanks for bearing with us through the technical issues; it was really lovely to see all of you. Please join the Discord server, and if you have any questions, ask them in the community forum. We'd love to hang out more, and we hope to see you at the icebreaker soon. So, until a few weeks from now: have a nice evening, have a nice weekend, bye!
