# Automating smart factories to monitor machines with n8n and CrateDB 🏭

## Metadata

- **Channel:** n8n
- **YouTube:** https://www.youtube.com/watch?v=a1nY54_fyu0
- **Date:** 16.03.2021
- **Duration:** 55:46
- **Views:** 1,813
- **Source:** https://ekstraktznaniy.ru/video/15868

## Description

Over the last few years, an increasing amount of businesses have adopted automation in some form. But have you ever wondered what automation looks like at the scale of multiple factories?

Automating the process of monitoring these machines and sending alerts can keep teams updated in real-time and prevent potential downtimes. However, smart factories have a diverse collection of systems and sensors. n8n helps you bring these together by connecting the disconnected.


In this webinar, we will show you how to automate the monitoring of machines' health for real-time insights. We will also build an automation workflow for industry 4.0 using n8n and CrateDB.

## Transcript

### Introduction [0:00]

Hey, good evening everybody, and welcome to the webinar "Automating smart factories to monitor machines with n8n and CrateDB". My name is Tanay Pant, and I'll be the moderator for the webinar today. I have with me Harshil and Georg, who will be the presenters for today. Harshil is a junior developer advocate at n8n; he's an author, an ambassador, and a Mozilla representative, and he enjoys experimenting with new tools and technologies, building cool stuff, and sharing his knowledge with the community. And we have Georg with us, who is the head of customer engineering at Crate.io. With a background in technology and process consulting, he is now driving pilot projects and transformations, working hands-on with customers and partners using one of the leading database technologies for machine data.

Before I pass over to Harshil and Georg, a few housekeeping things. Feel free to send any questions that you might have via the chat panel on the right, and we'll answer those questions at the end of the webinar. We also have a couple of polls available on the right side that we'll be asking throughout the presentation, so watch out for them. Also, if you have a problem with sound, if you can't see the presentation, or anything else, please let me know through the chat and we'll fix it straight away. And with that, I'd like to hand over to our presenters, Harshil and Georg. Thank you, Tanay, for

### Overview [1:22]

the introduction. Hello everyone, and welcome to the webinar. We have seen wide adoption of automation in businesses over the last few years, and in this webinar we are going to look at what automation looks like at the scale of multiple factories. We are going to build a workflow that will monitor the health of machines. But I wonder, Georg, how did people manage to monitor the health of their machines back in the day?

Sure. If you look at factories and what has been done in the industry over the last decades, we have certainly seen monitoring solutions in place to monitor the health of machines. There have also been automation technologies used within the last decade, basically building systems that are not really connected to any other systems but just work on their own. Right now we are experiencing some sort of change in that regard, in that we see more and more connections between these different players.

Awesome. So, as you mentioned, we had different systems for these machines, and they were isolated in themselves, so it was difficult to connect different services together to build such automation workflows. We'll learn how we can use n8n to connect these services together and build our automation workflow for a smart factory. Now, talking about smart factories, Georg, can you shed some light on what

### What are smart factories [3:00]

smart factories are? Yeah, sure. Again, for decades people have basically tried to automate things in the industry to produce more efficiently and more effectively. Over the last decades there has been the rise of using PLCs to automate all kinds of stuff; you have these robots in factory lines to be more efficient at certain tasks. But all these systems basically were not interacting with each other, or only in a very limited way. Take, for example, a robot: in the past, the robot was programmed to do one thing, and very efficiently. In order to do more complex things and work on different kinds of products, you could either reprogram this robot, or you really integrate other systems, and I think that's where the smart factory comes in. It is basically the integration of various kinds of technologies. Today your robot is not doing only one thing; you're really connecting, say, a vision system to your robot that talks over TCP and IT technologies that weren't there in the past.

That's interesting. So we are connecting manufacturing operations with IT operations, right?

Exactly. You also see it in this example: you might have a factory line producing goods, which then go on a truck to the warehouse. In the past, all the systems were separated and a user had to manage them. Nowadays you can have an RFID chip placed in the product, and the product knows from beginning to end where it is and what happened to it. If the product passes an RFID gate on its way to the truck, the warehouse management system might get triggered automatically, and these technologies just enable that. One of them we're going to see in practice today.

Awesome, thanks Georg for the explanation. Now that we have a basic idea of what smart factories are, let's not waste more time and get started with the tools we're going to use today. To build our automation workflow we will use — you guessed it right — n8n. n8n is a fair-code licensed tool that helps you automate tasks, sync data between various sources, and react to events, all via a visual workflow editor. To use n8n, you can either host it on your own server or sign up for n8n.cloud to quickly get started.

To learn more about n8n, you first need to understand its basic components. There are two key components in n8n. The first is nodes: nodes are the building blocks of a workflow, and currently there are 260+ nodes that you can use for your automation. For example, you can see here that we have a node for PagerDuty, which helps us connect to our PagerDuty account. We also have a node for CrateDB, which helps us connect our n8n instance to a CrateDB database. We can connect these nodes together and build our automation workflow. The other key component in n8n is the workflow: a workflow is a canvas on which you can place and connect these nodes together. A workflow can be started manually, or it can be triggered by a trigger node, and a workflow ends when all the active and connected nodes have processed the data.

So, now that we are going to build a workflow that will process the data, create an alert, and ingest the data into a database — what steps do you think we should get started with first?

I mean, the first thing for any workflow in a factory is basically starting with connecting the data, right? You somehow have to gather the data from the machine before you can work on it.

Okay, awesome, so let's get started. I'll switch to my n8n canvas.

### Creating a workflow [7:45]

So this is the n8n editor UI. By default you will see a Start node; the Start node cannot be deleted, and you can start building your workflow from it. All the workflows that you build using only the Start node have to be executed manually. Since I don't have access to a real smart factory at the moment, I'll create a workflow that will generate some data for us. Assuming that the sensors would generate this data every second and give us all the required information, I will use an Interval node; this node will execute the workflow every second.

Now, we will be getting a lot of data from our sensors. Again, Georg, can you help me understand what kind of data we usually get from sensors?

Yeah, typically in an industrial environment we might see something like a temperature, because it's always a good indication of the health of a system. If the temperature is too high, it probably means that there's something wrong with the machine.

Okay, so let's get started. We will be generating a few values for our use case. To identify our machine separately we will need the machine name; we'll also get the machine uptime, and we'll set the temperature as well as the timestamp. To get this information I will use the Set node.

Now, a quick question for all of you: in which language can you write your code snippets in n8n? I've set up a poll, so just let me know. Awesome, I see a lot of folks are answering the question — yes, we can use JavaScript to write custom code in n8n, and this is what we are going to do. Since we don't have the temperature and the uptime, we will use some JavaScript functions to generate these values.

In the Set node you can set any kind of value you want: you can set a boolean (true or false), you can set numbers, and you can set strings. For this workflow I'm going to set all the values as strings. The first thing I want is the name of the machine; this will help us differentiate our machines. The next is the temperature, and we will use a JavaScript function here to generate this value. Let me briefly explain what it does: Math.random() is a built-in JavaScript function which generates a number between 0 and 1. We multiply it by 100 so that we get a number between 0 and 100, and then we convert that number into an integer using the Math.floor() function. We also need the machine uptime, so I will use the same JavaScript code snippet to generate it. Lastly, we need the timestamp, which will help us understand at what particular time this value was generated. If you have access to real sensors, you don't need to build this workflow; you just have to receive the data from the sensors, and I will show you next how you can do that. Now, if I click on "Execute Node", you will see it generates all these values for us.

The next thing is, now that we have these values, we want to append them to a queue. I am going to use ActiveMQ for this, and for that I will be using the AMQP Sender node. I have already set up my credentials for the node. If you are using a service in n8n that requires credentials and you are not sure how to configure them, you can always open the documentation, follow the steps, and get started with creating your own credentials and connecting n8n.

This is also what you would typically see today in a smart factory: some sort of broker, be it an AMQP broker or an MQTT broker — it really depends on the use case or what you prefer — and n8n basically integrates with all of them. It's really easy to get the data into it, right?

Yeah, exactly. So now you see I got a message here, which means my data got added to the AMQP queue. But until now I was running each of these nodes manually, so the next thing I want to do is save this workflow so that it automatically runs every second. I'm going to call it "simulate data workflow", and I'm going to activate it so that it automatically generates these values for me. Awesome, so now the workflow is activated and we are getting data generated every second. Now that we have data generated every second, what kind of database should we use for such a use case?
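As a rough sketch, the values the Set node generates each second could look like this in plain JavaScript (the field names and the `simulateReading` helper are illustrative, not taken verbatim from the webinar):

```javascript
// Simulate one sensor reading, as described above:
// Math.random() yields a float in [0, 1); multiplying by 100 and
// applying Math.floor() gives an integer between 0 and 99.
function simulateReading(machineName) {
  return {
    machine_name: machineName,
    temperature: Math.floor(Math.random() * 100), // simulated °C
    uptime: Math.floor(Math.random() * 100),      // simulated uptime
    timestamp: new Date().toISOString(),          // when it was generated
  };
}
```

An Interval node firing every second would then produce one such object per tick and hand it to the AMQP Sender node.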

### Choosing a database [13:40]

Good question. Lots of people might start out with a typical SQL server or the database they know and have used before, but in the end, especially for larger use cases, you really want to look into a database that is built for time series and for these large amounts of time-series data. One of the solutions is definitely CrateDB, a solution that Crate.io offers and that we are really proud of. What makes it different from other databases is that you can still use the SQL that you know: you can use the PostgreSQL wire protocol to connect to CrateDB. But in the back end, CrateDB is built to be fully distributed, which means that in these use cases where we have machine data — which might start at just one data point per second but might go up to millions of data points per second — it is really necessary to have a database that can scale with your use case, and that's what we are aiming for with CrateDB. On the one side you can ingest huge amounts of data, and on the other side you also want to process this data, because just storing data is basically easy, but you really want to query it fast. This is possible through CrateDB's architecture, which is fully distributed in every sense, from the insert to the indexing to the query.

### Building a workflow [15:17]

query. Awesome. Okay, so this is what we are going to do now: we are going to build a workflow which will get this data from our queue and then ingest it into CrateDB. Let me open a new empty canvas for this.

Now that we have all this data coming into a queue in our ActiveMQ, what I'm going to do is use a trigger node — specifically the AMQP Trigger node — which will trigger our workflow whenever new data gets added to our queue. Again, I'm using the same AMQP credentials, and I'm using the same queue. The first thing that you have to do whenever using a trigger node is to save your workflow; this registers the trigger handler with the service. When I click on "Execute Node", this will give me the data from the queue — you can see here that we got some data. Another important thing we have to do is go to the options, find "JSON Parse Body", and toggle it to true. This will convert the string that we receive from the queue into a JSON object so that we can use it later on in our workflow.

This workflow will do a couple of things: it will convert our temperature from Celsius to Fahrenheit and ingest all the information from the sensors into CrateDB. It will also check whether the temperature is greater than 50 degrees, and if it is, it will create an alert using PagerDuty and also send the incident information to CrateDB. We'll break it down and work on the first part, which is to convert the temperature from Celsius to Fahrenheit and store the information in CrateDB.

To convert this temperature I am going to use a Function node. The Function node is another key component of n8n, which lets us write a custom JavaScript code snippet. I'm going to simply paste the code that I already wrote earlier, and let me explain what it does: this particular code gets the temperature from the previous node, our AMQP Trigger node; we multiply it by 1.8 and then add 32 to it — the standard conversion calculation — and then we append the temperature in Fahrenheit to our items. Now, if I click on "Execute Node", this will append the temperature in Fahrenheit, and we can see the data.

The next thing I want to do is set this information so that I can add it to CrateDB, so again I'm going to use the Set node. The first value we want to set is the machine name. We are referencing these values from the previous node, so I'm going to select "Add Expression", and under the variable selector I'm going to select Current Node > Input Data > JSON, and in the body I have the machine name. I'm going to follow the same steps for the other values as well.

While you do that, I might ask another question to the audience, and it's going to be about how to connect to CrateDB.

Sure — and only if they choose the right answer can we connect to CrateDB right now, right? Because we are exactly using that technology from n8n to connect to CrateDB. I got the temperature in Fahrenheit as well, and lastly we need the timestamp. Awesome, so now we have all the values that we need to append to our database. If I execute this node, you will see that we get some more data: we got the data from the previous node as well as the data that we set in this particular node. But we only want the information that we are setting in this node, and to do that you can toggle "Keep Only Set" to true; this keeps only the information set in this particular node. So if I now click on "Execute Node", you'll only see the data that has been set in this node. Here we are — we now have all the data that we want to ingest into CrateDB. Before moving on to ingesting the data into CrateDB, we first need to create a table in
CrateDB.
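The Function node's conversion described above can be sketched like this (the item shape with a `json` property follows n8n's convention; the exact field names are assumptions):

```javascript
// °F = °C × 1.8 + 32
function toFahrenheit(celsius) {
  return celsius * 1.8 + 32;
}

// n8n-style Function node body: append a Fahrenheit value
// to every incoming item from the AMQP Trigger node.
function addFahrenheit(items) {
  for (const item of items) {
    item.json.temperature_fahrenheit = toFahrenheit(item.json.temperature_celsius);
  }
  return items;
}
```

In an actual Function node, the body would end with `return items;` so the enriched items flow on to the next node.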

### Creating a table [21:05]

CrateDB? Well, there are a few options. As I said, you can interact via the PostgreSQL protocol, and one way to do it is to use the Admin UI, the built-in UI that comes with every CrateDB node. I've seen you already have some tables set up there, right?

Yeah, I have already set up some tables, but let me quickly show what the query looks like. I already have a table named machine_data, so I'm just going to call this one machine_data_1, and it has all these values that we need.

Right — you can just use standard SQL to create a table, as you would know it from your PostgreSQL server; you can do that in CrateDB as well.

Awesome, so we now have a machine_data_1 table as well. Cool. Now I'm going to go ahead and ingest this data into our machine data table. I am using CrateDB Cloud, so I'm going to select my cloud credentials. Since we want to insert the information, the operation I have selected is Insert, the schema is doc, and the table name is machine_data. In the columns field I can specify all the columns that I want to append the data to; I'm just going to expand this so you can all see the information that we are passing here. We are passing the temperature in Fahrenheit, the temperature in Celsius, the machine name, the machine uptime, as well as the timestamp. Now I'll click on the "Execute Node" button, and this will execute the node and append the data to CrateDB.

So we completed the first part of this workflow, which was to get the data from the sensors, convert the temperature from Celsius to Fahrenheit, and ingest the data into CrateDB. Now we'll move on to the next part of the workflow, which is to check whenever there is an incident, report that incident through PagerDuty, and also ingest the incident data into CrateDB. Sounds good.

Okay, so for this, we are already getting some information from the AMQP node, and we want to check the temperature. For that you can use the If node, which is a conditional logic node. What I mean by that is that you set your conditions in the If node, and based on those conditions this node will return either true or false. Since our condition involves a number, I'm going to select Number. Again, I'm getting this value — the temperature in Celsius — from the AMQP Trigger node, and I want it to be larger than or equal to 50. If I execute this node, we won't get any output on the true branch, since the condition is false here; if I go to the false output, you can see the output there. Let us execute this a couple of times so that we get a temperature of at least 50. Okay, so we got 30 degrees... awesome, now we have a temperature of 68 degrees, so if I execute this node, we get output on the true branch.

Now here is the thing: if the temperature is greater than 50, we want to create an alert, so I am going to connect the true branch of the node to PagerDuty. Again, I have already configured my credentials for the node, but you can create your own, and if you're not sure how to create these credentials, you can hop over to our documentation, follow the steps mentioned there, create your credentials, and connect them to n8n. I want to create an incident, and the title will contain both static and dynamic information, so I'm going to add an expression: I'm going to call this "Incident with", and then I'm going to get the name from the previous node. What this does is: let's say we have another machine which had an incident — a temperature greater than 50 degrees — then this value, the name of the machine, will be dynamic. You don't have to worry about creating all these values yourself; you just reference this value and n8n creates the whole title for you. For the service I will
select "Berlin Factory", and the email is the one that I am using with this particular account. Once I execute this node, it will create an alert on PagerDuty, and it will also send me an email that there was an incident with one of these machines.
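The machine data table behind this demo could be created with standard SQL along these lines (column names and types are assumptions based on the values set in the workflow, not the exact statement used in the webinar):

```sql
CREATE TABLE doc.machine_data (
  machine_name TEXT,
  temperature_celsius DOUBLE PRECISION,
  temperature_fahrenheit DOUBLE PRECISION,
  uptime BIGINT,
  "timestamp" TIMESTAMP WITH TIME ZONE
);
```

The CrateDB node's Insert operation then issues the equivalent of an `INSERT INTO doc.machine_data (...) VALUES (...)` for each incoming item.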

### Creating an incident [27:00]

So it's really easy to get started with this whole smart factory idea, and you can quite easily build it within, let's say, half an hour, to have your alerting system ready, right?

Exactly, yes. Now that we have this data coming in and this incident getting created, let's go ahead and add it to CrateDB so that we can use it for further analysis. For that, we are just going to follow the steps that we followed earlier, but this time we will set some different values. I'm going to use the Set node, and I'm again going to toggle "Keep Only Set". This time I'm again going to select only string values. We want the incident ID, and you can get this from the previous node, the PagerDuty node. We also want the URL of the incident: let's say we come back to the incident later — we were doing some kind of analysis and want to check what the incident report looked like — then we can just go to this link and check the whole report for the incident. The next is the incident timestamp, which will be the "created at" value. And that's it; these are all the values that we want to set in this node. Now I'm going to execute the node, and it will give me all the results for this node.

Awesome, so now we have set the information. The next step is to add another CrateDB node. I am using the same credentials that I am using for the other CrateDB node. For this particular CrateDB node I'm going to use another table, incident_data; again, I created this table just before the webinar, so you can see I already have a table named incident_data. I'm going to specify all the columns that I want to add the data to, and now I'm going to execute this node, which will append all this information to CrateDB.

We are now almost done with this workflow, but there are a couple of things that I would like to show you. The
first thing is renaming the nodes. Let's say we come back to our workflow after a couple of days and we want to understand what is happening here. I see, okay, there are two CrateDB nodes and both of them are using the Insert operation, but I'm not sure which table they are inserting the data into. It's easy to just open the node and check, or better, you can simply rename the node — say, to "Machine Data". Now when I look at my workflow I can see, okay, this one is ingesting the data into machine data, and similarly I can rename the other one. You can rename all these nodes to give yourself an easy visual cue. Another thing that you can do is use the No Operation ("do nothing") node; as the name suggests, this node does nothing, but it helps you get a visual cue of what is happening in your workflow. Now that we have our workflow ready, I am going to save this workflow and activate it. Yes — activate and save.
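The If node's threshold check and the PagerDuty title expression above boil down to logic like this (the 50 °C threshold comes from the workflow; the exact title wording is an assumption):

```javascript
// If node equivalent: route to the "true" branch (alerting)
// when the temperature reaches 50 °C.
function needsAlert(temperatureCelsius) {
  return temperatureCelsius >= 50;
}

// PagerDuty title expression equivalent: static text combined
// with the machine name taken from the previous node's output.
function incidentTitle(machineName) {
  return `Incident with ${machineName}`;
}
```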

### Polling [31:33]

So while he comes back, maybe we can do another poll. n8n has around 260 nodes right now, with many different services included, and my question is: how can you connect to services that don't have a node in n8n yet? The polls should appear on your screens now. Okay, we've got the right answer, so let me give you a bit of background on that: n8n can essentially connect to any service that has a REST API via the HTTP Request node, so we can use it for any service that we don't have a node for yet, and you can handle authentication as well.

So, Harshil, shall we do another poll? Sure, but maybe before we do that — what is your favorite node in n8n?

I have a couple of favorite ones. I must say CrateDB is one of my favorites — I wrote the node for CrateDB — and I really like the Edit Image node. We have release announcements going out every week — we release quite often, with new nodes added — and we had a graphic that in the earlier days we created manually. It was a beautiful graphic created by one of our colleagues, with nice isometric images, but over time it created an issue: we had to create a new graphic every week, which is not a very efficient thing to do. So in the beginning we used something like Bannerbear, where you can create templates and add logos; it did the job, but it wasn't as exquisite as the handcrafted graphic. Then we added some new functionality and learned how to do a 2D-to-isometric image conversion with the Edit Image node, and that allowed us to create these release graphics. The release graphics that you see on LinkedIn and Twitter are now all generated with a workflow.

Sounds great — there's more technology in the release graphics than in some other products, maybe. Yeah. Maybe let's do another quick poll: I talked about CrateDB and its uniqueness, and we have a question for that as well — what makes CrateDB unique as a time-series database? Perfect, we already have answers flowing in. Yeah, I think that's definitely one of the unique features of CrateDB: its scalability and its distributed nature. There are of course other databases using SQL, but not many — or none of them — really scale the way CrateDB scales for machine data.

What is the scale like?

### Scale [35:14]

Right now we are just using a workflow to ingest a little information, but in real life, with the cases that you have seen, what does the scale look like? How many sensors are there? I'm just curious about that.

Yeah, it's definitely a good question. It's one thing to say we can scale to any size, and the other thing is that we really do it. We see use cases, especially in this IIoT area, that start with, let's say, a few tens of gigabytes or 200 gigabytes of data ingested over time, but we also see customers using CrateDB for very large use cases, going up to hundreds of terabytes of data saved and processed in a year — meaning inserts of up to one million records per second, and really records per second, with each record containing multiple metrics. That's the way CrateDB is designed: to work with these small use cases as well as with very large ones. In this case, of course, it's only a little data, but most projects start out small and get bigger over time, and that's why you really should think about using a database that can scale with your use case.

Maybe just to show you around a little bit: this is the Admin UI, where you see the current status of your cluster. Harshil is using one of our cloud databases via the trial offering that we currently have — you can start and use CrateDB for 14 days in a free trial with a small cluster; I think it has something like six cores. And again, as I said before, CrateDB is a distributed database, meaning that even in this small trial we have not one database server running but three database servers, connected together to form a cluster. If you look at this sharding table, we see that the incident_data table and the machine_data table are also distributed across this cluster. And not only are they distributed, meaning
that parts of the data live on different servers, they are also replicated, so that if one of the servers ever has a problem, or you have to upgrade it, you can still access all the data and use your system as it was intended.

So, Harshil, how's it going? Are you ready again? Yep, let me just share my screen — I hope you all can now see it.

### Demo [38:28]

But let me quickly show you what the workflows look like at the end. This was the simulate workflow, where we were creating some random data using the Set node and a JavaScript code snippet and adding it to a queue in AMQP. Again, for your use case you don't have to build this workflow; since I don't have access to sensors that would give me this information, I am using this workflow to simulate the data.

The other workflow, the one we are more interested in, listens for the data coming from a queue over AMQP (here an ActiveMQ queue) and processes the data. For our use case, we are converting the temperature from Celsius to Fahrenheit, setting the information that we want to add to CrateDB, and then ingesting it into CrateDB. The next thing we are doing is checking the condition of whether the temperature is greater than 50 or not. If the condition is true, we create an alert on PagerDuty, and we also set the information for the incident that should be added to our CrateDB database. If the condition is false, we don't want to do anything with the workflow right now.

There might be cases where you want to connect to other services as well, and n8n does not have a node for them. I see we already had that poll, and a few of the responses were around the Merge node, and a lot were around the HTTP Request node. That's right: you can use the HTTP Request node to connect to any kind of service, as long as the service provides an API. So let me quickly show you the HTTP Request node. With the HTTP Request node you can make different kinds of HTTP requests: a GET or POST request, a DELETE request, basically every kind of HTTP request. Now, depending on your API, you might have a different authentication method: your API might use Basic Auth, it might use OAuth 2, it might use Header Auth. So you select the authentication method your API uses, create those credentials and set them up over here, then paste the URL of your API, set any parameters that you want, and you are good to go. You can now connect n8n to your service and add it to this particular workflow. Let's say I want to make a request to a particular endpoint after the data gets ingested: I can simply connect this node to the HTTP Request node, set up the HTTP Request node, and execute it. This is how we connect different services using the HTTP Request node.

Okay, now the other scenario is: what if we want to trigger a workflow based on activities that are happening in different services? For that you can use a trigger node, called the Webhook node, in your workflows. This will trigger a workflow whenever an activity happens in an external service. Let's say, once you have created an incident on PagerDuty, you want another workflow that would send some messages to your messaging platform; it can be Slack, it can be Mattermost. You can set up the Webhook node to do that. You simply have to select the HTTP method; while building the workflow you use the test URL, but once your workflow is ready you can switch over to the production URL so that your workflow runs automatically. Then you can finish setting up your workflow and send those messages to your messaging platform. So this was all that I wanted to show around n8n, and yeah, let's see what's next.
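The per-reading logic of the demo workflow can be sketched roughly like this. In n8n it would sit in a Function node operating on `items`; here it is written as a plain function so it runs anywhere. The field names (`sensorId`, `celsius`) are assumptions for illustration, not necessarily the exact ones used in the demo.

```javascript
// Rough sketch of the per-reading logic from the demo workflow:
// convert Celsius to Fahrenheit and flag readings above the alert threshold.
// In n8n this would be a Function node; the IF node then routes on `alert`.

const ALERT_THRESHOLD_C = 50; // the demo alerts when temperature > 50

function processReading(reading) {
  const fahrenheit = reading.celsius * 9 / 5 + 32;
  return {
    sensorId: reading.sensorId,
    celsius: reading.celsius,
    fahrenheit: Math.round(fahrenheit * 10) / 10,
    alert: reading.celsius > ALERT_THRESHOLD_C, // true → create a PagerDuty incident
  };
}

// Simulated sensor payload, similar to what the simulate workflow pushes to AMQP:
const result = processReading({ sensorId: 'machine-1', celsius: 62 });
console.log(result); // alert is true, fahrenheit is 143.6
```

In the real workflow the simulated payload arrives via the AMQP trigger, and both branches of the IF node write their rows into CrateDB.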

### What's next [43:00]

So now that we have a workflow which creates an incident and ingests the data into CrateDB, what all can we do with CrateDB? Georg, you are on mute. Yeah, to sum up the last hour: we have seen that you basically already have 260 nodes in n8n, so you can connect all these different systems, and I think your imagination is basically the limit of what you can do. It is a great combination of workflow automation technology and data technology. I mean, we do see all kinds of use cases that would be relevant, like predictive analytics, where you really trigger machine-learning workflows from your flow, up to cybersecurity, with analyzing flows and then maybe writing anomalies back to CrateDB.

But given the massive amounts of data and what's happening right now all over the world, what would be your preferred use case? What would you like to do as a next project? That's a really good question. Since we already have someone mentioning a Bitcoin ticker, I want to build a workflow that would get some financial data from the stock market as well as from the cryptocurrency market and do some kind of analysis on it, so that once I have some good data over there, I can make the correct investment and get rich. Yeah, I mean, you also already showed the connections: you could use your HTTP Request node to get the latest ticker, feed it into CrateDB, run some analysis there on this data, and maybe then make your buy or sell decisions on top of that.

Talking about queries in CrateDB, do you want to show some queries, how people can run them, and talk about that some more? Sure. In that regard, as I said before, you can use Postgres clients, and if I share my screen for a second, we can switch back to the database.
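The ticker idea could be sketched like this: fetch prices with the HTTP Request node, store them in CrateDB, then compute a signal. The moving-average rule below is a purely hypothetical illustration, not something shown in the webinar (and certainly not investment advice).

```javascript
// Hypothetical sketch of the "Bitcoin ticker" idea: given a series of prices
// (e.g. fetched via the HTTP Request node and stored in CrateDB), compute a
// naive moving-average crossover signal.

function movingAverage(prices, window) {
  const slice = prices.slice(-window);
  return slice.reduce((sum, p) => sum + p, 0) / slice.length;
}

function signal(prices) {
  const shortMA = movingAverage(prices, 3); // short-term trend
  const longMA = movingAverage(prices, 6);  // long-term trend
  if (shortMA > longMA) return 'buy';
  if (shortMA < longMA) return 'sell';
  return 'hold';
}

const prices = [100, 101, 99, 102, 104, 107]; // e.g. hourly closes from an API
console.log(signal(prices)); // → 'buy' (short-term average above long-term)
```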

### Running queries [45:51]

Here in the CrateDB Admin UI, I can run some queries directly, for example on the incident data. I can select the table, click "Query table", and I already get the statement ready, but I can put in whatever I like. Let's do that, and we see we get all this information back. I can use standard SQL here: I can use my WHERE conditions and say, okay, I want only the incidents within a certain time range, and run the query right there. Oh, I didn't find any; maybe I have to adjust it and run that again. Okay, there I see it, so I can do my filtering on that. I can basically use standard SQL, so I can also use joins, GROUP BYs, whatever I know from SQL, up to window functions, in CrateDB, and really put my SQL knowledge into practice and into action, also on very large data sets.

Okay, that's great, so I don't have to learn any other language; my knowledge of SQL comes into play over here, and I can use that knowledge to run all sorts of queries. That's interesting, awesome. So, one last question for you, Georg: where should people reach out to you, and what should be the next step if they want to get started with CrateDB? Well, I think the next step is to go to our website, crate.io, or directly go to the console and start your free 14-day trial. And if you have any questions, we have a community forum; reach out to us there, or reach out to me at georg@crate.io.

Awesome. And if you want to try out n8n, you can simply go to n8n.io. You can install n8n on your own servers, you can run it locally using npm, or you can sign up for n8n.cloud to quickly get started using our hosted service. If you have any questions around n8n, feel free to reach out to us on our community forum; you can already see the link on your screen. So that's it, thank you all for joining in, and we are open for questions. I already see a question by Pedro.
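A time-range filter like the one shown in the Admin UI can also be sent from code as a parameterized statement. Since CrateDB speaks the Postgres wire protocol, any Postgres client (for example node-postgres) could execute it; the table and column names below (`incidents`, `ts`, `temperature`, `sensor_id`) are assumptions, not the exact schema from the demo.

```javascript
// Build a parameterized time-range query for CrateDB, in the shape that
// Postgres clients such as node-postgres accept ({ text, values }).
// Table and column names are illustrative assumptions.

function incidentsInRange(from, to, minTemperature) {
  return {
    text:
      'SELECT ts, sensor_id, temperature ' +
      'FROM incidents ' +
      'WHERE ts >= $1 AND ts < $2 AND temperature > $3 ' +
      'ORDER BY ts DESC',
    values: [from, to, minTemperature],
  };
}

const q = incidentsInRange('2021-03-01', '2021-03-16', 50);
console.log(q.text);
// With node-postgres: const { rows } = await client.query(q);
```

Joins, GROUP BY, and window functions can be sent the same way, since it is all standard SQL from the client's point of view.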

### Questions [48:40]

So the question is how to link tags in n8n and CrateDB from a PLC. I answered it from a CrateDB standpoint, but I think I can give the answer for n8n as well. Since we are basically both quite agnostic in terms of what the data can look like, you are basically free to take the tags from your PLC and also insert them into CrateDB. We have seen this in industrial use cases, where we really use the tag name as a property in CrateDB, and you can interact with them. Yeah, to add on to that, n8n also has Docker images that you can run on a Raspberry Pi, and a lot of folks from our community are running n8n on Raspberry Pis as well as on Arduinos. So if you run into any kind of issue when you are trying out n8n on your Raspberry Pi or Arduino, you can reach out to us on our community forum; a lot of folks have already got answers over there, so you can maybe just browse and get your answer as well. Cool, thank you both.

So I see a couple of other questions, if you have some time. I see another question around how n8n differs from n8n.cloud. That's a really good question. n8n, as I mentioned, is a fair-code licensed tool, so you can host it on your own server, but the only caveat with this is that you have to manage the servers, right? You have to update n8n whenever a new version is released, and you have to keep an eye on whether it is functioning correctly or not. n8n.cloud, on the other hand, is our own hosted service, so you don't have to worry about managing the servers. You just create an account and you can get started with building your workflows; you don't have to worry about upgrading to newer versions or about downtime either.

Cool. The next question I see is: how do I request new integrations if I'm not a dev myself? I'm assuming it's for n8n. Okay, yeah, I love the n8n community, and not just because I'm working at n8n: if you go to the community forum, you will see a lot of feature requests. So you can just go to the community forum, create a feature request, and share your particular use case and what exactly you want, so that we understand what you need, and then we will create that for you.

Cool, then I have a question around CrateDB which asks: is CrateDB being used in factories currently? Absolutely. Factories are definitely a use case that we see, and not only that: we also really see installations in the factory, next to our cloud deployments. So we have customers using CrateDB right next to the PLC, integrated in an edge environment, as well as a cloud solution for aggregating the data from these different edge devices.

Cool, then I have another one here which reads: which messaging platforms can n8n integrate with? Ah, okay, there is kind of a long list; let me try to remember them. There is Mattermost, Slack, then there is Matrix. If you are using Discord, you can connect n8n through the Discord webhook and build an automation around that. Then there is Telegram; a lot of our users are using Telegram, and they have built Telegram bots which help them automate their mundane tasks. And if you use SMS services a lot and want to send SMS to your clients or customers, you can use Vonage or Twilio for that. And again, you can use the HTTP Request node to connect to any of the services that have a REST API.

Cool. And just as a reminder, if you have any more questions, please feel free to enter them in the chat section on the right. Okay, I see another one popping up; it's for CrateDB: how does dynamic schema work in CrateDB? Yeah, that's a feature I didn't mention yet. There are dynamic schemas, and what does that mean? It means that you have your basic table schema (Harshal previously showed you how to create a table in CrateDB), but you don't have to have it fixed. You can mark the table as a dynamic table, which means that if Harshal then decides to insert another data point, he can just add that data point and the table automatically adjusts to it. So if you not only want to ingest the temperature but maybe also a flow value, you can just define it in n8n, and on insert into CrateDB the index is generated for this flow value as well. That's something that I didn't know; thank you for letting me know.

Awesome, and I see a last one here around how people are using n8n. Okay, so there have been a lot of different use cases for n8n: people are using it in marketing, they are using it in sales, there are people using it in DevOps, and there are a lot of other ways as well; people are even using n8n on ships, which is quite extraordinary, right? Using n8n on ships is something that I never thought of. We collect such inspiring stories from the community and post them on our blog, so if you have a unique use case of how you are using n8n, do reach out to us; we would love to share your story.

All right, so with that I'd like to thank you all for joining us today. In case we haven't answered your question, we will get back to you by email in the next few days. Thank you, Georg and Harshal, for presenting, and everybody, thank you so much for joining. Enjoy the rest of your day. Bye.
