# Scaling Secure AI Automation with n8n Enterprise - n8n Builders Berlin

## Metadata

- **Channel:** n8n
- **YouTube:** https://www.youtube.com/watch?v=V5wydNHTnVI
- **Date:** 07.11.2025
- **Duration:** 26:15
- **Views:** 1,453
- **Source:** https://ekstraktznaniy.ru/video/15204

## Description

At n8n Builders Berlin, Erwin Buckers, Co-Founder and CEO of Chainels, a Netherlands-based startup, explained how his team uses n8n Enterprise to bring AI automation into their platform connecting landlords and tenants across Europe.

He shared how Chainels:

→ Self-hosts n8n Enterprise on AWS (Ireland) with Kubernetes for compliance and control
→ Supports clients running up to 1 million workflow executions per month
→ Builds custom nodes from their REST API for cleaner, reusable flows
→ Uses evals to test AI models like OpenAI, Mistral, and Gemini
→ Spends 10% of dev time reducing tech debt and optimizing workflows

01:00 - Intro
03:00 - The Self-Driving Car / Lumen AI
05:22 - Compliance
10:05 - Infrastructure
20:15 - Questions

💡 A concise, real-world look at scaling secure AI automation with n8n Enterprise.

👤  Follow Erwin: https://www.linkedin.com/in/erwin-buckers-35699028/
🔗 Learn more about Chainels: https://getchainels.com/en/whychainels

#n8n #n8nBuildersBerlin #Automation #A

## Transcript

### Intro [1:00]

during the presentation as well. But my name is Erwin Buckers. I'm co-founder and CEO of Chainels, a B2B SaaS company from the Netherlands. I have a background in computer science from the Delft University of Technology, and the company is now based in Rotterdam, the Netherlands. So now you have a little bit of context. There are a lot of topics to cover today, so I will keep it short so that we hopefully have some discussion time; otherwise, let's just continue the talk after my presentation. I will first explain a little the concept of the self-driving car; that's our metaphor for what we're building. We're not actually building a self-driving car, but I have a video of one in my presentation. Then I will talk a bit about compliance, because we are here in the enterprise track, and a bit about infrastructure. That will be the tough topic, because I have really good guys back in Rotterdam, but they are not here. I will tell you a bit, and if you have really complex questions, I'm going to forward them and we can continue somewhere else. Then a bit about developer experience, because I think that's extremely important as well, and it touches on the topic from the previous presentation. Last but not least: I founded the company together with my co-founder Ser. We raised three million two years ago, and we now have a team of 50 people in Rotterdam. Thanks. So, a bit more context on what we're doing. We are indeed a tenants app, as we say, or a resident operating system. We work for all types of buildings across Europe, for big real estate companies, because they want to modernize their service towards their end users: the people living and working in the building. So that looks a little bit like this. We started building it 10 years ago, straight from university, and we have now been using n8n for, I think, close to one and a half or two years to do a few things much faster in our platform. I will guide you through it.
So first of all, before I talk a lot about n8n, I will explain a little what I mean by the

### The Self-Driving Car / Lumen AI [3:00]

self-driving car. As you may know, the self-driving car actually already exists in San Francisco; I was there this summer. It's amazing, but it is also quite crazy when you're in one, because there is no driver and it's really driving itself. And that, I think, is what we are promising with our app as well. People are now controlling our app and managing things, but we know that we can do the majority of the admin work with AI. So if we keep the comparison with the self-driving car, there is actually a sort of definition of when a car is self-driving, and it is level-based. Step by step it gets smarter and smarter, with lane assist, cruise control, and so on. I think that's also how we envision enriching our platform with AI. That is what we, together with our branding team, rebranded to Lumen. The big companies all have cool names for their AI, and we built an AI layer called Lumen in our platform to make it smarter bit by bit, with n8n. We have been an Enterprise client for about a year now; before that, we were using the Community version. So please don't ask about the differences, because at the moment I'm only using Enterprise myself, but I can at least explain what I like about it, and I'm quite sure some of it may not be in the Community edition. What we do is make the app smarter: we enrich all the features in our platform with AI, thanks to n8n. We built chat assistants (I have one quick video of them) and we built agents. We can build all of that with n8n while having our own interface. In the previous presentation, I think there was a good example of that: how you build your own front end with n8n in the back end. Maybe a quick video of just one small feature in our platform.
There is an assistant: you ask it to write a message and schedule it for tomorrow, and via the MCP node of n8n it will find a way to call all the endpoints to do work that would otherwise be around 12 clicks. That's what we built and what we're experimenting with. We are currently in beta with this product, so about 10,000 users are using it, out of half a million users on the platform. But to ship this feature, it was all about compliance, actually, because our clients are corporates, and as explained

### Compliance [5:22]

earlier, compliance is extremely important there, and that's also where n8n Enterprise, or rather the self-hosted version, became extremely important for us. So maybe to show you a bit: Lumen is just the AI layer in the platform; my company is called Chainels. We have a trust center, we have subprocessors, and we have contracts with hundreds of clients across Europe. If we change something, we need to update these agreements. For us, n8n was amazing there, because we can run the self-hosted version in our AWS cluster in Ireland and we don't have to keep adding new subprocessors, which is quite a hassle, at least if you talk to my legal team. Another enterprise concern, and here is an example of a workflow in our test environment, is that if something goes wrong, you need to explain why it went wrong. Are there any logs? Can we trace it? There are amazing features in n8n for that as well. We even sync logs to our Elastic cluster so that we have all the logs in one system, we can trace back what is happening, we can guarantee quality, and if there are requests, we can answer them. Especially for our DevOps team, that is quite a big task. One fun thing to show here, and that's also why I made the screenshot like this, with this isolated-island Firebase node: with one of the latest updates of n8n you now have the Postgres data tables as well. That was amazing, because we sometimes had to take a side step to Firebase, and part of our client base didn't have Firebase as a subprocessor. So that was already nice, because now it's self-hosted, still on AWS in our case. Another thing that is a huge power if you build with AI: I think most of us will not know what the next step is or what the best model is.
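The log-syncing setup described above can be sketched roughly like this. It is a minimal illustration, not Chainels' actual pipeline: the `ExecutionRecord` fields and the `toBulkLines` helper are assumptions, and a real setup would read n8n's execution data and POST these lines to Elasticsearch's `_bulk` endpoint.

```typescript
// Hypothetical sketch: shape a workflow execution record into the two-line
// format Elasticsearch's bulk API expects, for traceability and auditing.
// Field names are illustrative, not n8n's actual execution schema.
interface ExecutionRecord {
  workflowId: string;
  status: "success" | "error";
  startedAt: string; // ISO timestamp
  durationMs: number;
}

function toBulkLines(index: string, exec: ExecutionRecord): string {
  return [
    JSON.stringify({ index: { _index: index } }), // action line
    JSON.stringify(exec),                         // document line
  ].join("\n");
}

console.log(toBulkLines("n8n-executions", {
  workflowId: "wf-123",
  status: "success",
  startedAt: "2025-11-07T02:00:00Z",
  durationMs: 850,
}));
```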
So we were already building; we were actually copy-pasting workflows, as you explained before, and then at some point this feature luckily got released, so that we can have multiple models in our workflows. We have, for example, clients from Europe, in France, that are obviously using Mistral a lot, but we also have some European assets of US-based companies that most of the time prefer OpenAI. So based on their preferences, we can offer the same AI functionality, just with a different model behind it. Of course, depending on the quality of the model there may be better accuracy and so on, but from a compliance point of view it's an interesting one, because most of the time the first question we ask is which LLM or which framework they already have, because that can speed up our process. So that all sounds super nice: oh, we can just roll out. Unfortunately, that's not the case. It still takes three to six months for us to deploy this functionality for a client. Why? Because you still need to go through the compliance steps. The good thing about this setup is that you have limited subprocessors, no LLM vendor lock-in, and you have traceability. On the other hand, it takes long. Why? Because you see that, in general, it's an unclear area: you see the legal teams on the other side figuring out which framework to apply, Europe is changing the rules, and the US versus Europe is an interesting one as well. So you still have some troubles there. And from a cybersecurity point of view, you really need to think about your prompts and how things can be injected in your application layer. So it brings some complexity, but it definitely speeds things up once you figure this out.
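The per-client model choice described above might look like the following sketch. Everything here is hypothetical: the `Client` shape, the defaulting rule, and the provider names are assumptions that only illustrate routing the same functionality to a different model per client preference.

```typescript
// Illustrative sketch: choose an LLM provider per client, as in the talk.
// Not Chainels' actual code; the defaulting rule is an assumption.
type Provider = "openai" | "mistral" | "gemini";

interface Client {
  name: string;
  region: "EU" | "US";
  preferredProvider?: Provider; // agreed during compliance onboarding
}

function pickProvider(client: Client): Provider {
  // An explicit preference always wins; otherwise default by region.
  if (client.preferredProvider) return client.preferredProvider;
  return client.region === "EU" ? "mistral" : "openai";
}

console.log(pickProvider({ name: "French retail fund", region: "EU" })); // mistral
console.log(pickProvider({ name: "US-based owner", region: "US" }));     // openai
```

The same workflow can then wire the chosen provider into the matching LLM node, which is what makes swapping models a configuration change rather than a rebuild.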
And I think one thing that is a shortcut: if you're a startup and you want to be innovative, call it a beta or a pilot. You can probably position it as a small-scale test first, before you sign a big deal. That was a fast way for us to launch the first functionality in our platform; then we go through the three-to-six-month trajectory. All right, let me go to infrastructure. If there are any questions on compliance, feel free to ask them; we have seen a lot of challenges there. Long story short, most of the corporates are figuring out the best way as well. So just keep the conversation going, do your homework, and then, I think, most people are extremely excited about AI capabilities. On infrastructure I have a few slides; they tap a bit into the

### Infrastructure [10:05]

previous talk about how you can structure it. We have a cluster in AWS with Kubernetes; here are some details. The most important thing is that we have a high-availability production setup with multiple instances. We have what we call production and staging, and we don't touch production; we don't do hot fixes there. So there are definitely some similarities, and we can easily scale it up. We can actually even spin up a new instance in 15 to 30 minutes if we want, based on all the scripts we have running. You see only two instances here, I think, but we have around five running in the company at the moment. If we zoom in a bit more on the cluster, this is roughly what production looks like. In our case it is also connected with Git. We have production, which is highly available with multiple instances; that's the main one. Then we have staging, which is a slightly simpler setup; that's more of a cost thing, and obviously there are fewer workflows and executions happening there. Then we have some other internal environments for the team. We actually started with n8n for just back-office automations, then we moved to AI, and we kept breaking the production environment with AI things that were running forever. So we also created our AI instance, because we thought that way at least the basics, the fundamentals, keep running. By now we have figured out how to avoid that, but I would definitely recommend running multiple instances, and as explained before, you can easily sync these things. There are definitely some challenges. What my team hates the most, I think, is copy-pasting credentials from demo to production. Of course, there are ways; we tried out a lot of things for sharing keys, but I think it will improve over time.
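The credentials pain point can be illustrated with a small diff helper. This is a hypothetical sketch: it assumes you can list the credential names present on each instance somehow, and it only shows the comparison step.

```typescript
// Sketch: given the credential names visible on staging and on production,
// report what still has to be recreated on production by hand.
function missingCredentials(staging: string[], production: string[]): string[] {
  const prod = new Set(production);
  return staging.filter((name) => !prod.has(name));
}

console.log(
  missingCredentials(
    ["Slack bot", "OpenAI key", "Elastic logs"],
    ["Slack bot"],
  ),
); // [ "OpenAI key", "Elastic logs" ]
```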
If there is somebody here with a great tip for me, please share it with me at the end, because this is definitely something that would make a lot of people back in the Netherlands happy. Maybe a few numbers that I got today: we actually doubled last week, but production is now running on two CPUs and 4 GB of memory. It depends a little bit on what we're doing, though. It's good to mention that a lot of the things we're doing are not webhook-based; a lot of things happen overnight as scheduled jobs that we can easily spread out over the day or the week. — What's your... sorry, what's your execution count per day? — That's a good question, because we actually have the insights now; we had insights disabled on n8n because it was not good for our performance, and we just turned it on a week ago. To give you an example: at least one of our clients does 1 million executions every month for one workflow. So I think we're talking about 10 million workflow executions a month or something at the moment, but it really depends, because we also connect a lot of legacy software, and then we do scheduled jobs, polling every minute, essentially creating our own webhooks on things. We're optimizing now, but let's put it this way: the million is the highest that we have. Our AI feature, at the moment, for the 10,000 users, is doing around a million executions a month in total as well.
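Spreading scheduled jobs over the night, as mentioned above, can be done by deriving a stable per-client slot. This is a sketch under assumed parameters (a window starting at 01:00, 240 minutes long); the hashing scheme and function name are illustrative, not how Chainels actually schedules.

```typescript
// Sketch: map a client id to a deterministic cron slot inside a night window,
// so overnight jobs don't all fire at the same minute.
function nightlySlot(clientId: string, windowStartHour = 1, windowMinutes = 240): string {
  let hash = 0;
  for (const ch of clientId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  const offset = hash % windowMinutes;      // 0 .. windowMinutes-1
  const hour = windowStartHour + Math.floor(offset / 60);
  const minute = offset % 60;
  return `${minute} ${hour} * * *`;         // cron line for a schedule trigger
}

console.log(nightlySlot("client-42")); // same client always gets the same slot
```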
So that's a bit of the size, but not all of the executions I'm showing here are on these instances; the AI is a bit separated from that. — Sorry? — Yes, so what our DevOps team did: when we started the AI project in January this year, we immediately created this same environment for the AI instances as well. These are our back-office automations, and our AI instances have a slightly different setup, but in essence it's the same thing. And, I think this is my next topic, we try to apply all the dev-experience practices there: rules for how you work. We try to treat it like coding, which also has to do with our background; most of us are, or were, developers. But maybe I can tell you a bit more in the next slides. Let me see the time. All right, that's good. Okay, so: developer experience. — Go ahead. — I'm curious about the value n8n gave you, because I feel like you could have built this in AWS yourself. Is it, for example, speed to production? Does it add tech debt? — Yeah, so maybe to repeat the question a bit: the last thing you mentioned was tech debt, and why n8n instead of building it yourself. It's a really good question. It's good to mention that we have a big development team that is working in, let's say, the old or normal way of coding: our iOS team, our Android team, our web team, and so on. So we have five teams, and two teams are doing n8n things in our company. Maybe good to explain: two years ago we were coming from Postman Enterprise in combination with some Zapier, and then we were looking at the compliance part and figured out that n8n could be a thing, before it was cool. So we thought, let's give the Community edition a try. That's how we started.
The reason why we're not building everything in our own back end is cost: it takes us more time to build, especially if we want to iterate on it. Maybe to compare (I'm not sure what your background is): if we look at normal cycles, with sprint meetings, weeklies, and planning work, our POs, our product owners, plan everything for the developers, with requirements and so on. We do the same for the n8n things. The only big difference is that we sometimes fix it immediately in the meeting: oh, let's connect it, let's just prototype a bit, because it's also visually really appealing to work in. We always had the idea: okay, at some point we will rebuild everything in our own back end. Until the moment we realized that if we can do 1 million executions and still don't have any problem, and we only have half a million users on our platform, and we can run that easily, why would we shift? Every time there is a new LLM node, we just plug it in; it takes maybe 5 to 15 minutes and we can already test it. So for now it's also about development speed. But the tech-debt part of your question is a really fair one, because I have no good answer to it, only that we have a big problem in the sense that we have hundreds of workflows where, when we open them, we think: this could be done with half the nodes now, six months later. Are we going to invest in that? Clients are not asking for it. I think it's the traditional loop of tech debt; the only problem is that the loop feels way faster. With the normal teams we're talking about code from three to five years ago; here we're talking about workflows from two months ago. So what we're doing about tech debt at the moment is fairly simple.
We spend 10% of our time resolving tech debt and optimizing flows. How we do it, maybe to bridge to this slide, is this: on the left side is our application; we have an API, and we connect that REST API with n8n. First of all, we have quite an extensive REST API. If you're all using n8n, then you know the HTTP node; that's probably your friend, to a certain extent, because with too many HTTP nodes your flow becomes unreadable; you can put sticky notes everywhere and so on. So what we started doing is turning at least our own endpoints into Chainels nodes; you see them on the right. We have multiple nodes, or let's say 145 operations, with all the documentation built in. That works really well. We generated our custom nodes from our API specs; it works really well, and it makes it visually very clear, when you look at a workflow, which part is ours and which part is third party. So I would really recommend, if you have developers on your team, to look at this. There are some really good libraries, and I believe the n8n team is also working on options to make it even easier. We even made our own nodes for third parties, simply because we liked it visually more and we thought it would save problems. One other benefit: on the left side here I do exactly the same as on the right side, but with the HTTP node the real things are not even in my viewport. That is a huge advantage, besides not having to switch to my documentation all the time; all the tips and tricks are automatically in the node. So from a developer's point of view, I would really recommend investing a bit of time in custom nodes. Then I'm already at my last slide, I think: the evals. They are new and quite good, or getting better, and with my developer background I love them, because I want to test things and I want to know how stable they are.
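Generating node options from an API spec, in the spirit of what's described above, might look like this simplified sketch. It walks an OpenAPI-style `paths` object and emits dropdown options; it is not the actual generator Chainels uses, nor n8n's node-building API.

```typescript
// Sketch: derive operation dropdown options for a custom node from a tiny
// OpenAPI-like spec. Real generators also handle parameters, bodies, auth.
type SpecPaths = Record<string, Record<string, { summary: string; operationId: string }>>;

interface NodeOption { name: string; value: string; description: string }

function toNodeOptions(paths: SpecPaths): NodeOption[] {
  const options: NodeOption[] = [];
  for (const [path, methods] of Object.entries(paths)) {
    for (const [method, op] of Object.entries(methods)) {
      options.push({
        name: op.summary,                  // label shown in the dropdown
        value: op.operationId,             // stable identifier for the operation
        description: `${method.toUpperCase()} ${path}`,
      });
    }
  }
  return options;
}

const options = toNodeOptions({
  "/messages": { post: { summary: "Send a message", operationId: "sendMessage" } },
  "/tenants/{id}": { get: { summary: "Get a tenant", operationId: "getTenant" } },
});
console.log(options.map((o) => o.description)); // [ "POST /messages", "GET /tenants/{id}" ]
```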
So what we do is build tests for all our flows, also to benchmark different models. I even had a discussion with the developers a few months ago: okay, but how are we going to test something where we don't really know what the outcome should be? Luckily, with the latest updates to evals there are a lot of opportunities there: you create some obviously false answers, some obviously true answers, and some things in the gray area, and then you test against those. It's really comparable to unit testing and so on, but I think it also really helps with debugging. So these are, let's say, the things that we mainly do. This is already my last slide. Maybe good to mention: we're using a lot of AI in our platform now, but we also don't know which model we want to use for what purpose. So we do a lot of workshops with our clients to figure out where the pain point is, and then we prototype it immediately in n8n. We sometimes even show how it is going to work, and I think that is the strength of a low-code platform like this. Thank

### Questions [20:15]

you. Are there any questions? — Yeah, I remember a slide where you showed that you are using multiple models for different cases, and from my experience, when you're using multiple models with the same prompt... — Yeah. — Okay, so I will repeat it again. You showed a slide with multiple models, and for some reason you need to run different ones. From my experience, if you use the same prompt when running multiple models, the quality of the outcome varies wildly. So my first question: did you notice the same thing? And second: did you fight it somehow, for example with a custom prompt repository or something that is adjusted for each model? — Yeah. I think you already gave a little bit of the answer, I would say, but we have a few models that we test. We do that with evals. We check the quality, and we do know which model is our preferred one, from two points of view: our costs, and how precise the answer is. But we measure it, and it also changes, because of course one of these providers releases a new model and we can tap into it. What we look at, in the spreadsheet that is created behind this with the evals, as you may know, is obviously the seconds it takes to answer, and how many credits or tokens are burned. And then we have, at the moment, more of an organizational-culture thing: we put sticky notes there saying which models not to use, so that you don't downscale to a mini or a nano version when you know it can't answer the question. — But you're still using a single prompt for multiple models? — Yeah. We are not... we try. Okay, there are a few examples where, for example, Gemini was not working, and then we just said: okay, we don't use Gemini for this node.
So if a client really wants to have Gemini, then we need to change our flow, but at the moment we can push that away, because in the end, if you use sub-workflows, you can break it up into nice components, like coding with functions. The good thing is that most of our agents and assistants consist of, let's say, a main workflow that executes four or five sub-workflows, and most of the time we only have to touch the parts where the LLMs are; everything else we keep exactly the same at the moment. But it is a fair question. I think once we run more data, we will definitely have our preferred picks. — Thanks so much. — We've got time for a few more questions. I'll just come to you first, and we'll come to the back afterwards. — Yes, thank you. Do you have any unit tests or end-to-end tests for your most important flows? — Yes. Yeah, good question. Our most important flow, because we are real estate software, is that overnight new people start living and working in a building, and the next day they automatically get an account. We have automated onboarding through n8n. That is, let's say, the biggest thing that can go wrong: that you get the wrong door keys and these types of things. So we definitely have a lot of tests there. First of all, our whole platform, our own API, is of course fully tested end to end. We know that, because the API is used by third parties, by our own app, and so on. So that one is, I would say, fairly stable. Then we have the evals on the n8n side. Then we have monitoring via Slack; I think there was a question about that in the previous session. We have some Slack channels with the monitoring as well. And every time we execute something really important, we put the human in the loop again, or at least inform the human, the client.
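The eval approach described earlier (clearly-true cases, clearly-false cases, and a gray area routed to a human) can be sketched as a tiny grading harness. All names here are illustrative, just to make the idea concrete; this is not how n8n's evaluations are implemented.

```typescript
// Sketch: grade a model against eval cases where some expected outcomes are
// known and some are deliberately left as a gray area for human review.
type Verdict = "pass" | "fail" | "gray";

interface EvalCase {
  input: string;
  expected: string | null; // null marks a gray-area case
}

function grade(cases: EvalCase[], model: (input: string) => string) {
  const verdicts: Verdict[] = cases.map((c) =>
    c.expected === null ? "gray" : model(c.input) === c.expected ? "pass" : "fail",
  );
  const graded = verdicts.filter((v) => v !== "gray").length;
  const passed = verdicts.filter((v) => v === "pass").length;
  return {
    accuracy: graded > 0 ? passed / graded : 0, // only over non-gray cases
    grayCount: verdicts.length - graded,        // route these to a human
  };
}

// Toy "model" that uppercases its input, just to exercise the harness.
const toy = (s: string) => s.toUpperCase();
console.log(grade(
  [
    { input: "ok", expected: "OK" },   // obviously true
    { input: "no", expected: "YES" },  // obviously false
    { input: "hmm", expected: null },  // gray area
  ],
  toy,
)); // { accuracy: 0.5, grayCount: 1 }
```

Running the same case set against several models gives the kind of spreadsheet comparison mentioned in the answer above, with gray-area cases flagged for a human instead of being force-scored.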
So, for example, we send a summarized email, or there's a status page in our platform where you see a summary of the changes that happened. These are the things we're doing. Luckily, we of course also have monitoring on our AWS clusters and so on. So that's how we do it now. But before evals we were swimming a bit, I would say: how are we going to do this? Now, with evals and the latest updates, we're quite happy with how we can test at this scale. — Hello. You mentioned that you have two separate instances running, one for normal production and one for AI. How do you differentiate which workflow belongs where? Is it simply that one has an AI node and therefore it goes there, or do you look at token usage, throughput, how many megabytes are being processed per instance? — That's a fair question, because in the end maybe all flows will have AI components. For now, everything that we call Lumen, the AI capabilities in our platform with the purple-bluish icon, is in that instance, and all the other things are in the rest. And to be honest, the rest is not using AI at the moment. We do a lot of matching without it; I think our team really loves the dataset-comparison nodes and these things. We try to use AI only when it is our last resort, because we know that quite often there are other operations that work. So for now it is indeed exactly as you say: we don't use any LLM nodes in the other instance. But to be fair, with your backups in Git and so on, and with a dedicated DevOps team, we can easily, let's say, flip it around. Eventually, I think it's more about defining how you want to collaborate, and that's now also a job for our POs: to define the workflows and so on.
And that also ties a bit into the tech-debt question: it's going so fast that we're sometimes thinking, okay, is this the time to change it, or should we just keep it going until the end of the year, when there's a next version to do it with? — I think, unfortunately, we're at time. We've got a short break, and then we've got the round table, the panel session, taking place in the main area. So I'd like to say a massive thank you, and present you with a small TV, which we will shortly get your talk loaded up on. Thank you, everyone.
