‘Scaling Automation with Infra and AI Agents’ - from the Amsterdam Meetup (April 2025)
Duration: 21:03


n8n · 24.04.2025


Video description
How do you use n8n and AI agents at scale in an enterprise environment? Chainels.com CEO Erwin Buckers shares how they power their large-scale property management system with an n8n backend, and what their AI architecture looks like. Keep an eye on our community calendar for upcoming events around the world: https://lu.ma/n8n-events Interested in hosting a community event in your area? Join our Ambassador program: https://n8n.io/ambassadors #n8n #community #ai #agents #lowcode #nocode #amsterdam

Contents (5 segments)

Segment 1 (00:00 - 05:00)

Nice to be here, guys. My name is Erwin Buckers, I'm CEO and co-founder of Chainels, and I will tell you a little bit about how we have been using n8n over the last six months and how we serve our enterprise clients with it. You had a nice introduction from Max about how you can be a freelancer in the creative space, but you can also serve corporates in a somewhat bigger setting. I founded Chainels 12 years ago. I have a background in computer science; I can't code that much anymore in my current role, but I do play a lot with n8n, building flows with my colleagues. Before we jump into infra and AI agents, I will show you thirty seconds of what we are doing, because we provide a tenant app. We work for real estate, and we provide an app for the people living and working in a building to give them a nice experience. This is a bit of what our tool does; you'll see a quick video. We have an iOS and Android app and a web platform that people living in buildings in Amsterdam, London, and Paris install to connect with everything in the building: smart locks, neighbors, community events, and so on. We do that for really big buildings across Europe; we are active in 20 countries with a team of 50 people in Rotterdam. We are growing quite fast, but those buildings also have a lot of smart technology that we need to connect. So we built a lot of manual integrations over the years, and we switched everything over to n8n Enterprise. I'm here with a few colleagues today. We formed a squad at the beginning of the year, because I didn't want to hire people for this immediately and wanted to see how far we could get. So we selected a few people from our tech team; I think five of them are here today.
Alonzo, our most senior solution engineer; Marco, a full-stack developer who joined the team to build some really cool stuff that I can show you today; and Andre, our lead designer, who is not here today, also with a computer science background. Caslay is in the audience; he did the full infra, so if you have really technical questions I will point to him. And S. and myself are building some flows as well.

Two topics today. First, infrastructure: how we set up n8n. I tried to scale the slide up a little, because I didn't know the screen would be this small and the audience this big. What you see here is our infrastructure running on Amazon AWS. We have a public subnet and a private subnet. Within that we run Elastic Kubernetes Service (EKS), which is amazing, by the way. Then we have a NAT gateway, so we have a single external IP, which also makes it extremely easy to move workflows between environments. Everything is self-hosted, which is extremely flexible for us, because we work with some big buildings that have on-premise services we can now connect via the VPN. So we run some really serious legacy software and make it look cool again thanks to this. I could talk about it for a long time, but zooming in is maybe even more interesting. By the way, it is super cool that it also heals itself. Sometimes something crashes; most of the time it is actually the tech on our clients' end rather than ours. But everything heals again, because we have redundancy built in, so it is highly available. I think JP was saying that we need queueing, so that would be great on the roadmap. If I zoom in on this part, the Kubernetes services, you see how we set it up.
We have a production environment and a staging environment. It's a little bit small, so I zoomed it in for you on the next slide. We have a main and a staging Git repository behind it, we have workers, and we can scale up really easily. Internally, in our office in Rotterdam, we have some Community versions running, just for fun and for experimenting with AI, because soon we will bring all our cool AI features to our clients as well. Because we have so many enterprise clients, who are a little bit nervous about AI, we need to make sure it is extremely safe; the cool thing is that we can of course use all the wrappers and the European model versions too. So that is a really quick overview of our infrastructure. There are some numbers in there, maybe a bit small, but when we are going
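The setup described above (a main instance plus scalable workers on EKS) can be sketched as a Kubernetes Deployment. This is an illustrative sketch, not Chainels' actual manifests: the names, namespace, and resource values are assumptions; only the image name and the queue-mode environment variables come from n8n's own documentation.

```yaml
# Illustrative sketch only; names and values are assumptions.
# n8n queue mode: one main instance plus N workers pulling jobs from Redis.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker          # hypothetical name
  namespace: production     # mirrors the production/staging split
spec:
  replicas: 2               # scale horizontally by raising this number
  selector:
    matchLabels:
      app: n8n-worker
  template:
    metadata:
      labels:
        app: n8n-worker
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n            # official image
          args: ["worker"]            # run as a queue worker
          env:
            - name: EXECUTIONS_MODE
              value: "queue"
            - name: QUEUE_BULL_REDIS_HOST
              value: "redis"          # assumed Redis service name
          resources:                  # vertical scaling: raise these limits
            requests: { cpu: "250m", memory: "512Mi" }
            limits:   { cpu: "1",    memory: "1Gi" }
```

With everything in Git like this, pointing the same manifests at a staging namespace or another cluster is a matter of changing a few fields, which matches the "easy to switch environments" point above.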

Segment 2 (05:00 - 10:00)

to run our AI for our clients, we will need to scale up a lot, because we are serving 700 buildings and half a million users across 20 countries. Right now only a few of them are using AI, but when they start using it for real we will scale everything up, and with this setup that is honestly really easy. I asked Caslay on the train how long it takes to set a new instance up, and he said he can do it in 50 minutes, with the rest going automatically.

Are there any questions about infrastructure? Max?

"A quick question: for the production use cases you want to use AI for, what kind of model size are we talking about? SLMs, LLMs? What models are you going to be using?"

I can show you that in a bit; I think I have it in the next slide. That was not prepared. Go ahead.

"How elastic is it across cloud suppliers? You are talking about AWS now, but in Ireland most companies are more Microsoft-minded, and there is still a very small Google-driven share."

I get your question, so let me repeat it: Microsoft versus Amazon versus Google. Indeed, we also see with other companies that Google is not that popular; they try to gain some footprint with nice discounts. But we have been on Amazon AWS for a long time; we actually moved everything there two years ago. Of course, there are some things going on in the world that are not great for that, but otherwise we see that all the corporates are okay with it. It's Microsoft or Amazon, and they have all their certificates; we are certified, so we don't really see an issue there. The only funny thing is that when we talk to the IT side of our real-estate clients, they think that because they run Microsoft, everything connects more easily when it is all Microsoft products. Does that answer your question?
"Partly, but how elastic is it? One of my biggest clients is a big construction company here in Holland that also works with the government, so for this year they have chosen to stay Microsoft-minded, but next year it can be somebody else, depending on the politics. How elastic is that Kubernetes infrastructure? Can it be hosted on another cloud supplier?"

Caslay answers: The question is more or less this. You have configuration files that define how it all works, and they are all in Git. If you want to run Kubernetes on another cloud, it doesn't really matter which cloud you run on; AWS, Azure, or Google, it's all the same thing. You can do the same things on all clouds, but there are different services to support it. For example, AKS and EKS, the managed Kubernetes offerings, are not technically 100% the same, but they are one way to do it. You have configuration files and you can just apply them to your Google Cloud or your Azure; it doesn't matter, because it's basically configuration of how your pods are set up. If you want more pods or more workers, you just change zero to one to two or whatever, and you scale up horizontally. To scale vertically, you say: now we give more resources to the pod. That is roughly how you scale, and you can move it quite easily. The only thing that is hard to move is data, but data generally does not run inside the Kubernetes cluster; it runs outside, and it is easily scalable for that reason. We also use managed node provisioning, so we don't manage our resource pools or individual machines; we just say how many we want, and it scales up automatically.

"Thank you very much for your answer. So that means that..." Maybe we should take that offline. Yeah, let's do it afterwards; that's also why I have Caslay here.
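The "change zero to one to two" scaling described in the answer above comes down to small manifest edits. The fragment below is an illustrative sketch of the two knobs mentioned (horizontal: replica count; vertical: pod resources); the names and values are assumptions, not real manifests.

```yaml
# Fragment of a hypothetical worker Deployment, showing the two scaling knobs.
spec:
  replicas: 4                 # horizontal: was 2, now four worker pods
  template:
    spec:
      containers:
        - name: n8n
          resources:
            limits:
              cpu: "2"        # vertical: raise the per-pod CPU ceiling
              memory: "2Gi"   # vertical: raise the per-pod memory ceiling
```

Because this is plain configuration applied to whichever cluster you point at, the same edit works on EKS, AKS, or GKE, which is the portability argument being made here.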
As a summary: we can easily switch clouds. We are, by the way, working for almost all the big cities in the Netherlands, and the government signs off on our contracts; they are also completely fine with what we are doing. All

Segment 3 (10:00 - 15:00)

right, to quickly switch to the last topic: AI agents. I already knew that some things were being announced today in the n8n release, so I am not going to explain how you build agents; I think many of you are maybe even further along with that than we are. I just want to show a little of what we are trying to do with our infrastructure and what we are building on top of our platform.

First of all: AI is super cool, and we love it too, but there are a lot of things you do not need AI for. That is one thing we see a lot in the community: use AI for everything. Well, try to run that at scale; it has a cost. We deal with a lot of legacy systems. Here you see a simple flow where we map data from SFTP servers and so on. You can do it with AI, but please don't; it is expensive, and we run this flow many times. So first try to fix it in a different way; if you do need AI, you can do really cool things with it. That is why in Q2 we will launch Lumin AI, the AI layer in our platform that our clients can upgrade to. We connected all our features and modules to Lumin AI, which is all n8n; we were actually waiting for today's update to connect everything in an elegant way. It will be everywhere: people writing content, finding things, asking questions of the knowledge base, and so on.

And I have a few examples to close off today. What we are actually trying to build is the self-driving car for real estate, and that is also an approach to how we do it: we selected a few people in our company to build that self-driving car. But you may have driven in a car, and you know they are not self-driving, in the Netherlands at least, because we need to take some steps first.
That is also how we are going to do it, and I recommend it to everyone building something like this. In the beginning we had to do everything ourselves; now we have driver assistance and all the automations on the way to level five. That is also how we define our product: how can we get to the self-driving car, maybe within 12 to 18 months from now. This is not our model, by the way; I always put the source at the bottom, and it comes from another company in the Netherlands.

A little bit about our platform. On the left we have our platform, web and app; we have a REST API with OAuth 2.0; and then we have n8n. When we started building in n8n, we thought: everyone here loves the HTTP node and uses it for everything. Well, if you use it across many workflows and sub-workflows, your flows become super hard to read. Of course you can use the sticky notes, which are amazing, but I think you would prefer flows that read like really clear diagrams you do not need to document too much. We already had an OpenAPI spec, so Marco, one of our developers, translated it so we could build our own custom nodes. We have 145 actions in our system at the moment, and we gave them our own logo, so we immediately see when a flow interacts with our system. This is also thanks to one of the developers in the community, "devlikeapro" (credited at the bottom), who published a tool we used; we are in contact with him to maybe make it even better, and we are trying to help improve it as well. So if you have an OpenAPI specification, look at this: import it and you will have your own nodes. It works really well. So, to give you an example, just a simple one.
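Before the live example, the custom-node idea above (one branded node per API action, generated from an OpenAPI spec) can be sketched in code. This is a simplified stand-in: the real interfaces live in n8n's `n8n-workflow` package and carry many more fields, and the "Chainels" operation names here are invented for illustration.

```typescript
// Simplified stand-ins for n8n's node-description interfaces.
// (The real `INodeType` from the `n8n-workflow` package is much richer.)
interface NodeOption {
  name: string;
  value: string;
}

interface NodeProperty {
  displayName: string;
  name: string;
  type: string;
  options?: NodeOption[];
  default?: string;
}

interface NodeDescription {
  displayName: string;
  name: string;
  icon: string;          // custom logo, so the node stands out in flows
  group: string[];
  description: string;
  properties: NodeProperty[];
}

// A hypothetical "Chainels" node exposing OpenAPI-derived actions.
// In practice, each path+verb in the spec becomes one selectable operation.
const chainelsNode: NodeDescription = {
  displayName: 'Chainels',
  name: 'chainels',
  icon: 'file:chainels.svg',
  group: ['transform'],
  description: 'Call the Chainels REST API (OAuth 2.0)',
  properties: [
    {
      displayName: 'Operation',
      name: 'operation',
      type: 'options',
      options: [
        // three of the ~145 actions, named purely for illustration
        { name: 'Get Community Post', value: 'getPost' },
        { name: 'Delete Community Post', value: 'deletePost' },
        { name: 'Send Message', value: 'sendMessage' },
      ],
      default: 'getPost',
    },
  ],
};

console.log(chainelsNode.properties[0].options?.length); // prints 3
```

The payoff is the one described in the talk: a flow reads as "Chainels: Get Community Post" instead of an anonymous HTTP node, so the diagram documents itself.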
What I am going to show you is this: a lot of people post messages in the community, but owners are really afraid that people will share things you do not want to see in your building, that they will hate on each other, curse, and so on. We have house rules in our community, and we can easily automate enforcing them. So we built a simple flow; you can already see our own nodes in it. There is my iPhone; I recorded this on the train, by the way. I write something like "I hate you all guys" in the community and send it to a thousand people living in my building. Nice. Then our watchdog in the community immediately catches the post. It does not delete it immediately, because we do not think that is the right behavior; it just teaches the person: hey, don't do that again. You see Ava, our bot; she replies immediately: hey, this is not okay. Three strikes and you're out, and then we ban you from the system. This is a super simple, simplified flow, but you can see that it is very readable. I think this material will be shared soon as well.

Last example, also thanks to Cole Medin, I

Segment 4 (15:00 - 20:00)

think from the community on YouTube. He explained really well how agentic RAG works. When we started, we downloaded his template, played around with it, and then thought: okay, what can we do to move this to our system? We use Stream Chat, so our developers built their own nodes for Stream Chat, because they were not available yet; soon we can publish them for the community as well. When people are chatting in our community, they can also ask things of the AI, and it goes via the vector database (we also have a local one now, but this is a simplified version) to give answers from the knowledge base. Again, this is not about how you design RAG; there are so many good videos about that. It is more about how you make the flow readable and understandable for everyone, and of course you can easily adapt everything to your needs. So this is another example of what we are launching. Yes, Cole Medin is the builder of this; well, not this exact one, but we were heavily inspired by him, and for this presentation we actually used his example, at least this part of it; the other parts are our own services that we connected. Does he have an affiliation with the Lumin part? This one is not even live; it is just a concept, but we are in contact, yeah.

All right, maybe to close off: Bart also asked me for some tips, so after a few months we have five of them, not only from me but also from my team members. First of all, as I already said: AI is powerful, but often overkill. Alonzo, one of our solution engineers, adds that things can also be way faster if you do not use it, so keep that in mind when you build.
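The RAG flow described earlier (chat question, vector-database lookup, answer from the knowledge base) reduces at its core to the retrieval step, sketched below with toy vectors and cosine similarity. A real deployment would use an embedding model and a proper vector store; the three-dimensional "embeddings" and the knowledge-base entries here are invented purely for illustration.

```typescript
// Toy retrieval step of a RAG pipeline: find the knowledge-base entry
// whose embedding is closest (by cosine similarity) to the query embedding.
type Doc = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(queryEmbedding: number[], docs: Doc[]): Doc {
  // Keep the document with the highest similarity to the query.
  return docs.reduce((best, d) =>
    cosine(queryEmbedding, d.embedding) > cosine(queryEmbedding, best.embedding)
      ? d
      : best
  );
}

// Hand-made 3-dimensional "embeddings", purely for illustration.
const knowledgeBase: Doc[] = [
  { text: 'Bike storage is in the basement, level -1.', embedding: [1, 0, 0] },
  { text: 'Quiet hours are from 22:00 to 07:00.', embedding: [0, 1, 0] },
  { text: 'Guests can book the rooftop via the app.', embedding: [0, 0, 1] },
];

const answerSource = retrieve([0.9, 0.1, 0.0], knowledgeBase);
console.log(answerSource.text); // prints the bike-storage entry
```

In the actual flow this retrieval sits between the Stream Chat trigger and the LLM node that phrases the final answer; everything else is plumbing you can swap for your own services.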
Second, from Caslay, who already answered a really technical question: you can add libraries and community nodes by building your own Docker image with the n8n base image; that helps us a lot. Third, because we have a lot of developers who are used to working outside low-code platforms, we try to bring the good habits from there into this process, especially when you are building something big: we review workflows, we commit often, we clean up afterwards, and we try to use good naming and good documentation, especially because we are also hiring more people to work on this big project. Fourth, something I faced myself: when you are stuck, update first, because that helps a lot, and then ask the community, because it saves you a lot of time. And last but not least: if you have a really good idea, somebody else probably had an idea close to it, and that can really help you with the cold-start problem of building a workflow.

I think we are running a bit out of time, right? So I don't know if there are one or two questions. Caslay, do you want to answer that again? Maybe to repeat it a bit: you asked about performance and how we measure it. We already have monitoring systems in place for our platform, and we are now extending them to the n8n clusters as well. Like many of you, we use Slack and similar tools; we have really nice monitoring there, and we also see it in our office in Rotterdam when things are off. The good thing is that a lot of our legacy workflows against old systems run overnight, so we get reports early in the morning and see what happened; we do not have to watch it in real time. So, one last question.
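The second tip above (add libraries and community nodes by building your own image on the n8n base) can be sketched as a Dockerfile. This is an assumption-laden sketch: the package name is a placeholder, not one Chainels actually uses, and the custom-extensions path may differ per n8n version, so check the n8n docs for yours.

```dockerfile
# Sketch: extend the official n8n image with an extra community node.
# The package name below is a placeholder.
FROM n8nio/n8n:latest

USER root
# Install the community node into n8n's custom-extensions directory
# (path is an assumption; verify against the n8n docs for your version).
RUN mkdir -p /home/node/.n8n/custom \
 && cd /home/node/.n8n/custom \
 && npm install n8n-nodes-example-package \
 && chown -R node:node /home/node/.n8n

USER node
```

Baking dependencies into the image this way keeps every pod in the cluster identical, which matters once workers are being created and destroyed automatically.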
"Something totally different, not really a technical question: how does this work with the licensing of n8n, with all the different instances and cluster installations?"

Well, I was in that negotiation a while ago. We knew we had to close a deal soon, because we already saw this coming, so I am not going to tell you about our deal. But the thing is, we pay for Enterprise and we pay for workflows, and we got some advice from n8n, because we came from the Community model, where we had made a separate workflow, a trigger, and

Segment 5 (20:00 - 21:00)

an event for everything; now we have structured it a little better, also for what we call FinOps instead of DevOps in our company, to optimize that a bit. But whenever necessary, we buy new enterprise workflows.

"I have a question, and I am a sucker for numbers. At your size, how many workflow executions do you run a day?"

I asked that too, and the answer was: I am not 100% sure how we can see that at the moment. So maybe we can give that answer later in the community, because we are building that. Or you can make a quick calculation and we can talk about it over pizza.

All right. Thanks, man. This was really interesting. Thanks for coming over.
