Want to make money and save time with AI? Join here: https://www.skool.com/ai-profit-lab-7462/about
Get a FREE AI Course + Community + 1,000 AI Agents + video notes + links to the tools 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553/about
In this livestream, I cover the new OpenClaw update and show you how to set up Agent Zero completely free using Ollama on your local machine. No API costs, no subscriptions — just powerful AI agents running locally and doing the work for you. If you want to run both OpenClaw and Agent Zero without paying a cent, this is the stream for you.
Table of Contents (7 segments)
Segment 1 (00:00 - 05:00)
OpenClaw just dropped a massive new update and you need to see what it can do now. They just added Mistral AI, one of the most powerful AI models on the planet, and you can now use it for chat, memory, and voice. But that's not even the biggest part. OpenClaw can now remember things in five different languages: Spanish, Portuguese, Japanese, Korean, and Arabic. So if English isn't your first language, OpenClaw just became a whole lot more useful to you. They also fixed the browser extension, because let's be honest, it kept dropping the connection and it was driving everyone a bit crazy. That's done. Now there's a brand new auto-updater, so you never have to manually hunt down the latest version again. Cron jobs (basically scheduled tasks your AI runs automatically) can now run at the same time instead of waiting in line. And they pushed over 40 security fixes, making this one of the biggest OpenClaw updates they've ever shipped. This is a big one, and today we're going through each of the updates so you know exactly what's changed, what it means for you, and how to actually use it. Let's get into it. So you can see the update announced here; this was just announced a few hours ago, as you can see. They've announced Mistral with chat, memory, and voice, multilingual memory, a built-in auto-updater, parallel cron runs (so you can run scheduled tasks at the same time), more security fixes, and a browser extension that actually stays connected. Now, if you click on this, you can actually see the changelog. You can see it's a very chunky one with a lot of different updates going on here, which is awesome. They seem to be updating it all the time, which is great too. Dwayne is here. Welcome, Dwayne, sir. And you can see the updates as we go along here. Right, so if you want to update this, just make sure, number one, you've got this installed.
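Since cron jobs come up a lot in this update, here's the standard five-field cron schedule notation for reference. This is generic cron syntax, not OpenClaw-specific configuration; the "8 a.m. news" job is just the example used later in the stream.

```shell
# A cron schedule has five fields: minute hour day-of-month month day-of-week.
# "Every morning at 8 a.m." from the stream's example looks like this:
SCHEDULE="0 8 * * *"   # minute 0, hour 8, any day, any month, any weekday

# Split the expression into its five fields (set -f stops '*' from globbing):
set -f
set -- $SCHEDULE
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
# prints: minute=0 hour=8 day-of-month=* month=* day-of-week=*
```

The same notation covers things like "every 15 minutes" (`*/15 * * * *`) or "Mondays at 9" (`0 9 * * 1`).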
We've got loads of training inside the AI Profit Boardroom if you want to learn how to do that. And then you can see here inside OpenClaw we've got this set up. The way you enable auto-updates is you go to your config, and you can see there's this auto-update section here. You can switch between auto-update channels, for example stable, beta, and dev, and you can select "update check on start" as well. Once you've done that, you can just click update and hit save, like so. So when you start this up, it'll go ahead, and you see how we've now got this button at the top that says "update now". We can click on that and just update to the newest version. Sometimes it breaks, who knows what's going to happen, but we'll try it out. I test this stuff for you, you know. So that's how we go here. Who have we got? Sahan says he's a new subscriber. Welcome here, sir, thank you very much for subscribing. Jackie says, "Should I buy a Mac Mini, or is it best to use a VPS for using the browser and making automations?" Right, so you don't need a Mac Mini. I actually wouldn't recommend a VPS either, simply because people can get access to the gateway and then access your OpenClaw, which is not very good. If you're insistent on using a VPS, then I would recommend using something like Moltworker, which is a bit more secure; it's actually run by Cloudflare, so that seems to work a bit better. If you just set this up inside your terminal, you don't need a super powerful computer to run OpenClaw. Set it up inside your terminal, connect it to Telegram, and then it's just running in the cloud as an API. That might be the best option for you. You don't need to get a Mac Mini. In fact, I think there's actually a shortage of Mac Minis and Mac Studios right now, so you don't want to be waiting to set this up.
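To recap those auto-update settings (channel, auto-update toggle, check on start), here is a rough sketch of what that section of a config could look like as JSON. The file path and key names are my assumptions for illustration only, not OpenClaw's documented schema; use the config editor shown in the stream for the real field names.

```shell
# Hypothetical sketch only: path and key names are assumptions, not
# OpenClaw's real schema. It just mirrors the three settings from the UI.
CONFIG=/tmp/openclaw-config-sketch.json

cat > "$CONFIG" <<'EOF'
{
  "update": {
    "channel": "stable",
    "auto": true,
    "checkOnStart": true
  }
}
EOF

grep '"channel"' "$CONFIG"   # shows which release channel is selected
```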
So yeah, we're going to keep going straight into this, and you can see it's now updated to the new version. I'm just going to ask it now, like, what version of OpenClaw are you using? Let's see if that works. I'm using ChatGPT Codex OAuth to set up OpenClaw. I have used it with Ollama as well, though I will say, when I was trying to update on that, it didn't work at all, so I had to switch back to ChatGPT Codex OAuth to get this working. And I think this is fully updated here; you can see it says OpenClaw version 2026, the 22nd of February build, and we've got that version running right here. So we are living the dream. That was a nice smooth update; I like that a lot. Off cam says, "Can I use it on my old laptop?" You can indeed. Again, it's literally just a few terminal commands and then it's running in the cloud once you set that up with a cloud API. Karen says, "Has Codex become available on Windows now?" Not as far as I'm aware. No, I don't think it's on Windows at all, but I do think Claude Code is available on Windows, so maybe you want to switch to that, or you could use a VPS, like a Mac virtual private server, to access it as well. That could be an option if you want to get it installed. But now we have OpenClaw ready to go with the
Segment 2 (05:00 - 10:00)
new version. If you want to run Mistral on this, there are a couple of options. You could use OpenRouter, or what you can actually do is just run through the onboarding again. Once you've got this installed, you can type this in the terminal and switch your API anytime: you run "openclaw onboard" and then install the daemon, and basically once you've done that, you can run through the API setup for Mistral. Mistral will allow you to use voice, chat, and embeddings as well, like you can see right here. Let's see what else is meaningful inside the changelog. We've got a CLI update, so you can use this to preview the new updates; as you can see, you've got some new updates inside config and that sort of thing. There's a Discord allow list, which is pretty useful if you're using Discord for this. But yeah, there are all sorts of updates here; it's literally a never-ending list. I actually created a breakdown inside the app, as you can see right here. So if we scroll down, let's have a look at the summary and we'll just summarize what changed here. First of all, if you're wondering what OpenClaw is: OpenClaw is a tool that lets you connect AI assistants, like Claude for example, to apps you already use, like Slack, Telegram, Discord, and more. It's kind of like a bridge that lets you talk to your favorite apps and have them do things automatically for you. What just got updated? Well, we got the 22nd of February update, and here's what changed in plain English. New stuff was added: a brand new AI provider called Mistral is now supported. That means you can use Mistral's AI brain instead of, or alongside, other APIs, and it also supports memory and voice. Synology Chat is now supported too, so there's a new app you can connect your AI to; this is a messaging app some businesses use on private servers.
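That re-onboarding flow boils down to two commands. The command names are as said in the stream; I haven't verified any flags or subcommand spellings, so treat this as a sketch and check the CLI's own help output.

```shell
# Re-run OpenClaw's setup wizard to switch providers (e.g. pick Mistral),
# then (re)install the background daemon. Names as heard in the stream;
# run `openclaw --help` to confirm the exact subcommands on your build.
openclaw onboard
openclaw daemon install
```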
You can also tell the tool to auto-update itself now without you having to do anything. It's turned off by default so it doesn't surprise you. There's a new command, the update one that I mentioned, that lets you preview what an update would do before it actually runs, like a test run. So if you go back to the list of changes over here, you can see that you can preview the updates before you actually change anything, which is pretty useful to be fair; sometimes it breaks on the new updates or changes something. The AI now better remembers conversations in Spanish, Portuguese, Japanese, Korean, and Arabic; before, it sometimes forgot some important words and that sort of thing. And then there's also a big overhaul inside cron jobs, which are the scheduled jobs. Cron is like a timer that tells your AI to do things automatically at certain times, like "every morning at 8 a.m., send me the latest AI automation news." A bunch of bugs were fixed as well, for example bugs with Slack, Telegram, the web dashboard, and background tasks. I will say that was one of the smoothest updates I've done; sometimes it breaks. So that was pretty useful, I think, to be fair. And then if we have a look at cron jobs, what's changed here? You can add a cron job over here, which is a scheduled task, manually if you want to in this section, and then you've got the scheduler over here, so you can check what jobs are scheduled, what's happening, when it's going to run, etc. You can see, for example, I've created a cron job for the daily AI news digest, and you can also see your run history inside the cron jobs. I can't see a huge update to that, but maybe this has changed; honestly, it's been so long since I checked it, I'm not sure. Karen says, "Tell me your complete AI stack."
" So, we actually got a list of tools that I use and recommend inside the profitable boardroom. If you want to get the 8020 of that, Claude, like Claude is what I use for most of my day-to-day work tasks. And then, what else have we got here? Um Dwayne says, "I get confused when you say Oorth. If so, how would you integrate this? Um is it something I can find in a boardroom? " Yeah. So, if you go inside the boardroom here, um if you want to use oorth in the setup and see how I do it, then you can check out the full sixhour course that we've got right here. Um but basically, OAF means that you can log in with one click, right? And you probably already know that, but if you don't, um yeah, it means you can log in with one click using Chat GPT, for example. And so that allows you to just select whichever type of chat GPT you want. Also means that if you're on like the pro or the plus pan, you already get um a lot of tokens you can use with open claw without having to use the API which means that you save costs, right? Which is great. So you can get all the training and the full setups right here. We have tons and tons of training on openclaw. You can see for example um we got another three-hour course here inside the arring link in the comments description or go to arring. com. And then you can see for example we have updates on like ironclaw, we've got open claw versus ironclaw, we've got um agent zero which is an alternative to openclaw as well. Um every time there's a new open claw update, I usually cover it and just share it. So yeah, if you like training on openclaw, you're going to like that directly. All right, so that is basically it. I've run through all
Segment 3 (10:00 - 15:00)
the updates, how it works, etc. We do have a 30-day plan for mastering this as well, as you can see right here. This is inside my AI automation community, which has 2,500 members; there are 73 people online right now. There are always people online, which is awesome because it means you can ask questions, get help, and get support in real time. We have a daily accountability group, so if you find this stuff overwhelming, or you get information overload when it comes to AI automation updates, you can set your goals like Kevin has here. Then you say, "Okay, these are my goals for the day. This is what I'm going to focus on." Everyone holds you accountable, and that way you can share your goals and what you're working on, but also stay 100% focused and accountable to achieving your goals. There are lots of cool questions too; you can post questions inside the community, ask for help, etc. Inside the calendar, you actually get video calls four times a week, so you can jump on live video calls and get help and support whenever you want to. Inside the classroom, you can check out our five-week AI automation masterclass, which takes you from beginner to expert in just five weeks. Plus, you learn how to build your first AI agent in under five minutes. You can get our best playbooks for Twitter, newsletters, shorts, Instagram AI avatar videos, and how we automate X as well. And if you want the latest updates and all the new trainings, you can see that we add new daily guides right here; for example, today alone we've added four new guides, which is awesome. You'll see a bunch of guides down there too. You can also learn how to get more clients with the agency course, watch the coaching course, and, for example, learn how to rank number one with AI SEO.
and also how to grow our YouTube channel based on what's working for us. That's all inside the AI Profit Boardroom, link in the comments and description. The other cool thing about this is that you can post and share your wins. You can see, for example, Eric landed his first new client. Dan had only been a member for about two weeks and already got his first customer. And you can see here, for example, Paul has already got 100 stars on his GitHub project. So there's all sorts of cool stuff, cool projects, and cool people working on amazing things inside there too. Dwayne says, "I'm about to post my daily accountability; I haven't done it yet." Get yourself on there, mate. That's great that you're taking part, so well done to you. All right, so today we are going to be looking at Agent Zero. If you've never checked this out, it's a powerful AI agent that can do things similar to OpenClaw. When I've tested them side by side, honestly, Agent Zero has won both times, which is pretty crazy. Today I want to test setting up Agent Zero but running it for free using a local model. So I'm just going to copy the commands over here, and then we're going to run a new command right here. I'm going to make sure that I have a local model running to power this. The way that I'm going to use Agent Zero for free, which again is a powerful way to use AI agents to build whatever you want, is that we're going to set up Ollama, which I've got running in the background over here. You can download it for free. Then I'm going to use GLM 4.7 Flash as the model to build with. Now, this is the strongest model in the 30-billion-parameter class.
It's pretty lightweight to deploy, but it runs nicely, and the Flash variant is designed to be lightweight. You can also run this for free inside Ollama. We're going to be using it
Segment 4 (15:00 - 20:00)
inside Agent Zero, but you can see here as well, you can run it for free with OpenClaw too. So it's a pretty powerful model. I'm running it on my Mac Studio; that's how I can run it. There are loads of local models you can use inside Ollama, so this is pretty fun to check out. And we're going to get started with this. The first thing you need to do, if you haven't already, is make sure you have this installed, which I already have. If you haven't got it installed, download Ollama, and then you can run this command inside your terminal to install the model. Once you've done that, open up your terminal, paste in that command, and that will get GLM 4.7 Flash running in the background. Right now, you can see that's running here, so that's working nicely, which is great. Then what we're going to do is get Agent Zero running so we can start using it with Ollama and basically get a free AI agent running locally. And it's a pretty powerful one from what I've tested so far. So let's get straight into this. We're going to find the quick start on the page. Here we go. We copy that command, go over to the terminal, paste it in, and hit allow. You can see here we've got GLM 4.7 Flash running with Ollama on the left-hand side, and we've got Agent Zero running inside Docker; it's just pulling in the information right there. So if we go to Docker Desktop, Agent Zero will begin to run in a second inside the containers section. There we go. We can open this up, and if we go here, you can see that we now have Agent Zero running. Beautiful. Now what we need to do is go to our settings, and inside the settings we need to configure Ollama to run with this directly. I thought I'd give myself a little challenge today. We just need to set up the model for running this as well, right?
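Assuming the names below (the Ollama model tag is as I heard it in the stream, and the Docker image and port are the ones Agent Zero's quick-start documentation uses; double-check both against the official docs), the whole local stack boils down to:

```shell
# 1) Get the local model running (one-time pull, then serve it).
#    "glm-4.7-flash" is an assumed tag; run `ollama list` to see yours.
ollama pull glm-4.7-flash
ollama run glm-4.7-flash      # Ollama's API now listens on localhost:11434

# 2) In a second terminal, start Agent Zero in Docker, then open
#    http://localhost:50001 in your browser (mapping per its quick-start).
docker pull frdel/agent-zero-run
docker run -p 50001:80 frdel/agent-zero-run
```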
So, we're going to select Ollama and then the chat model name. I'm going to select GLM 4.7 Flash. I hope it works with that name, but let's see. In fact, do you know what, I'm going to take that to Claude and we're going to make sure we get the right settings here. I'm just going to paste in the documentation for GLM 4.7 Flash, and it will give me the model details here. So we're just going to grab that. I think we need to add the chat model base URL; let me just double-check that. I think it'll be that one. I'm not 100% sure these are the right settings, but I'm going to test them anyway. Hit save on that, see if that's working. Boom. Yeah, we're running that for free with GLM 4.7 Flash. Wow, that was easy. That was incredibly easy. So, just to recap the settings there, these are the parameters I've set. Just make sure you have GLM 4.7 Flash running, or whatever model you want to run, and then these are the settings right here. I'm actually going to paste that into our documentation for Agent Zero, if you want to run it locally. So if we go to the AI Profit Boardroom here, you can see that we've got a full section on how to use Agent Zero, and I'm going to add a section here that says "Ollama API settings".
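On the base URL step: Ollama serves its API on port 11434 by default, and because Agent Zero is running inside Docker here, "localhost" inside the container points at the container itself, not at the Mac running Ollama. So the chat model base URL usually needs to be host.docker.internal rather than localhost. The model tag below is an assumption; match it to your own `ollama list` output.

```shell
# From the HOST machine, Ollama's API is plain localhost:
curl -s http://localhost:11434/api/tags   # lists the models you've pulled

# From INSIDE a Docker container (like Agent Zero here), use:
#   base URL:   http://host.docker.internal:11434
#   model name: glm-4.7-flash              (assumed tag; check `ollama list`)
# host.docker.internal resolves to the host on Docker Desktop (Mac/Windows).
```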
Segment 5 (20:00 - 25:00)
And I'm running it through Docker. All right, so you've got the full API settings over here. If we go back to Agent Zero now, that's pretty cool; that seems to be working. Obviously, I didn't set up the memory or anything like that, just the basic API settings. So if we scroll down here, I think we can set up the rest of the models, for example for utility and that sort of thing. Let's change those to Ollama too; we'll copy the same settings. There we go. The cool thing about this is it means you can run this for free, right? This is all free to run, which is pretty awesome. I don't think it will have a web browser model with the local model, but you could set up, for example, something like OpenRouter with Claude Sonnet, something like that. And then you've got the memory settings over here as well, if you want to set that up as an extra extension. So if we go to a new chat here, we're going to say, "Okay, build a Pomodoro timer in HTML, then launch it," and we'll see if that works. The thing is, it could be responding inside the chat, but I don't know if it's going to work properly locally when it comes to coding stuff out. So we'll just check that out and see if it works right here. If it does, that'll be great. Let's see what we've got in the questions. Clawfield says, "Good morning." Good morning to you too, sir; thanks for joining. So you can see it's beginning to write terminal commands to build out our project, which is awesome. And by the way, if you're watching this and you're like, do you know what, I don't want to use Agent Zero, but I do want to run local models for free, then you could always use these commands for running, for example, Claude Code or OpenClaw. So let me show you an example of how that looks. If we copy this command for OpenClaw, we can open up a new terminal window. Make sure that you've got GLM 4.7 Flash running, as always. Paste that in, like so, and hit yes.
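What makes this reuse possible is that Ollama also exposes an OpenAI-compatible endpoint at /v1 (that path is real Ollama behaviour), which many agent CLIs can talk to. Whether your particular tool reads these exact environment variables is an assumption on my part; check its docs before relying on it.

```shell
# Point an OpenAI-compatible agent CLI at the local Ollama server.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"        # any non-empty string; Ollama ignores it
export OPENAI_MODEL="glm-4.7-flash"   # assumed variable name and model tag

# Then launch the tool in the same shell so it inherits these, e.g.:
# openclaw
```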
And now you can see that's running via this gateway. So if we copy that, we've got this running inside Ollama, as you can see. And if you want to run this with Claude Code, you'd do the same thing: you just open up a new terminal window (and by the way, I'll just grab that; this is the command for Claude Code), say yes, and let's go. Now you can see GLM 4.7 Flash is running with Claude Code as well. That means you can use Claude Code for free with this local model. Now, if we go back to Agent Zero, this is beginning to code, which is great, and we'll test the outputs in a minute. GLM 4.7 Flash is pretty good so far. I mean, you can see it's even added audio inside the project. It says, "User wants a beautiful, fun Pomodoro timer in HTML. I'm going to create it with embedded CSS and JS for smooth animations; features to include web audio; make it responsive," blah blah. And it's beginning to build that out. Dwayne says, "For those that have just joined, what are we doing here?" We are using Agent Zero for free with Ollama to run local models. Why are we using Agent Zero? Because it's one of the most powerful AI agents I've ever used, and when I've compared it to OpenClaw, it actually does a better job and it's easier to set up. It's much smoother, and you can use it for free: Agent Zero is free, and the local model that you're running is free. So that's what we're doing today. Just to test it out, we've already tested the chat, which is great, but we're now testing it on a quick task, which is building out a Pomodoro timer. And Richard says, "Evening from Sydney. Can you set up the memory, the memorize feature, and so on?" I might do that in a
Segment 6 (25:00 - 30:00)
future episode, if there's enough demand for it; let's see how it goes. Let's see if we need to configure anything else inside these sections here. I mean, if you wanted to, what you could do is just add an API key for OpenRouter, and then use that for the web browser model or something like that. So you could just grab an API key for that, like this. Eddie says, "Hello." Good to see you here. Claw says, "Do you think it's wise to run two agents, one in the cloud and one local? If so, can you sync them?" I think it'd be good to have, for example, one cloud API and one local API, with web search enabled for different APIs. So you could have, for example, Claude Sonnet 4.6, and that's literally the only cloud API you use, just for web search, so it can connect to the internet, search across the web, etc. And then for everything else, all the agents, the sub-agents, everything else like that, you could use something like GLM 4.7 Flash.
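The split just described (one cloud model reserved for web browsing, the local model for the main agent and sub-agents) could be sketched as a settings file like this. The key names and the model ids are illustrative only, not Agent Zero's real settings schema; the point is the shape of the split, not the exact fields.

```shell
# Illustrative only: key names and model ids are assumptions, not Agent
# Zero's real schema. Local model for chat/utility, one cloud model
# (via OpenRouter) reserved for the browser / web-search role.
cat > /tmp/hybrid-model-split.json <<'EOF'
{
  "chat_model":    { "provider": "ollama",     "name": "glm-4.7-flash" },
  "utility_model": { "provider": "ollama",     "name": "glm-4.7-flash" },
  "browser_model": { "provider": "openrouter", "name": "claude-sonnet" }
}
EOF

grep -c '"ollama"' /tmp/hybrid-model-split.json   # prints: 2
```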
Segment 7 (30:00 - 34:00)
And then you can see it working over here. So this is the AI responding; it says something like, "the user is asking me for ideas about AI automation; here are some creative AI automation ideas," etc. So it is working on the right-hand side with Docker, which is pretty nice. I do think the memory thing needs to be improved, but you can see here in the chat it's working well. It's coming up with ideas on how to use this, creating multiple different chats for different tasks, etc., and we're running this for free, locally, with Ollama. Let's see what else we've got here. Can you use local models with OpenClaw? Yeah, you can. You can use the same model like I showed earlier, GLM 4.7 Flash; there's training on that inside the AI Profit Boardroom. Henry says, "Do you have a YouTube channel?" Yeah, I'm on Julian Goldie SEO. Claw says, "Cheers, because I'm running KimClaw synced with my GitHub files." And One Perk says, "Have you tried ZeroClaw?" Yeah, I did test out ZeroClaw; it was super lightweight and super fast. And Alejandra says, "Looks like you need 16 GB of memory to run GLM Flash. The good thing is these models will be improving, and maybe you'll need less soon." Yeah, if you want a super lightweight local model, you can use something like Gemma 3 4B, which is by Google. It's designed for mobile devices, so it runs really fast on anything at that level of power. So, thanks so much for watching. If you want the full guide on Agent Zero, you can get that inside the AI Profit Boardroom. We update these guides daily, as you can see right here; we actually just covered the new OpenClaw update in there as well. We update it with the new things that are coming out, all the latest AI automation news. And we also have the full guide on how to set up the local build with GLM 4.7 Flash on Ollama, paired with, for example, OpenClaw, Agent Zero, or even Claude Code. So feel free to get that, link in the comments and description. We also have a six-hour course on OpenClaw, and like you see, every day we add new guides inside here too. This is available via the link in the comments and description. We have 2,500 members inside, so you can ask questions, get help and support inside the community, and meet cool people. You can see that lots of people are winning with this as well. You can see here that Mvin said, "This is the best learning experience I've ever had in my life. I'm very thankful for Julian Goldie and his teachings. This is my second day, and I'm sharing my experience." So it's awesome to see people winning with this stuff, not just me. You can see how many people are liking it and how much they enjoy it, so it's pretty cool. We also have an accountability group inside, so you can post your goals and stay focused, that sort of thing. We have four weekly coaching calls; you can jump on the coaching calls live, ask for help, get support in real time, share your screen, and meet cool people. You can actually find people in your local area by zooming in on your location in your city, then just DM people near you and say, "Hey, do you want to meet up or jump on a Zoom call?" That sort of thing. And then we have all of these different trainings inside the classroom. For example, you can go from beginner to expert in just five weeks, plus learn how to build your first AI agent in under five minutes. Additionally, you get our best playbooks that I personally use, for example how to automate X, shorts, Instagram, and AI avatar videos.
If you want to learn, for example, how to get more clients, we've got an agency course. If you want to watch the coaching course, you can watch that there. You can learn how to rank number one inside Google and AI search engines with AI SEO automations here. And you can also learn how to grow a YouTube channel in just six weeks using my masterclass, based on what's working for me. So feel free to get all of that inside the AI Profit Boardroom.