# OpenClaw Browser AI Agent + GLM-5 Turbo

## Metadata

- **Channel:** Julian Goldie SEO
- **YouTube:** https://www.youtube.com/watch?v=VsWDJpswOdk
- **Date:** 16.03.2026
- **Duration:** 1:01:21
- **Views:** 1,385

## Description

Want to make money and save time with AI? Join here: https://www.skool.com/ai-profit-lab-7462/about

Video notes + links to the tools 👉 https://www.skool.com/ai-profit-lab-7462/about

Get a FREE AI Course + Community + 1,000 AI Agents 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553/about

In this livestream, I'll show you how to set up OpenClaw, a free, open-source AI agent that takes control of your browser and does real tasks like Googling, writing Google Docs, sending emails, and posting tweets, all on its own. I also cover GLM-5 Turbo, a brand-new AI model designed to handle long, multi-step agent tasks better than anything else out right now, and show what happens when you combine it with OpenClaw for a next-level automation setup. If you've ever wanted an AI that actually does things for you instead of just chatting, this is the video to watch.

## Contents

### [0:00](https://www.youtube.com/watch?v=VsWDJpswOdk) Segment 1 (00:00 - 05:00)

New OpenClaw browser-use AI agent: automate anything. OpenClaw controls your real browser. OpenClaw just shipped a brand-new update, and your AI can now control your Chrome browser. Not a fake one: your actual Chrome, with your actual logins inside. That means your AI can open your Gmail, check your Google Docs, fill in your Google Sheets, send a tweet, even send an email, all by itself, by clicking. No copying and pasting. You just give it a task in plain English, like you see right here, and watch it get done.

We're going to test all of this live together. I'm going to show you the full setup from scratch. You don't need any technical experience to use this. Then we're going to throw real tasks at it. If you've ever wanted your AI agents and your OpenClaw to browse the web for you, this is that moment.

You can see an example of how this works in the chat right here. We said, "What's the price of eggs?" and it googled the price of eggs today inside Chrome, then gave us the details right back there. And this works really well. We're actually using a new model called GLM-5 Turbo to do this, though it seems to work with any model.

The way this works is through the Chrome browser. You can see here "OpenClaw browser relay started debugging this browser" at the top, and basically we can use this to start operating our browser with the AI agent. So let me talk you through exactly how this works and how to set it up. We've got a Chrome browser just for OpenClaw, which is this one right here, and it's connected to our Chrome browser inside OpenClaw. If you want to set this up yourself, I'll show you how in a second. In the meantime, let's do another demo. If we say, "Okay, go to Gmail and draft an email about SEO to mejulian.com," you can see that it loads up Gmail and starts to compose the draft.

So it's loaded up Gmail here, it's opening a new message, and then it will begin to write the message itself. This is OpenClaw operating our Chrome browser for us and beginning to send emails. You can also see which tools it's using here. Another option is the Chrome extension: you can get OpenClaw to control your actual Chrome directly, or control Chrome via a Chrome extension for OpenClaw. Here are the details of that. If you want access to it, you can just go to the Chrome Web Store and then you can add

### [5:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=300s) Segment 2 (05:00 - 10:00)

this to your OpenClaw setup. It's called the OpenClaw browser relay, and it can also operate the browser, but it's a bit more manual than the normal Chrome setup, because you have to click to enable it before it starts working. You can see it right there.

So how do you set all of this up? Method number one: go to this address right here, then click and enable "allow remote debugging for this browser instance." Once you've done that, run one of these commands inside your terminal, for example "openclaw browser profile user start." That uses the new update from Google that just came out recently, which you can see right here, allowing your coding agents to debug your browser sessions so that OpenClaw can operate Chrome directly through the Chrome DevTools MCP. That's what allows it to take control.

When you give it a command like "go to Twitter and post a tweet for me" and you say "use a browser profile to do this," it will show you a permission prompt right here. Click "allow," and you can see it's now connected to our Chrome, using the user profile to connect. Once that's done, you can see it working inside Twitter, and it can start drafting the tweet for you.

So step one: allow remote debugging. Step two: give the command inside OpenClaw. Step three: it gives you a popup, and you allow it from there. Method number two is the Chrome extension: enable it, attach it to the relevant tab, and it can control the browser that way too. I think it's a lot easier to just use the Chrome debugging method, which you can use over here.
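Under the hood, "allow remote debugging" means Chrome exposes a DevTools endpoint that an agent can attach to. As a rough sketch of what attaching looks like (the conventional port 9222 and the sample response below are illustrative, not taken from OpenClaw itself):

```python
import json

# Chrome launched with --remote-debugging-port=9222 serves a JSON list of
# debuggable targets at http://localhost:9222/json/list. The payload below is
# an illustrative sample of that response, not live output from OpenClaw.
sample_targets = json.dumps([
    {"type": "background_page", "title": "Some extension",
     "webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/AAA"},
    {"type": "page", "title": "Gmail", "url": "https://mail.google.com/",
     "webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/BBB"},
])

def find_page_target(targets_json: str) -> str:
    """Return the WebSocket URL of the first real page tab (skips extensions)."""
    for target in json.loads(targets_json):
        if target.get("type") == "page":
            return target["webSocketDebuggerUrl"]
    raise LookupError("no page target found")

print(find_page_target(sample_targets))  # ws://localhost:9222/devtools/page/BBB
```

An agent then opens that WebSocket URL and drives the tab over the Chrome DevTools Protocol, which is why your real logins are available to it.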

### [10:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=600s) Segment 3 (10:00 - 15:00)

So how does this work? Basically, this is a new update called live browser control, and it comes from a brand-new feature that Google just added to the latest version of Chrome. Before this update, OpenClaw could only control a fake browser, or it could use your Chrome extension, but you had to attach it manually to each tab. With this new update to OpenClaw and Google's DevTools MCP, using your real browser via the user profile method, your AI can now access everything you're logged into: Gmail, Notion, LinkedIn, all of that sort of stuff. So if you need it to automate something that requires an account, you can now do that with OpenClaw.

Okay, how does this work step by step? As I showed you before: make sure you have the new version of Chrome, then enable remote debugging, then start the user profile from your terminal using one of these commands, and then just check that it worked. For example, you can go inside OpenClaw and say, "Okay, go off and do this" as a test. We said, "Go to twitter.com and write a tweet." It uses the tools, opens up Twitter, and then it's like, "Okay, the dialogue is open, you're logged in as this account. What do you want me to tweet?" That's basically how it works; you can see it in the chat right here.

So it's a really powerful tool and really easy to use. I will warn you that it doesn't work every single time. Sometimes it will fail, sometimes it'll struggle, sometimes it gets confused, and I think that particularly depends on which API you're using. If you're using something like Claude, well, Claude is really good at computer use, so it's going to understand how to navigate your screen better. If you're using something like GLM Turbo, it's not really designed for computer use.

So it may struggle compared to using something like GPT 5.4 or Claude directly. It really depends which API you're using. That's basically it. If you want examples of what you could do, you can say things like "use my browser to check Gmail," "check my Notion," "Google the latest AI news," or "use my real browser to log into a dashboard and take a screenshot."

It's a really powerful update. If you want a full 30-day plan and roadmap on implementing this stuff, you can see it inside the AI Profit Boardroom, along with the video notes from today and 100 prompts you can test with OpenClaw directly. That's all inside the AI Profit Boardroom, link in the comments and description. It's a very simple setup for using OpenClaw with Chrome, but it works really well.

If you want all of the video notes from today, along with an AI automation community designed to help you learn, scale, and grow with AI automation, you can get that inside the AI Profit Boardroom, link in the comments and description. Inside the community, you can ask questions, get help and support, and meet some really cool people. Inside the calendar, you get weekly video coaching calls where you can ask questions and get help. Inside the map, you can meet people in your local area, DM them, meet up with them, and so on. And inside the classroom, you get access to all of my best trainings, including my best OpenClaw trainings right here. Every single day we update this with new trainings and guides based on what just came out, so we're always covering the latest stuff. That way you never fall behind again, because you've got, for example, a six-hour course on OpenClaw right here.

You get all the details on how to use the new GLM-5 Turbo update with OpenClaw, how to use browser use with OpenClaw, and so on. Everything you need from new updates, we condense down, we research for you, and then we show you exactly how to implement it into your business with a 30-day roadmap like you saw before. That's all inside here.

The other thing I was going to show you: at this point, we have over 137 pages of wins, testimonials, and awesome reviews from people. You can see all of those right here. This is just a helpful community with lots of people growing, learning, pushing forward, and using this stuff inside their businesses to actually take action and implement. So I hope to see you inside there. You can personally connect with me in there too, and I'll see you on the next one. Cheers.

Let's see what we've got in the questions here. Pete says, "Can a low-specification laptop run the new update of OpenClaw with Ollama models locally?" It depends which model you're using, and what the laptop is, and that sort of thing. What I would say is that if you're trying to run local models, you usually do need a powerful setup. For example, I run local models via Ollama with OpenClaw, and I do that on a Mac Studio, which is really, really powerful. Now, if you have an older

### [15:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=900s) Segment 4 (15:00 - 20:00)

laptop and a less powerful setup, then you would just run the cloud models with Ollama, which you can still do for free, and they're easier to set up. Big Thunder says, "Do you read YouTube chat replies?" Yeah, I'm reading them right now. Techno says, "Salve, good to see you." Marello says, "Hello from Atlanta." Good to see you. And that's basically it for the replies today. Refund says, "I have a heavy workflow I'm trying to optimize. It'd be great for business. Would love to maybe chat about it." So if you join the AI Profit Boardroom, you can jump on the weekly video coaching calls if you want to chat about it on a call. And if you want to speak to me personally, just DM me once you've joined. That's the only place I actually answer questions and DM people, so feel free to jump in there.

GLM-5 Turbo just made your OpenClaw more powerful than it's ever been. This thing just launched, and it was built from scratch to run inside AI agent tools like OpenClaw. Plug it into OpenClaw and your agents get a serious brain upgrade. Without it, your OpenClaw is barely scratching the surface: you're doing half the work yourself, you're leaving a ton of power sitting on the table unused, and it's going to be way slower. With GLM-5 Turbo, it keeps going. Your agent goes from basic to beast mode. Today, I'm plugging it straight into OpenClaw, and we're going to find out exactly how much better it actually gets. Let's get straight into this.

If you haven't seen this already, this is GLM-5 Turbo. It's designed to run a lot faster. If we say, "Okay, what are you doing right now?" it's using GLM-5 Turbo, as you can see right here, to respond. If we ask, "Okay, what API are you using?" it will reply to us. And bear in mind, you can see it's got a bit more personality right now.
You can see here we said, "What are you doing?" and it says, "Waiting on you, mate." You can also see the model running like so. I've been using this a lot, and it's very cheap to use, as you can see right here: way cheaper than something like Claude, which is awesome. And it has literally just been released. This is a new model from Z.ai (Zhipu AI), a Chinese company, designed for fast inference and strong performance in agent-driven environments. It specifically mentions OpenClaw: this is literally a model designed for OpenClaw, and that's how powerful this is.

Now, what's the difference here, and what does this mean? A brand-new model has just dropped, and it's specifically for agents. Most AI models are built for chatting; GLM-5 Turbo is built for doing. Z.ai just released GLM-5 Turbo, and it's really not like other models, because it was designed from the ground up to work inside AI agent tools like OpenClaw. You might be wondering what GLM-5 Turbo even is. It's a brand-new model made by Z.ai, like a super-smart robot brain trained specifically to use tools and complete tasks. It's really designed for work, and that's the difference: most AI models are great for conversations, while GLM-5 Turbo is designed for work. It was trained on real-world agent workflows, meaning it practiced doing the kind of jobs that OpenClaw does every single day. It has a 200k-token context window, but it is very, very fast to reply. The reason this is a big deal is that most AI models get confused when you give them long, complicated tasks. GLM-5 Turbo was specifically designed to handle that. Here's why.

### [20:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=1200s) Segment 5 (20:00 - 25:00)

Number one, it's got better tool use: GLM-5 Turbo is much better at using tools without making mistakes. As an example, if we go over to the chat where I've been using GLM-5 Turbo, you can see that it can quickly handle controlling my browser. We went in here and said, "Right, okay, go to twitter.com and write a tweet." That's what I asked OpenClaw to do, and it just went off and did it. Here's another example: I said, "What's the price of X?" and it used the browser-use tool and the relay with the Chrome extension, and it came back and used our browser to reply to us. So number one, it's really fast, and number two, it's pretty good at calling tools, which makes it much better at replying.

You can actually see in the usage stats for this model that out of all the apps using GLM-5 Turbo, the top public app this month is OpenClaw. That is by far and away the number one app this model is being used with, and OpenClaw is being made a lot faster with GLM-5 Turbo as well. Bear in mind this is way cheaper than using the API of something like Claude too.

The other thing to note is that it follows complex instructions better. Sometimes you give OpenClaw a long, complicated instruction with many steps, and old models lose track partway through. GLM-5 Turbo can break down complex instructions, plan all the steps, and actually finish the job. It understands time-based tasks as well, so it can let you schedule tasks.
It's faster on long tasks too, because when OpenClaw is running a huge job with lots of data, it needs a model that doesn't slow down. GLM-5 Turbo is optimized for high-throughput, long chains, which means big, complex jobs run faster and more stably.

Now, you might be wondering how this performs on benchmarks. On the SWE-bench Verified test, which measures how well AI can fix real coding bugs, Claude Opus scored 80, GLM-5 itself scored 77, and Gemini 3 Pro scored around 76.2. GLM-5 Turbo sits right near the top of this list as well. You can see how much it costs right here in terms of input and output tokens.

If you're wondering how to set this up, there are a couple of options. You can set up the config text like this, inside a call like this. But the easiest way, I think, is to literally go inside OpenClaw and say, "Okay, using this API key, switch to OpenRouter," and then paste in the documentation for GLM-5 Turbo. That's what I did earlier today: I grabbed the documentation from OpenRouter, plugged that into OpenClaw, gave it the OpenRouter API key, and asked it to set everything up; you can see it working right there. You can always do that in a config file as well, rather than directly inside OpenClaw. Another option is to give the details to Claude Code and say, "Okay, for my OpenClaw installation, add GLM-5 Turbo on OpenRouter with access to it." And the other option is to go directly to Z.ai: it's available on the Coding Plan Max, as you can see right here. If you go to Coding Plan Max, you can get access to GLM-5 Turbo right there. I honestly think it's easier to use via OpenRouter, though.
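For the API route, OpenRouter exposes an OpenAI-compatible chat completions endpoint. Here's a minimal sketch of the request such a setup would send; the model slug `z-ai/glm-5-turbo` is my guess at how the listing might be named, so check OpenRouter's model catalog for the real identifier before relying on it:

```python
import json

# Minimal sketch of an OpenRouter chat completion request. OpenRouter's
# endpoint is OpenAI-compatible; the model slug "z-ai/glm-5-turbo" is an
# assumption, not a verified listing name.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> tuple[dict, str]:
    """Return (headers, json_body) for a single chat completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # OpenRouter API key from settings
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "z-ai/glm-5-turbo",           # assumed slug, see note above
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request("sk-or-...", "Go to twitter.com and draft a tweet.")
print(json.loads(body)["model"])  # z-ai/glm-5-turbo
```

POSTing that body to `API_URL` with those headers is all the "switch to OpenRouter" step amounts to; OpenClaw just automates writing this into its own config.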
If you want to see the documentation on GLM-5 Turbo, you can see it right here as well. In terms of the rollout schedule: if you're a Pro user, it's coming this March; if you're a Light user on Z.ai directly, it's arriving in April. This isn't for the API; it's for Z.ai directly, which has its own kind of agent setup. If you go to Z.ai directly, you can see we've got different models, and GLM-5 Turbo is not on that list just yet. However, if you want to access it straight away, you can start using it inside OpenRouter: go to OpenRouter, go to GLM-5 Turbo, and get an API key from the keys section in the settings. Grab an API key for OpenRouter, plug in the documentation for GLM-5 Turbo, and boom shakalaka, you've now got it connected to OpenClaw. That's how everyone's using it right now.

So it's fast, it's easy to use, and you can see a lot of people using it with OpenClaw. Just to recap: GLM-5 Turbo is a brand-new AI model released on March 15th. It was built specifically for OpenClaw and AI agent tasks. It has a 200K context window and can output up to 128K, which is not as good as

### [25:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=1500s) Segment 6 (25:00 - 30:00)

something like GPT 5.4 or Claude, which just released a million-token context window as well. So it hasn't got the same context window as some of these bigger models, but it's cheaper and it's faster, and that's the difference. It's also designed for exactly this kind of thing: it's literally designed to work inside OpenClaw, and they even mention it in their official announcement. It's better at tool use, complex instructions, scheduled tasks, and long-chain execution than most models, it scores pretty well on SWE-bench, and you can switch to it by changing one line in your settings profile. And the number one user of GLM-5 Turbo in the world right now is OpenClaw itself. That's how it's being used.

Now, if you want a 30-day plan on using GLM-5 Turbo with OpenClaw, you can get that inside the AI Profit Boardroom, along with 100 prompts you can test with it directly. If you want to test out OpenClaw and GLM-5, we've got prompts for research and intelligence, content creation, and more.

So let's test this out now. We'll come up with a couple of examples and show you how this works. We'll copy the information from this page, then go back to OpenClaw here; we're using GLM-5 Turbo. You can switch between models inside the dropdown, by the way: every time you add a new model, you can switch between them, which is pretty cool. Then we're going to say, "Okay, create a beautiful website, plus open it up locally, in basic HTML. Make it look fun and interesting." We can throw this task at it, and it will code up a landing page locally using OpenClaw. If you connected this to something like Skill Boss or Netlify, it could even deploy the website for you. But we'll test how it works step by step. Also, you can go to the daily usage section and see how it's performing.
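The "create a beautiful website plus open it up locally in basic HTML" task boils down to something like the sketch below: write one self-contained HTML file to disk, then open it in a browser. The page copy here is invented for illustration, not what the agent actually produced.

```python
from pathlib import Path

# A tiny stand-in for the agent's output: a single self-contained HTML file
# you can open locally. The text content below is invented for illustration.
html = """<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>AI Profit Boardroom</title>
</head>
<body>
  <h1>AI Profit Boardroom</h1>
  <p>Learn, scale, and grow with AI automation.</p>
</body>
</html>
"""

out = Path("index.html")
out.write_text(html, encoding="utf-8")
# The agent's final step is equivalent to opening this file in a browser:
# webbrowser.open(out.resolve().as_uri())
print(out.read_text(encoding="utf-8").startswith("<!doctype html>"))  # True
```

The real page the agent builds is of course far larger, but structurally it is exactly this: a local file, no server or deploy step required until you push it to a host.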
Here you can check out your latest sessions and see which agents are running, plus which model they're using. We're using GLM-5 Turbo for our agents, and we can switch between primary models over here too.

And that was pretty quick. You can see it's now created the page: it used these tools to create it with GLM-5 Turbo, and it's done. "Open it up in your browser. Here's what it's got." If we actually look at the page, this is what it created. It looks pretty nice and was super easy to create, and we've just built a whole new website for our community with one single prompt, while I've been talking to you.

Let's compare that to the original in split view. Here's the page that OpenClaw built for me previously with Claude, so this one was Claude Opus 4.5 working through OpenClaw, and this other page is GLM-5 Turbo's. In my opinion, the new one looks a lot nicer, a lot more interesting, a lot more fun and engaging than the original built right here. So GLM-5 Turbo is cheaper to use, faster to use, designed for agents, and super quick and easy to set up and run tasks with. And it completed that task

### [30:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=1800s) Segment 7 (30:00 - 35:00)

perfectly, as we said. So that's basically it. If you want all of my setup instructions for using GLM-5 Turbo with OpenClaw, you can get them inside the AI Profit Boardroom, link in the comments and description. This is a super powerful way to use OpenClaw directly with a new model that literally just came out today.

Additionally, this is a helpful community that helps you learn and grow, and it's very supportive when it comes to learning AI automation. Inside the community we have 2,600 members, which means there are always people online ready to help and connect with you. You can ask questions and get help and support whenever you need to, and you can share your wins or what you're working on, which is pretty cool. We have a daily accountability group right here, and a weekly update where I look at all of my research and condense it down into a quick five-minute read. If you want to save time, if you feel overwhelmed with this stuff, or if you get shiny-object syndrome, that's one of the most useful things you can get from the community.

Also, inside our doc here, you can see we have over 137 pages of testimonials from people winning with this stuff, so lots of people getting awesome results. That's what I love about this community: we're just trying to help people, support them, get them wins, and actually help them implement this stuff rather than just learning about it. That's the most important thing. You can see all the wins here; it's pretty much an infinite scroll at this point, with lots of cool people learning lots of new stuff, which is great. On top of the community, you also get weekly coaching calls: live weekly coaching calls where you get help and support with AI automation.
You can jump inside the map and meet people locally in your area. Inside the classroom, you get access to all my best training, so you can go from beginner to expert with AI automation and learn how to build your first AI agent in under five minutes. You can also get the playbooks I personally use for my business in this section: if you want to learn how I automate AI avatar videos, shorts, Instagram, newsletters, Twitter, and so on, all my best playbooks are inside here with video tutorials, prompts, and step-by-step guides. If you want the new daily updates based on what's working today, including the setup instructions, the 30-day roadmap, and 100 prompts on how to use OpenClaw with GLM-5 Turbo, you can get that inside this section. Every single day, we look at what's come out and give you video tutorials and step-by-step guides. There's even a six-hour guide on OpenClaw and another three-hour guide right there. On top of that, you can learn how to get more clients with the agency course, how to rank number one with AI SEO, and how to grow a YouTube channel based on what's working for me, all inside the classroom. So feel free to grab the link in the comments and description.

Jason says, "Yoyo." Welcome here. Yan says, "Is it fast?" Yes, it's very fast. And PN says, "What computer can run it?" I assume you're talking about local models. Again, it depends which local model you're talking about on Ollama. For example, if you want to run GLM 4.7 Flash locally, then you would use a Mac Studio; if you wanted to run something like Gemma 3 4B locally, you could use just a mobile phone. So it really depends what local model you're trying to run.
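As a rough way to sanity-check "will this model fit on my machine" yourself, a common rule of thumb is that the weights take about (parameters times bytes per weight) of memory, plus headroom for the KV cache and runtime. The 20% overhead figure below is a loose assumption, not a measurement:

```python
# Rough rule of thumb for whether a local model fits on your machine: weights
# take about (parameters x bytes per weight), plus headroom for the KV cache
# and runtime. The 20% overhead below is a loose assumption, not a measurement.
def est_memory_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    """Estimate memory needed in GB for a quantized local model."""
    weights_gb = params_billions * bits_per_weight / 8  # 1B params @ 8-bit ~ 1 GB
    return round(weights_gb * 1.2, 1)                   # +20% headroom (assumed)

print(est_memory_gb(4))   # 2.4  -- a 4B model at 4-bit: laptop/phone territory
print(est_memory_gb(70))  # 42.0 -- a 70B model at 4-bit: workstation territory
```

This is only a ballpark; actual usage varies with quantization format and context length, which is exactly why pasting your specs and candidate models into Claude, as suggested above, is a sensible next step.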
But what I'd recommend is that you get your specifications, grab the models from Ollama that you want to use, and then put them into Claude and ask, "Okay, based on my specifications and the models I want to use, which models do you recommend for running on my laptop via Ollama?" That way you can get some really good guidance that's super technical and focused on making sure you get the best outputs from the local models you use.

Oh, there we go. GLM-5 Turbo just launched, and it's built for one thing: running AI agents faster and smarter than anything else out there. This is a brand-new model from Z.ai. It just dropped on OpenRouter, and here's why it matters. Most AI models are slow: they break down when you give them a long, big, complicated task, and they forget what they're doing halfway through. GLM-5 Turbo doesn't do that. It was built from the ground up to handle long, messy, multi-step agent tasks

### [35:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=2100s) Segment 8 (35:00 - 40:00)

without falling apart. It can use tools, it can plan, and it can keep going. It's actually used my browser inside OpenClaw for some tests. It runs inside OpenClaw, it was actually designed for OpenClaw, and it's super fast. Today I want to put it to the test: we're going to see what it can actually do, and if it performs the way it's supposed to, it could be one of the best agent models you can plug into your setup right now. Let's get straight into this.

If you're not familiar with GLM-5 Turbo, it was created by Z.ai and just dropped recently. Previously there was a model called GLM-5, but this is a much faster version of it. You can see here it's got a 200k context window, which is not ideal, but it is pretty cheap and pretty fast. And if the context window is not that big, that's okay, because it runs quickly, and you also have good compaction systems inside AI agents and OpenClaw.

So if you're wondering what GLM-5 Turbo is: it's a brand-new AI model made by a company called Z.ai, released on March 15th, 2026, literally yesterday. It's built specifically to work inside AI agent tools like OpenClaw. Think of it like a brain that was trained from the very start to do things, not just talk. What I mean is that inside ChatGPT you get responses; with something like GLM-5 Turbo inside OpenClaw, it's great at actually completing long, complex tasks autonomously. Why should you care? Well, if you use OpenClaw, this model was literally built for you. It scores 77.8 on SWE-bench Verified, one of the hardest coding tests in the world. That puts it just behind something like Claude Opus and ahead of Gemini 3 Pro, which means it can solve real software engineering problems better than most AI models on the planet.
If you've never used this before, let me run through exactly how to get access. You can go to OpenRouter, open the chat there, and start using it. We could put it head-to-head with something like Claude Opus: if we type in GLM-5, we've got GLM-5 Turbo right there. To test it, we can take an example prompt like this. We grab the information from our AI Profit Boardroom community, go back inside here, click on "create artifact," then select interactive app, or actually let's select landing page, and say, "Okay, redesign plus create a beautiful website for this." Then we can test this head-to-head with Claude Opus 4.6 and GLM-5 Turbo.

I actually did this earlier today with OpenClaw directly. You can see here I put in the same prompt and asked it to create a website for me: "Create a beautiful website for this, for the AI Profit Boardroom." It used its tools and quickly started building it out. What's interesting is that I gave it the prompt at 8:59 a.m., and by 9:00 a.m. it had already created the page and deployed the tool. Pretty insane. So if we say, "Okay, open this up," what this has done is build a better version of my old website in pretty much one single click, within 60 seconds. That's how fast this works, and it works directly inside OpenClaw as well. You can see this is the new page that GLM-5 Turbo built, and this is the old page we had running previously. If we compare, for example, this section on the old version versus the new version: the new one is animated, it's got a nice gradient, it looks a lot more interesting, it's got different colors, a better design, a better UI.
But the difference here is that we did this with GLM-5 Turbo in 60 seconds, as you've seen from the timestamps inside OpenClaw. Pretty crazy stuff. So we're talking about something that's fast, good, and really powerful to use. If you look here, they're competing head-to-head, side by side, and GLM-5 Turbo is definitely holding its own against other models like Claude.

Now, there are a few different ways you can access GLM-5 Turbo. You can get the Z.ai coding plan, which is the cheapest if you're a heavy user. There are three plans on Z.ai: Light, Pro, and Max. GLM-5 Turbo is currently available on the Max plan already, and it will come to the Pro plan by the end of March 2026; the Max plan costs this much, as you can see. If you're already on a plan that supports it, you can just update your config file to switch models like this. The other option is OpenRouter, which is what we used with OpenClaw, so if you don't want a subscription, you can access it via Open

### [40:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=2400s) Segment 9 (40:00 - 45:00)

Just create an API key on OpenRouter and you're good to go. You can also use it directly with a Z.AI API key: use the endpoint like this, set your model to GLM-5 Turbo, and go from there; here's a quick-start code snippet on how to use it. So, is it worth switching to? If you're an OpenClaw user who runs long, complex automations and you usually use an API, I would definitely say yes. It's faster, it works really well, and it calls tools really well. A few things to note, though. GLM-5 Turbo consumes more tokens than older GLM models, so if you're on a Z.AI coding plan, be aware that your tokens will go faster. It's a text-only model right now, and the context window isn't that big either. Something interesting as well: the thinking and reasoning mode inside GLM-5 Turbo can be turned on and off depending on your use case. For fast tasks, at the fastest possible rate, you would switch thinking mode off; for complex multi-step tasks, like building a website as we just did a second ago, you would leave thinking on. So let's come back to OpenRouter now and see where it's up to with building out these pages. This is Claude Opus, and as you can see right here, it does look pretty nice, super nice even. But if we go back to the page that GLM-5 did, it's still pretty nice. The only thing I don't like about this is the big space over here, but it's not bad at all; it definitely holds its own. And if we look at this section that Claude Opus 4.6 generated and compare it to GLM-5 Turbo's, this section probably looks nicer: better designed, cooler emojis, better animations, and so on. So it does hold its own against these other models, as you can see.
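The quick-start code itself isn't legible on screen, so here is a minimal sketch of what calling GLM-5 Turbo through an OpenAI-compatible chat endpoint with the thinking toggle might look like. The endpoint URL, the model ID, and the shape of the `thinking` field are assumptions based on how Z.AI's GLM APIs are generally described, not details confirmed in the video:

```python
# Sketch: GLM-5 Turbo via an OpenAI-compatible chat-completions endpoint.
# URL, model ID, and the "thinking" field shape are assumptions.
import json
import urllib.request

ZAI_ENDPOINT = "https://api.z.ai/api/paas/v4/chat/completions"  # assumed

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat request body; thinking=False trades reasoning for speed."""
    return {
        "model": "glm-5-turbo",  # assumed model ID
        "messages": [{"role": "user", "content": prompt}],
        # Toggle reasoning mode: off for fast one-shot tasks, on for
        # complex multi-step work like building out a whole website.
        "thinking": {"type": "enabled" if thinking else "disabled"},
    }

def send(api_key: str, body: dict) -> dict:
    """POST the request body with a bearer token and return the JSON reply."""
    req = urllib.request.Request(
        ZAI_ENDPOINT,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

fast_body = build_request("What's the price of eggs today?", thinking=False)
```

The same request shape should work against OpenRouter by swapping the base URL, key, and model slug, since both expose OpenAI-style chat completions.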
So, just to recap: GLM-5 Turbo dropped March 15th, built specifically for OpenClaw and AI agent workflows. It has a 200k context window, scores 77.8% on the SWE-bench benchmark, and runs at 52 tokens per second, so it's very, very fast. It's already powering 1.36 billion tokens on OpenClaw, and it's available now on Z.AI's Max plan and on OpenRouter as well. Now, if you want a full implementation plan with step-by-step prompts like you see right here, plus a full 30-day road map, you can get that inside the AI Profit Boardroom; the video notes link is in the comments and description. We've also got 100 prompts in there that you can test out with GLM-5 Turbo, inside OpenClaw or just inside OpenRouter, depending on how you want to use it. Basically, it's a really powerful tool and it works really well. So thanks so much for watching. If you want all the video notes from today, you can get them inside the AI Profit Boardroom, link in the comments and description, or go to aiprofitboardroom.com. This is a helpful, supportive community focused on helping you save time at scale with AI automation. Inside the community, you get help and support from 2,600 members; you can ask questions, there are always people online, and you can connect with some awesome people. We have a daily accountability group, so if you want to post your goals and work towards them, this is one of the best places to do it. You also get a weekly breakdown from me on what's worth paying attention to and what you can ignore. And inside the calendar, we do weekly video coaching calls where you can get help and support on real live calls. You can jump inside the map, meet people in your local area, DM them, meet them in real life, and so on. And inside the classroom, you get access to all of my best trainings.
Now, at this point, you might be asking: what are people's results and experiences like? We actually have over 137 pages of testimonials from people winning with this stuff, as you can see right here. Lots of cool wins, lots of people getting awesome results, which is what we like to see, and you can see all the positivity going on here. Also, inside the classroom, you get access to all of my best trainings. For example, you can go from beginner to expert with AI automation in just 5 weeks and learn how to build your first AI agent in under 5 minutes. You can also get my playbooks on how I automate Twitter, newsletters, shorts, Instagram, and AI avatar videos; all of these come with step-by-step guides, playbooks, and video tutorials. Additionally, inside the SOP update section, you'll get daily updates, including the full video notes from today, a full six-hour course on OpenClaw, and more.

### [45:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=2700s) Segment 10 (45:00 - 50:00)

There's also a three-hour course right here. And every single day, we look at what came out, what's most useful for you, and how you can implement it inside your business. As you can see, we're adding new upgrades and updates all the time, looking at what's actually useful, and giving you step-by-step instructions, prompts, and 30-day road maps to implement this stuff. If you want to learn how to get more clients, check out the agency course. If you missed the coaching calls live, you can watch them back here. If you want to learn how to rank number one in Google and other AI chat engines, you can check this out. And you can learn how to grow a YouTube channel with AI automation right here. That's all inside the AI Profit Boardroom, link in the comments and description, or just go to aiprofitboardroom.com. Let's see what questions we've got here. "What model would you recommend to run on an RTX 3060?" I would ask Claude that, not me; I'm not an expert with technical stuff like that, and I don't have an RTX, so I couldn't tell you. I can tell you what runs on a Mac Studio, but not on an RTX. "How is GLM-5 Turbo with SEO?" I'm not 100% sure on that, but I have ranked with OpenClaw before, and even on older APIs we ranked really well. Here's an example: I actually created an article with OpenClaw and posted it to WordPress right here. You can see it ranking number one on Google, and it's also ranking inside the AI Overviews. This article was fully generated with AI. This was using OpenClaw: you can connect WordPress to OpenClaw, publish the article, and rank it directly via your WordPress, all automatically, just by giving OpenClaw the password.
If you want to learn exactly how we do this, just type in "WordPress" here, and inside this section you'll see how we use OpenClaw with WordPress, with a step-by-step video and a step-by-step guide on how to connect them both. That article was actually ranked with Kimi K2.5, but you could use GLM-5 Turbo; I think it'd be a lot better. Nerd says it still kind of feels like AI. It depends what you're using it for, I guess, but you just want to give it good skills to train it up and help it improve. Jazz says thank you; always happy to help, and thanks very much for watching. Now, Ollama just became an official provider for OpenClaw. That means you can now run a fully personal AI system, completely free, with one single command. OpenClaw is your private AI that lives on your phone and connects to your WhatsApp or whatever you use. And until today, getting Ollama working inside OpenClaw was quite difficult: you had to dig through config files, setup, endpoints, and so on, and hope it didn't break. Not anymore. As of right now, one command sets the whole thing up. Every single model on Ollama works inside your OpenClaw. And if your computer isn't powerful enough to run models locally, no problem: Ollama's free cloud models work too, so you get the same experience without needing fancy hardware. This is a personal AI assistant that runs on your terms. If you're running it locally, your data stays private, your models are free, and your agent is live inside your chat app 24/7. So we're setting this all up together today. I'm going to show you exactly what it looks like, how fast it runs, and which models actually work well for this. Let's get straight into it. So, this is the official announcement. Now, if you want to set this up with OpenClaw, you can just use this command inside the terminal. You can see here they've said all models from Ollama will work seamlessly with OpenClaw.
You can use it for whatever task you want. And they actually dropped a shout-out to Peter Steinberger, the founder of OpenClaw, for setting this up. Here's an example of what the onboarding looks like. So, let's run straight into it. We've got the terminal running now, and let's run the command to get Ollama working with OpenClaw. All right, and from here we can run through the setup.

### [50:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=3000s) Segment 11 (50:00 - 55:00)

We'll do the quick start using the existing values, and then it starts setting up Ollama, as you can see right here. We can change the base URL, which we're going to do, and then we can choose whether we want to use cloud or local models. Here's what it looks like: you get an option between cloud and local, and you can select cloud-and-local or local-only. If we go with cloud and local, it selects the default model for us, which is Kimi K2.5 Cloud, but you can use whatever you want. When you're running through the onboarding, that's how easy it is to set up Ollama, and then you can restart the gateway here. Previously we were using GLM-5; now we're using Kimi K2.5 Cloud, and you'll see that in the TUI. And now it's running OpenClaw with Ollama. That's how easy it is to get it set up. Now, if we go over to OpenClaw, we have this running directly with Kimi K2.5 Cloud on Ollama. One thing to note is that there is a big difference between using a cloud model and a local model; you can use both inside Ollama. I went with the cloud model simply because it's faster to show you on a live video like this, but you can use local models as well. Now, some of my favorite local models to run with this would be GLM-4.7 Flash. If we go back to Ollama here, there's a bunch of different models you can use with this. Nemotron is pretty good, Nemotron 3 Super, but that's a cloud model you can run; I wouldn't run the local version because it's an absolute beast at 87 GB, a bit too big. You can also use Qwen 3.5; that's another option. And you can actually get really small models to run with this. I've got some videos inside the AI Profit Boardroom on how to do this, but essentially you can get these smaller models and run them inside Ollama instead, and you can switch between them. Now, that one is a huge model at 81 gigabytes, so you probably wouldn't use that, but you can see there are some smaller models here, like 6.6 GB, and that will run on a smaller computer. One thing to note when you're running models from Ollama locally: the bigger they are, the more power your setup requires. For example, GLM-4.7 Flash runs well on my Mac Studio, but my Mac Studio is super powerful. If you were running on something really tiny, say an old laptop, then you would use something like Gemma, which is designed for smaller devices, and I don't know if that would work so well with OpenClaw. So in those situations, you would use a cloud model instead. And that's my point: depending on your setup, use local if you've got a powerful machine like a Mac Studio; if you've got something old, or a laptop that isn't powerful, just use a cloud model. Bear in mind the cloud models are still free to use up to certain token limits. So if we go inside here and ask, "Are you using Ollama?" just to check how fast it is, and we go to our agents over here, you can see this is running on Ollama now too. If we go inside the chat, it says, "Yes, I'm on Ollama right now," and it's pretty quick to respond: we asked the model the question at 9:26 and it replied at 9:26, so it's replying basically instantly. Now, back in Ollama, one thing to note is that there are limits on cloud models. It's easy to set up and it's pretty quick, but you can see here that we do have a session usage limit. I create loads of videos about this stuff and show you how it works all the time on YouTube, but you can see here that I'm nowhere near hitting my weekly limit, and that resets all the time. So just something to bear in mind: yes, there are limits with cloud models. Will most people reach those limits?
Probably not. So that's basically how to set this up, how to run it with Ollama, and how easy it is to use.
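The local-versus-cloud guidance above can be sketched as a simple rule of thumb. The model names, size figures, and the 1.5x RAM headroom factor below are illustrative assumptions drawn from the discussion, not Ollama recommendations:

```python
# Rule-of-thumb sketch: run a model locally only if the machine has
# comfortably more memory than the model needs; otherwise fall back to
# a free cloud model. All names and sizes are illustrative assumptions.
MODEL_SIZES_GB = {
    "glm-4.7-flash": 30,  # assumed footprint; runs well on a Mac Studio
    "gemma": 6,           # small model aimed at modest hardware
}

CLOUD_FALLBACK = "kimi-k2.5-cloud"  # assumed cloud model tag

def choose_model(ram_gb: float, preferred: str = "glm-4.7-flash") -> str:
    """Pick a local model if RAM allows (with 1.5x headroom), else go cloud."""
    needed = MODEL_SIZES_GB.get(preferred)
    if needed is not None and ram_gb >= needed * 1.5:
        return preferred       # powerful setup: run locally, data stays private
    if ram_gb >= MODEL_SIZES_GB["gemma"] * 1.5:
        return "gemma"         # modest hardware: a small local model
    return CLOUD_FALLBACK      # old laptop: use the free cloud model

print(choose_model(128))  # Mac Studio class -> glm-4.7-flash
print(choose_model(8))    # old laptop -> kimi-k2.5-cloud
```

The exact threshold matters less than the shape of the decision: local when the hardware clearly fits the model, cloud when it doesn't.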

### [55:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=3300s) Segment 12 (55:00 - 60:00)

One more thing to note: this used to be much harder to configure, and now it's just a quick, simple onboarding. Why is this important? Well, Ollama just became an official provider for OpenClaw, and that means all Ollama models now work with OpenClaw. The official announcement went out on the 16th of March, so this happened very recently. And this is really big for anyone who wants free, private, local models. I've shown you how to set it up, but the first thing you need to do is download Ollama. Then you run this command inside your terminal, and then you pick your model. Some really popular choices right now are Qwen 3.5 9B, Kimi K2.5, and a bunch of others right here. Now, there's also something else called Hermes. I've not really tested it with OpenClaw, but I'd be interested to know how it performs, because Hermes is becoming one of the most popular agents; it's kind of competing with OpenClaw. But you can actually get the API for it directly via the cloud from Ollama as well, so that's something else to check out. For each of these models, you can also run them inside code, so this is not just for OpenClaw users; you can run them inside Codex and OpenCode as well. Ollama is super powerful for a lot of different ways of using AI. And if you don't want to run it locally, no problem: just use the cloud models instead; they're way quicker and easier to set up as well. Now, some things to know before you set this up. Number one is latency: sometimes you're going to get slower responses with local models. This is normal; it depends on your computer's hardware. The faster your CPU and GPU, the faster your responses. If it feels slow, just use Kimi K2.5 instead. Also, not all models are great for agents.
Ollama confirmed that many older models are not that great at agent tasks, so I would stick to Kimi K2.5 Cloud for most of those; and vision models are supported as well. And that's basically it; that's how it works. You can use this inside OpenClaw locally, and you can use it completely offline. So, just to recap: Ollama is an official provider for OpenClaw, the setup takes one command, and all Ollama models are supported. The best models for agents right now are probably Kimi K2.5 Cloud and Qwen 3.5. If you've already set this up manually, just keep your existing setup; you don't need to change anything. If you don't want to run it locally, you can use Ollama cloud models. There is some latency on local; if it's slow, just switch to Ollama cloud. And this makes running free AI agents easier than ever before. So that's basically how to use it. You can get all the video notes from today inside the AI Profit Boardroom; we have tons of training in there on how to use Ollama with OpenClaw. For example, we have a full setup guide on GLM-5 and OpenClaw, NVIDIA Nemotron Super 3 with Ollama, and MiniMax as well. This is all inside the AI Profit Boardroom, link in the comments and description. It's my AI automation community that shows you how to save time, grow, and scale with AI automation. You can get help and support in there and connect with different members; we have over 2,600 members inside the AI Profit Boardroom, so it's a very active community with lots of cool stuff going on. As you can see right here, you get a daily accountability group and my weekly updates, where I break down all the latest updates, what's useful, what to ignore, how to save time with it, how to implement it, and so on. Now, inside the calendar, you also get weekly video coaching calls so you can get help and support in real time. And if you're wondering whether this stuff actually works and is actually useful, we actually have over 137 pages of testimonials.
That's not 137 testimonials; it's 137 pages of testimonials. So, a lot of awesome people just getting awesome wins in there. We're all learning, supporting, growing, and helping each other; it's a very positive, welcoming community. And the cool thing inside here, too, is you get access to the map, where you can meet people in your local area, DM them, connect with them, and even meet up in real life if you want. Now, inside the classroom, you've got a five-week AI automation masterclass that takes you from beginner to expert, and you can learn how to build your first AI agent in under 5 minutes. You can also get my best playbooks on how I automate, for example, AI avatar videos, shorts, Instagram, newsletters, Twitter, and everything else. You can also get my daily SOP updates; for example, yesterday we covered how to automate your browser with AI using OpenClaw, plus a six-hour course. You can see that we update this daily with video tutorials and new guides. You can also learn how to get more clients with the agency course, watch back the coaching calls if you miss them, learn how to rank number one with AI SEO inside Google and AI search engines, and learn how to grow a YouTube channel with AI based on what's working for me. So that's all inside the AI Profit Boardroom, link in the comments and description.

### [1:00:00](https://www.youtube.com/watch?v=VsWDJpswOdk&t=3600s) Segment 13 (60:00 - 61:00)

Or go to aiprofitboardroom.com. Billy says he got his Mac Studio, just joined, and is asking about a GLM-5 Turbo download. As far as I know, GLM-5 Turbo is not a local model, but you can use GLM-4.7 Flash. Maybe in the future they'll release it as an open-source model so you can download it and use it locally too; that would be pretty insane, and I'm sure it will come, actually. And then Kade asks, "Can you get your OpenClaw to build a new website for you?" Yeah, 100%. I've already done that earlier today, actually. Here's an example: inside OpenClaw, I gave it the details for my AI Profit Boardroom website, then said, "Create a beautiful website and open it up locally." This was using GLM-5 Turbo, and within 1 minute it had already created the page and opened it up, and it actually looks better than the original. So you can build websites and landing pages with it, you can connect it to WordPress, and you can do SEO with it as well. We actually have a lot of training on this: if we type "WordPress OpenClaw" into the search bar at the top, you can get access to my trainings on how to use WordPress with OpenClaw right here. It comes with a video tutorial and a step-by-step guide on exactly how to automate this stuff, based on how I rank on Google using OpenClaw, with all my best trainings right there. So, thanks so much for watching. Appreciate you as always. I'll see you on the next one. Cheers. Bye-bye.

---
*Source: https://ekstraktznaniy.ru/video/11093*