Ollama + Claude Code is INSANE! (FREE Local AI Coding) 🤯

Julian Goldie SEO · 18.01.2026 · 18,954 views · 244 likes · updated 18.02.2026
Video description
Want to make money and save time with AI? Get AI Coaching, Support & Courses 👉 https://www.skool.com/ai-profit-lab-7462/about Get a FREE AI Course + 1000 NEW AI Agents + Video Notes 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553/about Want to know how I make videos like these? Join the AI Profit Boardroom → https://www.skool.com/ai-profit-lab-7462/about Get a FREE AI SEO Strategy Session: https://go.juliangoldie.com/strategy-session?utm=julian Running Claude Code with Ollama: Easy Integration with Local and Cloud Models In this episode, we explore how to use Claude Code with Ollama to run both local and cloud models effortlessly. We provide step-by-step instructions to set up Ollama, integrate models like GPT-OSS and Qwen3-Coder with Claude Code, and demonstrate the cost savings and benefits of using open-source models. Plus, discover the AI Success Lab community for additional resources and support, complete with a 30-day roadmap and empowering beliefs to maximize your AI automation potential. 00:00 Introduction to Claude Code and Ollama Integration 00:27 Setting Up Ollama and Claude Code 01:19 Configuring Models and Running Tests 04:23 Benefits of Using Local and Cloud Models 07:16 Empowering Beliefs and Practical Applications 09:33 Community and Resources 11:43 Q&A and Final Thoughts

Table of Contents (7 segments)

  1. 0:00 Introduction to Claude Code and Ollama Integration (104 words)
  2. 0:27 Setting Up Ollama and Claude Code (189 words)
  3. 1:19 Configuring Models and Running Tests (693 words)
  4. 4:23 Benefits of Using Local and Cloud Models (642 words)
  5. 7:16 Empowering Beliefs and Practical Applications (526 words)
  6. 9:33 Community and Resources (473 words)
  7. 11:43 Q&A and Final Thoughts (84 words)
0:00

Introduction to Claude Code and Ollama Integration

Today we're going to be testing out Claude Code with Ollama. So essentially what you can do now is you can plug Ollama directly into Claude Code to run Claude Code and other tools with open-source models. So we're going to test this out and see how it performs. You can also use cloud models too, or you can run local models on your machine and then run these directly through Claude Code. This is how you get started. These are the instructions. We're going to just test this out and see how it performs. All right. Now, the first thing you want
0:27

Setting Up Ollama and Claude Code

to do is you want to make sure that you actually have Ollama running. So, you want to make sure that you have Ollama set up like you can see right here. So, you just download it for free. And then once you've downloaded it, you can actually run this with local models on your machine. Right? So, if we open up Ollama here, we've got local models, but we want to integrate this directly into Claude. So, let's get started with this. Right? So, we're going to go in and start using Claude Code. If you haven't already got Claude Code set up, you can run it locally with these instructions. And then you're going to configure Ollama to run directly with your code. If you want all the links and the resources from today to just copy and paste this stuff, plus a 30-day plan, you can get that inside the AI Success Lab, which comes with a community of 46,500 people and all my best free stuff, including the full setup for Ollama and Claude Code. So, let's get straight into this. We're
1:19

Configuring Models and Running Tests

going to make sure that we have Ollama running in the background. Let's see. Then, we're just going to update it. Then I'm going to open up the terminal, and you can see we've got GPT-OSS running directly with Claude Code like you can see. All right. So how does this work? Let me just replay that for you. I'm going to just terminate this terminal window, and we'll set up again just to show you. First of all, you need to have Claude Code installed like you can see. Then from there you're going to configure Ollama, which you can do with these copy-and-paste instructions inside the AI Success Lab. And then from here you're going to choose which model you want to run locally. And you change your model based on this bit right here. So you do claude --model and then the model that you want to use, right? Or if you want to use a cloud model, you can use it like that, right? So for example, if we go inside here and we're like, okay, we want to use GPT-OSS with Claude Code, you can run it like that, and then just make sure that you have the model installed. So if we wanted to install, for example, GPT-OSS, we would download it like so. Let's say we want to use the 20B version, we can just download it here and it would download the model first, right? One of the fastest models that you can actually use is Gemma. So I've got Gemma 4B here, and the one that I'm going to test is Qwen3-Coder like you can see, and we can also test out GPT-OSS as well. We've tested this out. This is the Ollama integration with Claude Code locally. So let's try this out with the latest models. So we've pulled in Qwen3-Coder. Let's try it out now. Boom. There we go. Right. So now it's working. So you can see here, for example, we're actually using a cloud model. I'm using GPT-OSS 20B cloud like you can see.
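The pull-then-launch sequence described above can be sketched as a dry-run script that prints each command instead of executing it, so you can review the sequence before pasting it into a real terminal. The model tags below (`gemma3:4b`, `qwen3-coder`, `gpt-oss:20b`) are assumptions based on the names mentioned in the video; confirm the exact tags on ollama.com before pulling.

```shell
#!/bin/sh
# Dry-run helper: echo a command with a "+" prefix rather than running it.
run() { echo "+ $*"; }

run ollama pull gemma3:4b     # small, fast local model
run ollama pull qwen3-coder   # the coding model tested in the video
run ollama pull gpt-oss:20b   # 20B open-source model (local variant)

run claude --model qwen3-coder   # then launch Claude Code against it
```

Dropping the `run` prefix turns the sketch into the real setup; the downloads happen once, and Claude Code then uses whichever model you name.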
And you can see here, for example, it's actually plugged that model directly into Claude Code, which means that we're using the API and the tokens directly from Ollama, which if you're running a local model would cost you nothing. And then you can see here, for example, it's set up and ready to go. So just to recap on the instructions right there and how to set this up. First thing you want to do: make sure that you've got Ollama running with the model. So for example, we've selected GPT-OSS 20B cloud right there. Right, first thing. Second thing, you want to configure Ollama with this, right? So export the Ollama settings and just configure it right there to run with Claude Code. Then what you want to do is you want to make sure that you've used this terminal command to select the right model. So for example, we put claude --model gpt-oss:20b-cloud, right? So for example, if we go back into the terminal now and we plug that in, it's going to say, okay, I'll need permission with your files, this means I can blah blah. We'll hit yes, continue. And then you can see we now have GPT-OSS 20B cloud running directly inside Claude Code, which is really cool. And then let's see if this works just as a little test. Boom. It's beginning to code out now using the Ollama model. But we've configured Claude Code, which normally runs from like Claude Opus and that sort of thing and can cost a lot of credits. We've configured Claude Code to run with this local model, or to run with a cloud model as well. And you can use this with whatever API you want to use, but I think Qwen3-Coder would probably be one of the best ones. And then also using something like GPT-OSS would be one of the best ones. Then we can begin to code this project out. And you can see it's beginning to code out right now.
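The "export and configure" step in the recap presumably amounts to pointing Claude Code's Anthropic-compatible client at the local Ollama server. A minimal sketch follows; the variable names, placeholder key, and port are assumptions inferred from the video's description, not verified against current Ollama or Claude Code documentation.

```shell
#!/bin/sh
# ASSUMPTION: Ollama exposes an Anthropic-compatible endpoint on its
# default port (11434) and Claude Code honors ANTHROPIC_BASE_URL.
export ANTHROPIC_BASE_URL="http://localhost:11434"
export ANTHROPIC_API_KEY="ollama"   # placeholder; a local server ignores it

echo "Claude Code endpoint: $ANTHROPIC_BASE_URL"
# Then select the model per session, e.g.: claude --model gpt-oss:20b-cloud
```

With a local model behind that endpoint, token usage costs nothing; with an Ollama cloud model, usage counts against your Ollama plan instead of an Anthropic subscription.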
4:23

Benefits of Using Local and Cloud Models

Now you might be wondering, okay, like what are the benefits of this? Number one, there's obviously cost benefits, right? So you don't need a Claude subscription. You can just use Claude Code for free with open-source models. So there's no API costs. And also you can use this with cloud options, right? So you can access Ollama's cloud models when you need more power. So for example, if you wanted to use something like MiniMax, you can actually use MiniMax M2 as a cloud model as you can see right here. And this is like pretty much on par with Opus, right? And so you can code with this, or you could code with GLM, etc., using the cloud models from Ollama. On top of that, obviously you've got privacy and control. If you're using a local model, not a cloud model, then you've got local-first workflows, more privacy, more control. And also you get flexibility here, right? So you can swap between, like, local and cloud models. You get the best of both worlds. You can iterate quickly with local models or use cloud models for heavy tasks. You can use the Claude Code interface or any other tool that supports the Anthropic API. And you're not locked into a single provider or model. And then also there's technical advantages, right? So for example, open-source models. You get access to cutting-edge open-source models, but you get full Anthropic API capability, extended thinking mode, and the same interface, right? So if you're familiar with Claude Code but you don't want to rinse your credits, this is one of the easiest ways. And also you get faster iteration, so you can test your local models without internet. You get offline capability, resource control, and also you get to experiment more with this stuff too. And then how would you use it in reality? For example, like coding projects, learning stuff, production, and then testing as well. So a really cool, free, easy way to run local models inside Claude Code as you can see right here. So, we're building out this project right here.
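The local/cloud flexibility described above can be wrapped in a tiny shell helper that maps a task profile to a model tag. The function name and tags are illustrative examples drawn from the models mentioned in the video, not part of any official tooling.

```shell
#!/bin/sh
# Map a task profile to an Ollama model tag; swap the final echo for a
# real `claude --model` invocation once the tags are confirmed locally.
pick_model() {
  case "$1" in
    fast)  echo "gemma3:4b" ;;          # quick local iteration
    code)  echo "qwen3-coder" ;;        # stronger local coding model
    heavy) echo "gpt-oss:20b-cloud" ;;  # Ollama cloud model for big jobs
    *)     echo "unknown" ;;
  esac
}

echo "claude --model $(pick_model code)"
```

This keeps the swap a one-word decision per session: fast local models for iteration, a cloud model only when the task needs the extra power.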
The cloud models with Ollama, they do have limits, but the limits are pretty generous to be honest. So, you can see our cloud usage right here, and you get an hourly and a weekly usage. So, you can see I've not even used 1% of my weekly usage, and I use Ollama almost every single day. So, pretty powerful stuff right there. You can also search loads of different models, or you can just flick through here and then decide which one you want to use directly with Claude Code. And what I've actually created is a framework called the Free AI Agent Stack, right? Because most people think you need, like, expensive AI subscriptions to build automations, but they're wrong, right? Because with Ollama's new Anthropic API capability, you can run the same powerful AI agents that businesses spend a lot on, but you can run them for free on your own computer using Claude Code, right? And until now, if you wanted to use Claude Code, which is one of the most powerful AI coding tools, you needed a Claude subscription. But now you can run it with free open-source models locally on your machine, which is pretty powerful. I've also created a 30-day road map on exactly how to use this stuff. So if you want to learn, okay, what can you build, how can you build it, what are some of the best use cases, how could you build, for example, like a customer support generator or, for example, a competitor analysis tool, maybe a video generator. I've got loads of ideas inside the 30-day road map of the AI Success Lab. Link in the comments and description. Now let's run through some
7:16

Empowering Beliefs and Practical Applications

empowering beliefs that you need to win with this stuff. Right. So the old belief, for example, was like AI coding tools are too expensive. And this is what most solopreneurs think, right? They think they need to buy monthly subscriptions for AI coding tools. Some people say they can't afford to use Claude Pro or that sort of thing, and they're stuck doing everything manually. But that's the old way. Here's the truth: with Ollama's new Anthropic API capability, you can run the exact same AI coding agents free on your computer, right? Think about this. Imagine you're, like, for example, training for Muay Thai. You could pay for an expensive gym membership, or you could train at home with the same techniques for free. The techniques don't change, the results don't change, but the location changes, right? And that's the same thing with this. It's like the AI models are just as powerful. They run locally, no monthly fees, and you don't need to pay for anything. So, the new belief really is like free local AI models give you the same automation power as expensive subscriptions without the recurring fees. Also, some people think you need to be technical to use AI coding tools. And most people look at code and think, I don't understand programming. I'm not technical enough for this. I'll never figure out how to use these tools. That's wrong. Here's what actually happens: like, you copy two commands, you paste them into your terminal, you talk to the AI in plain English, and it builds everything for you like you've seen today. Right? So, if we have a look inside the terminal here, we've built out the SEO calculator tool as you can see right here. And we did it all in plain English. We just say build this tool out. Right? So, whatever app or idea or website or game you want to build, you can do it directly inside Claude Code using these local models. So it's like having a translator who speaks fluent code, right? You speak in English, it writes the code, you get the results, right?
And even if you have zero coding experience like me, you can build entire AI automation systems using this, right? You describe what you want, the AI builds it, you're done. So the new belief really is that AI coding tools work through conversation, not programming language. If you can describe what you want, AI can build it. A lot of people say local AI models are not as good as cloud models. And people hear free local models and assume they must be weaker versions, or the good stuff requires subscriptions, or you're getting a watered-down experience. But it's not really true anymore, right? So you look at, for example, like models like Qwen3-Coder, you've got DeepSeek, you've got, for example, GPT-OSS, you've also got, for example, MiniMax, right? Directly inside Ollama. These models are way more powerful. So really the new belief here is like modern local AI models match these APIs' performance whilst giving you privacy, speed, and more usage. So feel
9:33

Community and Resources

free to get all the video notes from today inside the AI Success Lab. This is a free community that connects you with 46,500 people like you can see. And if you go inside the classroom here and then you go to this section, you can find the video notes from today, including the Free AI Agent Stack framework, the 30-day road map, the limiting beliefs that I've covered today, and then also over 100 prompts that you can use to build and test this stuff out. Right now, if you haven't, check out the AI Profit Boardroom. This is an amazing community where we win, we learn, and we grow together. And everyone inside here is really scaling their business and growing with AI automation. So, if you want to save time, automate your business, if you want to learn AI automation and how to apply it in a practical way, then feel free to get the AI Profit Boardroom. It comes with an amazing community of 2,100 people. You can see it's not just me that's winning with this stuff. So, for example, like Steven Simpson just posted, I've worked a lot in MiniMax and it's been great so far. Loads of people getting great results with MiniMax actually. And you can see, for example, people just sharing what they're doing. So, for example, Steve is crushing his 30-day YouTube challenge. You've got, for example, Mike creating his first custom GPT. Like, loads of people who have never used AI before are absolutely crushing it with AI automation. Inside here, inside the calendar, you'll get four coaching calls a week, where you can get live support, help, and ask any questions live on a call. If you can't make them, you can watch back the coaching calls inside the classroom right here. And if you want to learn AI automation from scratch, we actually have this complete beginner's course that takes you from beginner to expert with this stuff, and also how to build your first AI agent in under 5 minutes.
On top of that, you get all my best playbooks, for example, for AI automation, for avatar videos, for automating social media, newsletters, shorts, etc. Inside the classroom as well, you get a course on how to get more clients. You'll learn how to do AI SEO as well. And pretty much anything you want to learn about AI automation is inside here. Plus, you've got the archive where you can search for anything specific. So, if you wanted to learn, for example, about NotebookLM or Antigravity, you type it in the search bar here and you can see our video tutorials on exactly how to use this stuff. So, feel free to get that. That's available, link in the comments and description, or go to aiprofitboardroom.com.
11:43

Q&A and Final Thoughts

Am I using a free plan on Ollama? Yes, I am. Can we use n8n with this? I mean, you can use n8n with Ollama, right? So if you want to use Ollama with n8n, you can with Claude Code. I don't know how that would work, like, or what would be the benefits of that, to be honest. But yeah, if you just want to use n8n with a local model, like, no problem. Just use n8n locally and then configure it
