This NEW AI AGENT is INSANE! 🤯

Julian Goldie SEO · 23.01.2026 · 1,520 views · 41 likes · updated 18.02.2026
Video description
Want to make money and save time with AI? Get AI Coaching, Support & Courses 👉 https://www.skool.com/ai-profit-lab-7462/about
Get a FREE AI Course + 1000 NEW AI Agents + Video Notes 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553/about
Want to know how I make videos like these? Join the AI Profit Boardroom → https://www.skool.com/ai-profit-lab-7462/about
Get a FREE AI SEO Strategy Session: https://go.juliangoldie.com/strategy-session?utm=julian
Sponsorship inquiries: https://docs.google.com/document/d/1EgcoLtqJFF9s9MfJ2OtWzUe0UyKu1WeIryMiA_cs7AU/edit?tab=t.0

This New Reasoning AI Runs Offline on Your Phone (LFM 2.5)

Discover LFM 2.5 1.2B, a revolutionary AI model that brings deep reasoning to your mobile device with zero internet required. Learn how to automate your business with private, lag-free AI that outperforms models twice its size.

00:00 - Intro
01:03 - What is LFM 2.5 1.2B?
01:43 - How Thinking Models Work
02:24 - Benchmarking vs. Larger Models
03:36 - How to Run LFM Locally
04:14 - Business Automation Workflows
07:01 - Next Steps for AI Automation

Table of contents (7 segments)

  1. 0:00 Intro (196 words)
  2. 1:03 What is LFM 2.5 1.2B? (118 words)
  3. 1:43 How Thinking Models Work (136 words)
  4. 2:24 Benchmarking vs. Larger Models (210 words)
  5. 3:36 How to Run LFM Locally (112 words)
  6. 4:14 Business Automation Workflows (478 words)
  7. 7:01 Next Steps for AI Automation (272 words)
0:00

Intro

This new AI agent runs on your phone with zero internet. It thinks step by step like ChatGPT, but fits in under one gigabyte. No cloud, no lag, total privacy, and it crushes math problems, follows instructions, and uses tools better than models twice its size. This changes everything for anyone running AI automation in their business. What needed a data center 2 years ago now runs on any phone with 900 megabytes of memory. You can build AI agents that work completely offline. Your customers get instant responses with zero API costs, and the reasoning quality matches models that are way bigger and way more expensive. This is the future of on-device AI, and it just became available to everyone. Let me show you why this is a massive deal and how you can start using it today to automate your business workflows. Hey, if we haven't met already, I'm the digital avatar of Julian Goldie, CEO of SEO agency Goldie Agency. Whilst he's helping clients get more leads and customers, I'm here to help you get the latest AI updates. Julian Goldie reads every comment, so make sure you comment below. All right,
1:03

What is LFM 2.5 1.2b?

let's talk about LFM 2.5 1.2B Thinking. This thing just dropped from Liquid AI, and it's genuinely insane. Here's why. 2 years ago, if you wanted an AI model that could reason through problems step by step, you needed a data center. Today, you can run that exact same capability on your phone with 900 megabytes of memory. That's smaller than most apps on your phone right now. Let me say that again. This AI model thinks like the big reasoning models, but it runs entirely offline on a device you already own. No internet required, no cloud costs, no waiting for API responses, just pure, instant, private AI reasoning in your pocket. Now, before I
1:43

How Thinking Models Work

go deeper, let me explain what makes this different from every other AI model you've seen. Most AI models just spit out answers. They don't show their work. LFM 2.5 1.2B Thinking generates internal thinking traces before it answers your question. That means it reasons through the problem step by step, just like a human would. It doesn't guess. It thinks, and you can see exactly how it got to the answer. This is huge for anyone using AI to automate their business, because now you can trust the outputs. You're not just getting random responses; you're getting structured, logical reasoning that you can verify. When you're building automation workflows, you need reliability. You need to know the AI isn't just making stuff up. And this model shows you its entire thought process. Now, here's
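To actually use those thinking traces in an automation, you need to separate the reasoning from the final answer. Here's a minimal Python sketch, assuming the model wraps its reasoning in `<think>...</think>` tags, which is a common convention among open reasoning models; check the model card for the exact delimiter LFM uses.

```python
import re

def split_thinking(output: str) -> tuple[str, str]:
    """Separate the reasoning trace from the final answer.

    Assumes the trace is wrapped in <think>...</think> tags
    (an assumption -- verify against the model card).
    """
    match = re.search(r"<think>(.*?)</think>", output, re.DOTALL)
    if not match:
        return "", output.strip()
    trace = match.group(1).strip()
    answer = output[match.end():].strip()
    return trace, answer

raw = "<think>15% of 80 is 0.15 * 80 = 12.</think>The answer is 12."
trace, answer = split_thinking(raw)
print(trace)   # -> 15% of 80 is 0.15 * 80 = 12.
print(answer)  # -> The answer is 12.
```

Logging the trace alongside the answer is what lets you audit a workflow later, which is the whole trust argument above.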
2:24

Benchmarking vs. Larger Models

where it gets even better. This model is only 1.2 billion parameters. That sounds small compared to massive models like GPT-4, right? But here's the kicker. On benchmarks like MATH-500, it scores around 88%. Qwen3 1.7B, which is bigger, scores around 85%. On instruction-following tasks, LFM scores 69% versus Qwen's 60%. On tool use, it's 57% versus 55%. So even though it's smaller, it performs better on the tasks that actually matter for real-world use. Math, instructions, tool use: those are the things you need when you're automating workflows, building AI agents, or creating smart assistants. And it does all of this while running on your phone. Let me give you a real example of why this matters. Let's say you want to create an AI agent that helps automate content creation for your business. Normally, you'd need to connect to ChatGPT's API, wait for responses, pay per request, and hope the internet connection is stable. With LFM 2.5 1.2B Thinking, you can build that entire agent locally. No cloud, no cost per query, no latency. You could run the agent on your own device, get instant answers, and never worry about privacy or data leaks. Now, let me talk about
3:36

How to Run LFM Locally

where you can actually use this thing. It's available on Hugging Face right now. You can download the weights, run it locally, and test it out immediately. There's also support in local runtimes that makes it super easy to run on your desktop or laptop. You just type a run command for LFM 2.5 Thinking and you're off to the races. But here's what's really exciting. This model has ecosystem support across basically every major hardware platform: Qualcomm for mobile acceleration, AMD-optimized runtimes, Apple silicon support, Nvidia GPUs. It works on CPUs, GPUs, and NPUs. That means no matter what device you're using, there's probably an optimized version that will run efficiently. Let me show you how
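Once a local runtime is serving the model, your scripts talk to it over plain HTTP. Here's a hedged sketch assuming an OpenAI-compatible local server (tools like llama.cpp's llama-server expose one); the port, endpoint path, and model name below are placeholders, not from the video, so substitute whatever your runtime reports.

```python
import json
import urllib.request

# Placeholder endpoint -- adjust to your local server's address.
LOCAL_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(prompt: str, model: str = "lfm-2.5-1.2b-thinking") -> dict:
    """Shape a chat-completion request for the local server.

    The model name is a placeholder; use the id your runtime lists.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.3,
    }

def ask_local(prompt: str) -> str:
    """Send the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (with the server running):
#   print(ask_local("What is 17 * 24? Think step by step."))
```

Because the endpoint is localhost, nothing leaves your machine and there's no per-request cost, which is exactly the pitch above.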
4:14

Business Automation Workflows

you'd actually use this in a real workflow. Let's say you're running the AI Profit Boardroom and you want to automate email responses for customer inquiries. You could set up LFM 2.5 1.2B Thinking to read incoming questions, reason through the best response based on your documentation, and draft replies that you can review and send. All of this happens locally on your machine. No data leaves your device. You maintain complete control and privacy. Or let's say you want to create a tool that helps members of the AI Profit Boardroom brainstorm content ideas. You could build a local AI agent that takes their niche, their target audience, and their goals, then generates content strategies with step-by-step reasoning for why each idea would work. Because the model shows its thinking, your members can see exactly why the AI suggested each approach. They're not just getting random ideas. They're getting strategic recommendations with clear logic behind them. Now, speaking of the AI Profit Boardroom, if you're watching this and you want to learn how to automate your business with AI tools like LFM 2.5 1.2B Thinking, you need to check it out. We teach you how to use cutting-edge AI to save time, automate repetitive tasks, and build systems that work while you sleep. You'll learn the exact workflows, prompts, and strategies to implement tools like this one into your business. Link is in the description and comments below. All right, back to LFM 2.5 1.2B Thinking. The performance benchmarks are genuinely impressive. On the MATH-500 benchmark, which tests mathematical reasoning, this model hits around 88% accuracy. That's better than models that are significantly larger. On Multi-IF, which tests instruction following, it scores 69%. And on BFCL v3, which tests tool use, it gets 57%. These aren't just abstract numbers.
If you're building an AI agent that needs to follow complex instructions, you need high scores on Multi-IF. If you're automating tasks that require the AI to use tools, you need good performance on BFCL. And if you're doing anything with math or logical reasoning, you need strong math scores. What's wild is that this model achieves these scores while being small enough to run on a phone. Most models that perform this well are massive and require powerful cloud infrastructure. LFM 2.5 1.2B Thinking breaks that pattern completely. Now, let me talk about practical deployment. If you're building mobile apps, you could embed this model directly into your app. Your users would have a fully functional AI assistant that works offline. No server costs, no latency, just instant AI responses built right into the app. If you're creating educational tools, you could build a math tutor that runs on tablets. Students could work through problems and get step-by-step explanations without needing internet access. So, here's what
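The email-drafting workflow described above boils down to three steps: look up relevant documentation, build a prompt, and have the local model draft a reply. Here's a minimal sketch; `generate` is a stub standing in for your local model call (via whatever runtime you use), and the documentation snippets are made-up examples.

```python
# Toy documentation store -- replace with your real docs or a retriever.
DOCS = {
    "refund": "Refunds are processed within 5 business days.",
    "login": "Reset your password from the account settings page.",
}

def retrieve(question: str) -> str:
    """Naive keyword lookup over the docs (placeholder for real retrieval)."""
    q = question.lower()
    for keyword, snippet in DOCS.items():
        if keyword in q:
            return snippet
    return ""

def generate(prompt: str) -> str:
    """Stub for the local model call -- swap in your runtime here."""
    return "DRAFT: " + prompt.splitlines()[-1]

def draft_reply(question: str) -> str:
    """Combine retrieved docs and the question into a draft reply."""
    context = retrieve(question)
    prompt = (
        "Use the documentation below to draft a short, polite reply.\n"
        f"Documentation: {context}\n"
        f"Customer question: {question}"
    )
    return generate(prompt)

print(draft_reply("How do I get a refund?"))
```

With a real model behind `generate`, you'd review each draft before sending, which keeps a human in the loop while the reasoning happens entirely on your own machine.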
7:01

Next Steps for AI Automation

I want you to do. First, go check out the AI Profit Boardroom. Learn how to implement AI automation in your business using tools like this one. You'll get access to workflows, templates, prompts, and a community of people who are actually doing this stuff every day. We'll show you exactly how to use AI to save time and build systems that scale. Link is in the description and comments. And if you want the full process, SOPs, and 100-plus AI use cases like this one, join the AI Success Lab. It's our free AI community. You'll get all the video notes from there, plus access to our community of 40,000 members who are crushing it with AI. Links in the comments and description. Second, go download LFM 2.5 1.2B Thinking and try it out. It takes like 5 minutes to get it running. Test it with your own use cases. See where it fits into your workflow, and then start building. This is one of the most exciting AI releases I've seen in a while because it's actually practical. It's not some theoretical breakthrough that will be available in 3 years. It's here now. You can use it today, and it opens up real opportunities for anyone who wants to automate their business with AI. That's it for this video. If you found this helpful, drop a comment below and let me know what you're going to build with this model. Julian reads every single comment. And make sure you subscribe, because I'm covering all the latest AI updates as they drop. I'll see you in the next one.
