Apple: AI Can’t Think...AI is FAKE?
Duration: 13:44


Julian Goldie SEO 15.06.2025 2,119 views 55 likes updated 18.02.2026
Video description
Want to get more customers, make more profit & save 100s of hours with AI? https://go.juliangoldie.com/ai-profit-boardroom Free AI Community here 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553 🚀 Get a FREE SEO strategy Session + Discount Now: https://go.juliangoldie.com/strategy-session 🤯  Want more money, traffic and sales from SEO? Join the SEO Elite Circle👇 https://go.juliangoldie.com/register 🤖 Need AI Automation Services? Book an AI Discovery Session Here: https://juliangoldieaiautomation.com/ Click below for FREE access to ✅ 50 FREE AI SEO TOOLS 🔥 200+ AI SEO Prompts! 📈 FREE AI SEO COMMUNITY with 2,000 SEOs ! 🚀 Free AI SEO Course 🏆 Plus TODAY's Video NOTES... https://go.juliangoldie.com/chat-gpt-prompts - Want a Custom GPT built? Order here: https://kwnyzkju.manus.space/ - Join our FREE AI SEO Accelerator here: https://www.facebook.com/groups/aiseomastermind - Need consulting? Book a call with us here: https://link.juliangoldie.com/widget/bookings/seo-gameplanesov12

Table of contents (3 segments)

  1. 0:00 Segment 1 (00:00 - 05:00) 964 words
  2. 5:00 Segment 2 (05:00 - 10:00) 910 words
  3. 10:00 Segment 3 (10:00 - 13:00) 698 words

Segment 1 (00:00 - 05:00)

The shocking truth about AI thinking models. Why they're not as smart as you think. Today, I'm going to show you something crazy about AI that nobody's talking about. These new thinking AI models, they're not actually thinking at all. I just read this mind-blowing research that proves even the most advanced AI models completely fail when problems get hard. And the weirdest part? They actually get dumber the harder they try to think. This changes everything about how we use AI for our businesses. Hey, if we haven't met already, I'm the digital avatar of Julian Goldie, CEO of SEO agency Goldie Agency. Whilst he's helping clients get more leads and customers, I'm here to help you get the latest AI updates. So, check this out. You know how everyone's going crazy about these new AI models that can think, like OpenAI's o1, Claude's thinking mode, and DeepSeek R1? Well, some super smart researchers just dropped a bombshell study that shows these models aren't as smart as we thought. And here's the kicker: they tested them with simple puzzles. Not rocket science, not complex math, just basic puzzles that kids solve. The results? Absolutely wild. When these AI models hit a certain level of difficulty, they don't just struggle, they completely collapse. 0% accuracy. Nothing. Nada. But wait, it gets even crazier. You'd think that when a problem gets harder, the AI would think more, right? Like put in more effort. Nope. These models actually start thinking less when problems get too hard. It's like they just give up. I'm going to break down exactly what this means for your business, your SEO, and how you're using AI right now. Because if you're relying on these tools without understanding their limits, you could be making some serious mistakes. Julian Goldie reads every comment. So, make sure you comment below if this blows your mind as much as it blew mine. So, let me explain what these researchers discovered. They took these fancy new AI models. 
We're talking about the most advanced ones out there, the ones that cost millions to develop. And they gave them puzzles, simple stuff like the Tower of Hanoi, you know, where you move discs from one peg to another, or checker jumping, or helping people cross a river. Basic logic puzzles. Now, here's where it gets interesting. They found three different zones of performance. Zone one, easy problems. And guess what? The regular AI models actually beat the thinking models. Yes, you heard that right. The dumber models won. Zone two, medium problems. This is where the thinking models finally show some advantage. They beat the regular models here. But zone three, hard problems. Both types of models completely fail. Zero success rate. Think about that for a second. We're paying extra for these thinking models. We're waiting longer for them to process. And they can't even solve puzzles that a human could figure out. But here's the part that really got me. The researchers tracked how much these models think at different difficulty levels. You'd expect them to think harder when problems get harder, right? Wrong. They actually found that when problems get too hard, the models start thinking less. They literally give up. It's like watching someone take a test, and when they hit a hard question, they just stop trying. Now, you might be thinking, "So, what? Why does this matter for my business?" I'll tell you exactly why. If you're using AI for complex tasks in your business, like strategic planning, complex SEO analysis, or solving difficult problems, you need to know these limits, because these models will confidently give you wrong answers, and they won't even try hard when things get tough. Let me give you a real example. Say you're using AI to plan your content strategy. You give it a complex brief with multiple requirements. Based on this research, if your request is too complex, the AI won't just give you a bad answer. It'll actually put in less effort to solve it. 
That's terrifying. Here's another crazy finding from the study. They gave the AI models the actual solution algorithm. Like, here's exactly how to solve this puzzle step by step. Did it help? Nope. The models still failed at the exact same point. Even when they had the answer key right in front of them, this tells us something huge about how these AIs work. They're not actually reasoning. They're not actually thinking. They're doing something else entirely. And whatever they're doing, it has hard limits. Now, the researchers tested this across multiple puzzle types, and they found something weird. The models could handle 100 moves in one type of puzzle, but failed after just five moves in another type. Why? Because the models aren't actually understanding the problems. They're pattern matching based on what they've seen before. If they have seen lots of examples of one puzzle online, they do better. If they haven't seen many examples, they fail fast. This is massive for SEO and content creation. Think about it. If you're using AI to write content about topics that aren't well covered online, the AI might completely fail. Not because the topic is hard, but because it hasn't seen enough examples to pattern match. Let me break down what else they found. Remember how I said the thinking models actually think less when problems get hard? Here's the data. For easy problems, these models generate thousands of thinking tokens. They're really trying. For medium problems, they generate even more. Sometimes 20,000 tokens of thinking. But for hard problems, the thinking drops off a cliff, sometimes down to just a few thousand tokens. It's like the AI knows it can't solve it, so it doesn't even

Segment 2 (05:00 - 10:00)

try. And get this: they had plenty of token budget left. They could have kept thinking, but they chose not to. That's not intelligence. That's giving up. Now, here's where it gets really interesting for your business. The researchers found that these models have what they call overthinking problems on easy tasks. They'll find the right answer early, then keep thinking and thinking, wasting compute power. For your business, this means you're paying for AI processing that's completely unnecessary. If you're using these models for simple tasks, you're literally burning money. But it gets worse. On medium difficulty tasks, the models explore tons of wrong solutions before maybe finding the right one. They're not efficiently solving problems. They're just trying random stuff until something works. Imagine if your employees worked like that, just randomly trying things instead of thinking through the problem. You'd fire them, right? But we're accepting this behavior from AI that costs way more than employees. Here's another bombshell from the research. When they compared thinking models to regular models with the same compute budget, guess what happened? For many tasks, the regular models performed just as well or better. So all that extra thinking? Often worthless. Think about what this means for your AI strategy. You might be paying premium prices for thinking models when regular models would work just fine. Or worse, both types might fail completely for your use case. The researchers also found something they call inconsistent reasoning. The same model would solve a problem one way in one puzzle, but completely differently in another similar puzzle. No consistency, no actual understanding, just random pattern matching. For SEO, this is crucial. If you're using AI to analyze your competitors or plan strategies, the AI might give you completely different advice for similar situations. 
Not because one situation is actually different, but because the AI doesn't actually understand what it's doing. Now, let me tell you about the specific puzzles they tested. Tower of Hanoi, you know, moving discs between pegs. The AI completely failed after 10 discs. But here's the thing. The solution follows a simple pattern. It's completely predictable. A basic computer program from the 1960s could solve it perfectly. But these advanced AIs? Complete failure. River crossing puzzles: getting people across a river with constraints. The AI failed after just three pairs of people. My 8-year-old nephew could solve harder versions than that. Checker jumping: the AI did okay until about 10 checkers. Then complete collapse. Block stacking: same story. Works until it doesn't. Then total failure. What's the pattern here? As soon as the problem requires holding too many steps in memory, the AI breaks. It can't actually plan. It can't actually think ahead. It's just faking it until the problem gets too hard. This has huge implications for how we use AI in business. Complex project planning? Probably not going to work. Multi-step SEO strategies? The AI will likely fail. Analyzing interconnected business problems? Forget about it. But here's what really got me. The researchers found that giving the AI more thinking time didn't help. Once it hit its complexity limit, it was done. Game over. No amount of extra processing would fix it. This destroys the whole narrative about these thinking models. We were told they could solve harder problems by thinking more. That's literally their whole selling point. But the research shows that's just not true. They have hard limits, and when they hit those limits, they give up. Now, you might be thinking, "Okay, but these are just puzzles. What about real business tasks?" Great question. The researchers specifically chose puzzles because they're clean tests of logical reasoning. 
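The "completely predictable" claim about Tower of Hanoi is easy to verify yourself: the optimal solution is a three-line textbook recursion. A minimal Python sketch (function and peg names are illustrative, not taken from the study):

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Return the optimal move list for moving n discs from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)  # clear n-1 smaller discs out of the way
    moves.append((src, dst))            # move the largest disc
    hanoi(n - 1, aux, dst, src, moves)  # restack the n-1 discs on top of it
    return moves

# The optimal solution always takes 2^n - 1 moves, so the answer a model
# must produce grows exponentially: 10 discs already need 1,023 moves.
print(len(hanoi(3)), len(hanoi(10)))  # 7 1023
```

That exponential move count is one plausible reading of why failures cluster around a disc threshold: it's the length of the required output that explodes, not the difficulty of the rule.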
If an AI can't solve a simple logic puzzle, how can we trust it with complex business decisions? If it can't move discs between pegs correctly, how can it plan your marketing campaign? If it gives up when things get hard, what happens when your business faces real challenges? These aren't just academic questions. This is about the tools we're betting our businesses on. Here's something else that blew my mind. The researchers tested both the premium models and the cheaper versions. For many tasks, the cheaper versions performed almost identically. So, companies are charging us 10x more for models that aren't actually 10x better. In some cases, they're not even 2x better. That's insane. Think about your AI spend right now. How much are you paying for premium models? Based on this research, you might be wasting most of that money. But wait, there's more. The study found that these models don't even use consistent strategies. In one test, they'd try to solve a puzzle one way. In the next test, a completely different approach. No learning, no improvement, no consistency. For SEO, this is a nightmare. Imagine if your SEO strategy changed randomly every time you asked for advice. That's essentially what these models are doing. They're not building on previous knowledge. They're just rolling dice each time. Now, let me share the scariest finding of all. The researchers found that the models would confidently provide wrong solutions. Not just small errors, completely wrong approaches that could never work. And they'd present these solutions as if they were perfect. No uncertainty, no hedging, just confident wrongness. In business, confident wrong advice is worse than no advice at all. At least when you have no advice, you know you need to figure things out. But when AI

Segment 3 (10:00 - 13:00)

gives you confident wrong advice, you might actually follow it. That could destroy your business. Think about all the businesses right now following AI-generated strategies. How many of them are following confident but completely wrong advice? Based on this research, probably a lot of them. Here's what this means for you. First, don't trust AI with complex multi-step problems. It will fail. Second, don't pay for premium thinking models unless you've tested that they actually help your specific use case. Third, always verify AI output, especially for important decisions. Fourth, understand that AI isn't actually thinking or reasoning; it's pattern matching. And fifth, have backup plans for when AI fails. Because based on this research, it will fail. It's not a matter of if, but when. Now, I'm not saying AI is useless. Far from it. For simple tasks, for pattern recognition, for content creation within its limits, AI is amazing. But we need to stop pretending it's something it's not. It's not thinking. It's not reasoning. It's not solving problems the way humans do. It's a very sophisticated pattern-matching system with hard limits. Use it within those limits and it's incredibly powerful. Push beyond those limits and it completely breaks down. The key is knowing where those limits are for your specific use cases. Test everything. Verify everything. Don't assume the AI knows what it's doing, because this research proves that it often doesn't. So, what should you do right now? First, audit your AI usage. Where are you using AI for complex tasks? Second, test cheaper models against expensive ones for your use cases. You might be surprised. Third, build verification steps into your AI workflows. Don't trust; verify. Fourth, keep humans in the loop for complex decisions. And fifth, adjust your expectations. AI is a tool, not a magic solution. Use it like a tool, with full understanding of its limitations. This research is a wake-up call for everyone using AI in business. 
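The "build verification steps into your AI workflows" advice is mechanical whenever the output has checkable structure: replay it against hard rules in code instead of trusting the model's confidence. A hypothetical Python sketch that checks a model-proposed Tower of Hanoi move list (the function name and peg labels are illustrative):

```python
def valid_hanoi_solution(n, moves):
    """Replay a proposed (src, dst) move list and check it solves n discs."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}  # A holds discs n..1
    for src, dst in moves:
        if not pegs[src]:
            return False  # illegal: moving from an empty peg
        if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
            return False  # illegal: larger disc placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))  # solved: all discs ended on C

print(valid_hanoi_solution(2, [("A", "B"), ("A", "C"), ("B", "C")]))  # True
print(valid_hanoi_solution(2, [("A", "C"), ("A", "C")]))              # False
```

The same pattern, running the model's answer through a rule checker it cannot talk its way past, applies to anything with verifiable structure: generated SQL, schema-constrained JSON, redirect maps, and so on.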
The hype has gotten ahead of reality. Way ahead. These models are impressive, but they're not what they claim to be. They're not thinking. They're not reasoning. They're very good at pretending, until they hit their limits; then they fail completely. Understanding this is crucial for using AI effectively in your business. Don't fall for the hype. Look at the actual evidence, and always have a backup plan for when the AI fails, because it will fail. This research proves it. The question is, will you be ready when it does? Remember, I'm sharing this not to scare you away from AI. I use AI every day in my business. It's incredibly powerful when used correctly, but we need to be realistic about what it can and can't do. Otherwise, we're setting ourselves up for massive failures. So test everything, verify everything, and never bet your business on AI doing something it fundamentally can't do, like actual thinking. Because as this research shows, it's not thinking at all. It's just really good at pretending until it's not. And when it stops pretending, you better have a plan B. Look, if you want to stay ahead of the AI curve and use these tools properly in your business, you need to be part of a community that gets it. That's why I created the AI Profit Boardroom. It's the best place to scale your business, get more customers, and save hundreds of hours with AI automation. We cut through the hype and focus on what actually works. Also, if you want to see how we're using AI effectively for SEO, book a free SEO strategy session, link in the comments and description. And if you're just getting started, grab my free AI SEO course. You'll join a community with 10,000 AI SEOs and get access to 50 plus free AI SEO tools. Link in the comments and description. Remember, the key isn't avoiding AI. It's understanding its real capabilities and limitations. Use it right and it's incredibly powerful. Use it wrong and you'll hit the exact failures this research exposed. 
The choice is yours. But now you know the truth. These thinking models, they're not thinking at all. Plan accordingly.
