This NEW Claude Update is INSANE! 🤯
Duration: 9:43

Julian Goldie SEO · 14.08.2025 · 3,864 views · 81 likes · updated 18.02.2026


Video description
Want to get more customers, make more money & save 100s of hours with AI? Join me in the AI Profit Boardroom: https://juliangoldieai.com/0oOTIs Get a FREE AI Automation Session 👉 https://juliangoldieai.com/70E3Gs

Table of contents (2 segments)

Segment 1 (00:00 - 05:00)

Breaking news. The new Claude Sonnet 4 update is absolutely insane. What you've actually got now is 1 million tokens of context, which is about time, because the number one complaint I've heard about Claude Sonnet 4, and I've seen people complain about it in our communities like the AI Profit Boardroom, is that it didn't have enough context window. So now you can run prompts above 200K and use 1 million tokens of context. Right now this is purely on the API, but Anthropic have said a lot of updates are coming later. As we can see inside the update here, this was literally just announced a couple of days ago. They said, "Claude Sonnet 4 now supports up to 1 million tokens of context on the Anthropic API, a 5x increase that lets you process entire codebases with over 75,000 lines of code or dozens of research papers in a single request." Now, if you're going to use this, don't use it inside Claude directly, because it's not for the chat. Actually, there is a new update today, so we'll come on to that in a second as well. That's interesting. But essentially, if you want to use this, don't use it inside the chat, and don't use it inside OpenRouter either; from what I can see, last time I checked you can't get the 1 million token window there. The way you want to do this is: go to anthropic.com, then click through to console.anthropic.com. Log in, and once you've logged in you'll be able to use the API. You can get an API key from there and use the 1 million token window. I think they also have a way to use this inside a chat. Let's have a look here. So we can use Claude Sonnet 4 here as well, directly inside the chat, and then you can increase the max tokens right there.
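As a concrete illustration of the API route described above, here is a minimal sketch of how a long-context request to the Anthropic Messages API might be put together. The beta header name (`context-1m-2025-08-07`) and the model id are assumptions based on Anthropic's launch announcement; check the current API docs before relying on them. The sketch only builds the request, it doesn't send it:

```python
import json
import os

# Anthropic Messages API endpoint (per the public API docs).
API_URL = "https://api.anthropic.com/v1/messages"


def build_long_context_request(prompt: str, max_tokens: int = 4096) -> dict:
    """Build headers and body for a Messages API call that opts into
    the 1M-token context window via a beta header (name assumed)."""
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "<your-key>"),
        "anthropic-version": "2023-06-01",
        # Beta flag enabling the 1M context window -- assumed name,
        # verify against the current Anthropic documentation.
        "anthropic-beta": "context-1m-2025-08-07",
        "content-type": "application/json",
    }
    body = {
        "model": "claude-sonnet-4-20250514",  # assumed model id
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return {"headers": headers, "body": body}


req = build_long_context_request("Summarise this large codebase: ...")
print(json.dumps(req["body"], indent=2))
```

You would then POST `req["body"]` to `API_URL` with `req["headers"]` using any HTTP client (e.g. `requests` or the official `anthropic` SDK).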
But essentially, if you want to use this, we've also got tools like web search too. So you can use the API now with 1 million tokens. They've also said, "Long context support for Sonnet 4 is now in public beta on the Anthropic API and in Amazon Bedrock, with Google Cloud's Vertex AI coming soon." Basically, what does this mean for you? Longer context means more use cases, right? You can use it for more things. And Bevves asks, "Will Claude put a rate limit on the $200 plan?" I think it's coming. I think the context window will come to the chat as well, but right now it's just on the API. And I think the reason for that is cost: if people can have larger context windows, it costs them more to process all the responses and takes a lot more power on their servers. They've said that with longer context, developers can run more comprehensive, data-intensive workflows. For example, large-scale code analysis: you could load much bigger projects, much bigger architectures, through the API. Document synthesis: if you've got a big research paper, or something really technical inside a paper, you could load that directly here. And also context-aware agents. One of the most frustrating things about a small context window is that when you go back and forth in the chat, by the time you've trained it up on everything it needs to know, it's run out of context and you have to start a new one. It's not great. So this is one of the better ways to do it, and that way it also just forgets less in the chat. Now, if you want to have a look at the API pricing, this is quite interesting actually. The way they've priced it, and it's a bit brutal, is that if your prompt is above the 200K token window, which is what the limit previously was, then it's $22 per million output tokens.
If you have less than 200K, then input is $3 per million tokens and output is $15. So you can see it's a lot more expensive if you're running prompts with a context window over 200K. They've also said that when combined with prompt caching, users can reduce latency and costs with the long context window. And the 1 million context window can also be used with batch processing for an additional 50% cost savings. So there are a couple of ways around this, a bit more technical: you can use batch processing, and the other option is prompt caching. If you want to get all the resources from this, I'll add the AI Profit Boardroom link in the description, and I'll put all the video notes inside there directly for you. All right. Now, I've actually created a full guide to this as well. If you want to learn more about this stuff and how it works, let me share the full guide with you inside the AI Profit Boardroom. I'll run you through it in a second as well, but essentially, if you want to learn how to do this, what it means, etc., it's all inside the AI Profit Boardroom. The other cool thing here is that if you're building agents across other platforms with the API, for example using n8n or Make.com, that's going to be a bigger deal as well. And then Daniel says, "Is this really you? I am so used to your avatar. It's weird when I meet people in real life; they're like, I don't know if
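The tiered pricing and the batch-processing discount quoted above can be sketched as a small cost estimator. The below-200K rates ($3/$15 per million input/output tokens) and the above-200K output rate ($22) come from the video; the $6/M long-context input rate is an assumption, since only the output figure is quoted here:

```python
def estimate_cost(input_tokens: int, output_tokens: int, batch: bool = False) -> float:
    """Estimate USD cost of one request under the tiered pricing
    quoted in the video. Batch processing halves the total."""
    if input_tokens <= 200_000:
        in_rate, out_rate = 3.00, 15.00   # $ per million tokens (quoted)
    else:
        in_rate, out_rate = 6.00, 22.00   # long-context tier (input rate assumed)
    cost = input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
    return cost / 2 if batch else cost    # batch processing: 50% savings


# A 500K-token prompt with 10K tokens of output:
print(round(estimate_cost(500_000, 10_000), 2))               # 3.22
print(round(estimate_cost(500_000, 10_000, batch=True), 2))   # 1.61
```

This makes the "brutal" jump concrete: the same request at 150K input tokens would cost well under a dollar, so keeping prompts under the 200K threshold (or batching) matters.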

Segment 2 (05:00 - 09:00)

it's you or your avatar, mate." And they get a bit freaked out. Kunow says, "Big fan. Could you talk about building a lead gen workflow for getting people to start a fundraiser for a social cause? I want to help out NGOs." So actually, if you want to get training on that: we get 200 to 300 leads a day in our agency, and that's all through the AI workflows we have, so I'd recommend you check out the crash course inside the AI Profit Boardroom. It shows you, for example, some of the automations we're using across all our platforms, like newsletters, shorts, AI avatar videos, etc. I'll give you an example. This absolutely blew my mind; it's a crazy little tip. If you create a newsletter with AI, and you can use Claude Sonnet 4 for this if you want to, it's actually very good for writing, one of the best models you can use. If we go to newsletters here, I literally started this newsletter yesterday. It's only got one edition, published 22 hours ago: 2,200 new subscribers. Now, if you reach 2,200 new people every day, are you going to make more money? Get more sponsorships for your fundraising? And are you going to reach the right people? A B2B audience is quite often interested in fundraising. So that's one of the best places to go if you want to learn all about that, how to do it, etc. It's all inside the AI Profit Boardroom: just go to the classroom here and then the crash course. Boom shakalaka, it's all right there. Beves asks whether GLM has an Android app. I'm not sure about that, but there are many ways to use it; I actually created a video on GLM yesterday. Let's have a look here; it's this one right here. Check that out if you want to learn about GLM, because we've got loads of good training on that right there. So, let's get back to Claude Sonnet 4.
This is interesting. So, Bolt.new is an awesome tool for code generation, right? They've said that Claude Sonnet is their go-to model for code generation workflows, and it's using the 1 million context window. So you can actually build with this. If you don't even want to use the API and just want to start building, you can go inside Bolt.new and start coding stuff out. Let's just sign in right here and I'll show you how to do it. Bolt.new is a really good tool for no-code building and that sort of thing. You can see some of my previous creations on the left-hand side over here, and then inside here you can say, okay, build out an AI SaaS tool for SEO, something like that. Let's get rid of that, and it's going to start going off and coding that. And because you've got the longer 1 million token context window with Claude Sonnet 4, you can build something out that's much bigger without running out of tokens and that sort of thing. All right, so you can see it's coding out here: on the left-hand side it's working its magic, and in the coding section it's building this all out. Once that's done, it will arrive in the preview; we can come back to that in a second and see what it looks like. Another one using it is iGent AI. And if you want to get started, it's in public beta on the Anthropic API for customers with tier 4 and custom rate limits, with broader availability rolling out soon. You can also get it inside Amazon Bedrock, and it's coming to Vertex AI soon as well. If you want to use Amazon Bedrock, you can just sign up here; it's basically another way to get direct access to all the foundational AI models. And if you want to start using it, you can use this Python code right here in your API requests. You can see the pricing in terms of how much it costs.
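For the Bedrock route mentioned above, here is a minimal sketch of the JSON body you would pass to Bedrock's `InvokeModel` for a Claude model (e.g. via boto3's `bedrock-runtime` client). The Bedrock model id and the `anthropic_version` string are assumptions; check the Bedrock model catalogue for the current Sonnet 4 identifier:

```python
import json

# Assumed Bedrock model identifier for Claude Sonnet 4 -- verify in the
# Bedrock console before use.
MODEL_ID = "anthropic.claude-sonnet-4-20250514-v1:0"


def bedrock_body(prompt: str, max_tokens: int = 2048) -> str:
    """Serialise a Messages-format request body for Bedrock InvokeModel."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",  # assumed version string
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


# With boto3 (not executed here -- requires AWS credentials):
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId=MODEL_ID, body=bedrock_body("Hello"))
print(json.loads(bedrock_body("Hello"))["max_tokens"])
```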
Obviously, prompt caching is a lot cheaper than the typical sort of output pricing. And then, going back to Bolt.new: even if you don't have tier 4 on the API, you can use Bolt.new instead. You can just go directly inside there and start using it, or you can use Amazon Bedrock as well. So that is basically it. Thanks so much for watching. I appreciate you as always, and if you want to connect with me personally, feel free to join the AI Profit Boardroom. It's an awesome community where we're just helping each other and sharing loads of cool stuff. Very active, lots of members, lots of cool stuff inside the classroom here, including a six-week AI automation master class, an AI automation crash course, all my best lead generation workflows, and 10 templates. A lot of people ask me how to generate AI avatar videos on YouTube as well; the way we do that is with a workflow that's inside the social media video automation section. If you want to get clients, we've got the agency course. And if you want to grow your YouTube channel, we've got a six-week master class based on what's working for me. All this stuff I test. We also do five training calls a week, as you can see. Additionally, the price is going up; there are only 10 more spots left before it does, so make sure you sign up now before you miss out. And if you want us to just do everything for you, feel free to book a free AI strategy session on this call right here. Helen says, "Boardroom is the best." Appreciate you, Helen, thanks so much. Kunow says, "Thank you, real Julian." I like that I'm called Real Julian now; that is hilarious. But yeah, lots of cool stuff going on. All right, cheers. I will see you on the next one.
