🚨 BREAKING: NEW Llama 4 Update (FREE!)
16:44


Julian Goldie SEO · 06.04.2025 · 10,205 views · 152 likes · updated 18.02.2026


Video description
🚀 Get a FREE SEO strategy Session + Discount Now: https://go.juliangoldie.com/strategy-session
Want to get more customers, make more profit & save 100s of hours with AI? Join me in the AI Profit Boardroom: https://go.juliangoldie.com/ai-profit-boardroom
🤯 Want more money, traffic and sales from SEO? Join the SEO Elite Circle 👇 https://go.juliangoldie.com/register
🤖 Need AI Automation Services? Book an AI Discovery Session Here: https://juliangoldieaiautomation.com/
Click below for FREE access to:
✅ 50 FREE AI SEO TOOLS
🔥 200+ AI SEO Prompts!
📈 FREE AI SEO COMMUNITY with 2,000 SEOs!
🚀 Free AI SEO Course
🏆 Plus TODAY's Video NOTES...
https://go.juliangoldie.com/chat-gpt-prompts
- Want a Custom GPT built? Order here: https://kwnyzkju.manus.space/
- Join our FREE AI SEO Accelerator here: https://www.facebook.com/groups/aiseomastermind
- Need consulting? Book a call with us here: https://link.juliangoldie.com/widget/bookings/seo-gameplanesov12

Exploring Llama 4: New Update and Benchmarks

In this episode, we dive into the newly released Llama 4 update with a detailed look at its new models, Llama 4 Maverick and Llama 4 Scout (with Llama 4 Behemoth available as an early preview). We'll test their capabilities, compare benchmarks, and see how they stack up against other models like Gemini 2.0 and GPT-4o. See in-depth comparisons, performance tests, and how to get started with Llama 4 on multiple platforms like Hugging Face and Groq. Don't miss out on this comprehensive review and performance analysis of Llama 4 to determine if it's the right tool for your AI needs.

00:00 Introduction to Llama Four
00:08 Overview of Llama Four Models
00:45 Benchmark Comparisons
02:16 Accessing Llama Four
03:57 Testing Llama Four for Content Creation
05:46 Reasoning Challenge with Llama Four
07:22 Speed Comparison: Groq vs OpenRouter
08:36 Using Llama Four in Visual Studio Code
10:56 Creating Games with Llama Four
12:24 Meta's Announcement and Performance
15:25 Conclusion and Community Invitation

Table of contents (11 segments)

  1. 0:00 Introduction to Llama Four (28 words)
  2. 0:08 Overview of Llama Four Models (110 words)
  3. 0:45 Benchmark Comparisons (279 words)
  4. 2:16 Accessing Llama Four (330 words)
  5. 3:57 Testing Llama Four for Content Creation (323 words)
  6. 5:46 Reasoning Challenge with Llama Four (329 words)
  7. 7:22 Speed Comparison: Groq vs OpenRouter (230 words)
  8. 8:36 Using Llama Four in Visual Studio Code (422 words)
  9. 10:56 Creating Games with Llama Four (281 words)
  10. 12:24 Meta's Announcement and Performance (563 words)
  11. 15:25 Conclusion and Community Invitation (292 words)
0:00

Introduction to Llama Four

Llama 4 has just been released. Brand new update and it looks absolutely insane. You can see it just came out a few hours ago here. There's three
0:08

Overview of Llama Four Models

different models here. So, you've got Llama 4 Maverick and Llama 4 Scout. So, we're going to be testing them out today and I'll be showing you what the differences are, how they work, etc. One of the biggest differences right here is that Llama 4 actually has a 10 million token limit. It is absolutely insane. And you can see a leading context window of 10 million. Can't wait to try this out. It's available on llama.com and Hugging Face. You can also check this out on, for example, Groq. I'll be coming on to exactly how to get access to that later in the video. In this video, you
0:45

Benchmark Comparisons

can see some of the key benchmarks here. So, this is Llama 4 Maverick and you can see it compared to Gemini 2.0 Flash, DeepSeek V3.1, and GPT-4o. So, those are the comparative models right there, and it's absolutely smashing it on pretty much every single benchmark, as you can see right here. Bear in mind, DeepSeek V3.1 is the latest update that just came out about a week ago. All right. And then you can see the Needle in a Haystack test right here. So, this is Maverick versus Scout. We've also got NNL. And we have the benchmarks here versus other smaller models. So, Llama 4 Scout. You can see here the benchmarks on the left in blue. And then if we compare this versus Gemma 3, which is Google's latest light model, Mistral 3.1, which is another light model, and Gemini 2.0 Flash-Lite, on MMLU it's absolutely destroying them, right? Llama 4 is absolutely flying. All right, so Llama 4 Scout is what you would try and host locally because it's a smaller model, but it still has a lot of power. All right, and then you've got Llama 4 Behemoth. Llama 4 Behemoth is supposedly outperforming Claude 3.7 Sonnet, Gemini 2.0 Pro and GPT-4.5. Bear in mind, this is a new model from Meta. All right, so Mark Zuckerberg is behind the Llama 4 model and it's absolutely flying. One of the things I've always been impressed by with Llama is its ability to just be so fast. It's just redonkulous. All right, but let's come on to that in a second. So, if you want
2:16

Accessing Llama Four

to get access to Llama, you can go on to llama.com and then there's a download section here and you can request access directly there. Also, you can get access on Hugging Face as you can see right here, and we can also go to groq.com and then from here we can start using it as well, right? So, there's a dev console section, which we'll come on to in a second, and then also we can switch between the models over here, and also we have GroqCloud. If we go to the dashboard inside the playground, we have access to Llama 4, right? So if you want to get access to Llama 4, you can go inside here. Let's also check OpenRouter. So, we're just going to go inside OpenRouter and you can see that Llama 4 Maverick and Scout are available and these are free APIs that you can play with, right? So, you can plug these into your favorite apps. You can see this is very popular inside Roo Code and also Cline and all these other tools, and it's completely free. All right, so it is comparative to other models. For example, if we have a look at the Llama 4 Maverick benchmarks here, you can see how it's outperforming, for example, Gemini 2.0 Flash, DeepSeek V3.1, and GPT-4o. The difference here, of course, is that GPT-4o is a paid model, whereas you can use Maverick for free from OpenRouter, and I'll show you exactly how to get access to that and how it performs in a minute. All right, so we've got two different options here. We can go on to Groq inside the dashboard here, or we can go inside OpenRouter. And if you go inside OpenRouter, you can grab an API key or you can just use it directly inside the chat here. You can see how it says Llama 4 Maverick. So we can test
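If you grab an OpenRouter API key instead of using the chat, the free models above can be called through OpenRouter's OpenAI-compatible chat endpoint. Here's a minimal sketch; the model IDs follow OpenRouter's usual `vendor/model:free` naming but are assumptions, so check the model pages for the exact strings.

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

# Assumed free-tier model IDs; verify on openrouter.ai before use.
FREE_MODELS = {
    "scout": "meta-llama/llama-4-scout:free",
    "maverick": "meta-llama/llama-4-maverick:free",
}

def build_payload(model_key: str, prompt: str) -> dict:
    """Build the JSON body for a single-turn chat completion."""
    return {
        "model": FREE_MODELS[model_key],
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(api_key: str, model_key: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(model_key, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

For example, `ask(key, "scout", "Write a 100-word intro about SEO")` would hit the lightweight model; swapping `"scout"` for `"maverick"` is the only change needed to compare the two.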
3:57

Testing Llama Four for Content Creation

out. And for example, if we plug in some tests, let me grab some examples here. Let's test out creating content first of all. All right. So we're going to plug this into Llama 4. We can also enable web search if we want to link it directly to a web search. And then the prompt that I'm going to test first of all is just for creating content. All right. So we're going to say create an SEO-optimized article for this keyword, plus a content outline, and for content creation we'll include some information about me, who I am, etc. And then we'll hit enter and we'll see how that performs. So we're running Llama 4 Maverick. And also what we can do in the background is run this inside Llama 4 Scout. All right. So Scout is the lightweight model. We can go inside the chat here. We can plug this in. Again, it is free to use and we'll use exactly the same keyword to create the content. So we've got the responses right there. And if we go and take that content and then plug it into an HTML preview. So we'll plug that in on liveweave.com. We have content right there. Okay. So, you can use it for free. Honestly, that content does not look good at all. I'll be 100% honest with you based on that prompt. But let's check now. We've got Scout as well. So, this is the content from Scout. Let's plug that in. I actually think the content from Scout looks better than Maverick's, to be honest. See what we got here. And that content is coming out not bad at all. It's about 719 words. Blasted it out super fast. We're good to go on that. So, I actually think that for content creation it's not that great, but we'll test it out on some other stuff in a
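The preview step above pastes the model's HTML into liveweave.com by hand. A local alternative, if you're scripting the API calls, is to write the generated markup to a file and open it in your browser. This is a sketch of that swapped-in approach; `article_html` stands in for whatever the model returned.

```python
import pathlib
import webbrowser

def preview_html(article_html: str, path: str = "preview.html",
                 open_browser: bool = False) -> str:
    """Write generated markup to disk; optionally open it in the browser.

    Returns the path written, so the caller can inspect or reuse it.
    """
    out = pathlib.Path(path)
    out.write_text(article_html, encoding="utf-8")
    if open_browser:
        # Open the saved file in the system default browser.
        webbrowser.open(out.resolve().as_uri())
    return str(out)
```

Calling `preview_html(response_text, open_browser=True)` gives roughly the same side-by-side check the video does with Liveweave, without leaving the terminal.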
5:46

Reasoning Challenge with Llama Four

second, too. So, what we're going to do next is a reasoning challenge on this, right? So, we're going to say there's a tree on the other side of the river. How can I pick an apple? We'll run that through Maverick and also through Scout. We'll see how it performs for reasoning. Now, what we're looking for here is basically, number one, can it reason things through? Number two, can it find solutions? Right? Does it understand that during winter there are typically no apples on the tree? Right. And you can see here that it actually comes back to us straight away with the answer. I also like the fact that it is very straight to the point. All right. So, if you're looking for a creative solution or a story, it's happy to play along, etc. But in the real world, it's not possible to pick an apple from a tree on the other side of a river in winter. Number one, because the tree is likely bare, and number two, the river is a barrier. All right. And let's check out Llama 4 Scout. So, if we just start a new chat here, we'll see how it performs. And there we go. Scout is actually giving us a much better response. And yeah, you actually get better answers if you just start a new chat right there. And Scout seems to be a lot wordier when it's doing responses, right? So, if you compare the answer from Scout, this is a lot longer, a lot more in depth, etc. It thinks things through a bit better. Whereas, for example, Maverick gives us very short responses, like you can see right there. All right, let's go inside the playground as well. So, we can use Llama 4 Scout over here and we'll hit submit. Look how fast that is. That is redonkulous, isn't it? That was unbelievably fast. Wow. So, actually, if
7:22

Speed Comparison: Groq vs OpenRouter

you want really speedy results, I would recommend using Llama 4 directly inside Groq. It seems to be a lot faster. Let me just show you an example of what I mean right here. So, we'll pull up Groq inside the cloud versus OpenRouter. All right. And we'll start a new chat here and we'll plug that in and compare it versus Groq. So, we've got Groq over here and we have OpenRouter over here and we're just going to make sure we have Scout selected. So, let me remove that and let me change this to Scout. All right. So, we have Scout and Scout. These are both Llama 4 models. We're going to go inside here and say write a 2,000-word article about SEO. We'll give OpenRouter the head start as it writes it out. But look at that. Look at the speed. Groq has already come back to us whilst OpenRouter is still writing it out. That is absolutely insane. I've never seen an AI respond that quickly. Crazy stuff. It looks like you can also edit the response as well if you want to. But the fact that Groq is just so fast is absolutely amazing. All right, so now what we're going to do is I'm going to show you how to use this directly inside of a tool. So if you
8:36

Using Llama Four in Visual Studio Code

want to, for example, use Visual Studio Code, this is free to download. You can get it at code.visualstudio.com. And then you can switch between Cline and also Roo Code. All right, these are my two favorite extensions for coding. Honestly, I prefer Roo Code in general, but you can test them both out right there. If we go to the extensions section and you want to install Cline or Roo Code, just type in Cline or type in Roo Code. Hit install. Boom shakalaka, you're living the dream. All right. So, you can see right there. So, if we go over to Roo Code and then we're going to go to settings and then we can go into OpenRouter. Let's just check: is Llama 4 available? So, yeah, you can see Llama 4 free and paid, right? So, those are the two versions. That's going to be the paid model, and that's the free version. Same for Scout: Scout, and free again. Maverick is not available to test out yet. And then let's just have a look. Can we use Groq inside Roo Code? Can't see it available in the dropdown right there. Let's just check Cline as well. So, xAI or Llama, but Groq is not in the dropdown right there. Right. So if you want to use, for example, Llama 4 inside Cline or Roo Code, just select OpenRouter and then you can switch between Scout and Maverick. All right. So let's go with Scout just for an example. Again, Scout is the lightweight model. All right. And then we're going to say, okay, create a self-playing snake game, just as a little test here, and we'll see how that performs, also using Boomerang mode. If you've never set that up, check it out inside the AI Profit Boardroom for the full instructions on that. That has literally run out of API requests straight away, which is not great. All right, so while we're still on, let's just switch this over. Make sure we've got Scout selected. Hit save. Hit done. We'll try again on that. There we go. All right, it seems to be working now. It's doing its magic. Just be careful with that.
Make sure you've selected the right API and everything. Voilà. Now it's going to create a little subtask. It's pretty fast, to be fair. And it's planning out the code. We'll see what we get back in a second. All right. In the meantime, whilst we're
10:56

Creating Games with Llama Four

waiting for that, what we can actually do is go inside Groq, inside the playground here. Make sure you've got whatever model you want to use. I'm going to just go with Scout for now. We're going to say, okay, create a Three.js runner game. I'm actually going to grab one of the prompts from the AI Profit Boardroom. All right. And I'm going to say, make me a captivating endless runner game. Key instructions on the screen. p5.js only, no HTML. I like pixelated dinosaurs and interesting backgrounds. All right. And then we're going to plug that inside here. We'll hit submit, and we'll see if it can do its magic. All right. The reason that I want to try that inside Groq is it's just going to be a lot faster to test. You can see how quickly it just came back with the answer. So, we'll grab the JS here and just see if it can actually do its magic. All right. So, let's plug that in. We'll hit play. And that does not work. Let's just see if we need to delete that. Yeah, it's not working now, is it? It's nowhere near as good as Gemini 2.5 Pro, I'll tell you that for free. All right. So, we're going to keep going. Now, it's creating the JS, etc. Wait for that to load. So, we've got the response on the left-hand side and then on the right-hand side, we've got the code. Now, it's creating the CSS. In the meantime, whilst that's coding out, let me show you some other stuff that people are creating with this. So, you can see AI at Meta
12:24

Meta's Announcement and Performance

announced today is the start of a new era of natively multimodal innovation. Today we're introducing the first Llama 4 models: Scout and Maverick. 10 million tokens on Scout, which is absolutely insane for context windows. Looks like you can actually preview it. So you can get an early preview of Llama 4 Behemoth as well if you want to get access to it. What's pretty crazy here is that Meta's Llama 4 Maverick is hitting number two overall, becoming the fourth organization to break 1,400-plus on Arena. All right, so it's the number one open model, surpassing DeepSeek. It's tied number one in hard prompts and coding. And you can see the leaderboard right there. All right. So, if we go over to LM Arena, you can also get access to it over here. And then if we go to the leaderboard, you can see Llama 4 Maverick ranking number two. All right. Only surpassed by, like I said before, Gemini 2.5 Pro, which I would say is probably the best AI model out there right now as well. All right. But you can see here it's outperforming ChatGPT-4o, Grok 3 preview, GPT-4.5 preview, Gemini 2.0 Flash Thinking. Like, it's outperforming all the big boys, right, all the big models. DeepSeek R1, Gemma 3, etc. are nowhere to be seen versus this. So pretty crazy stuff. If you also want to test it inside Arena, you can actually do a side-by-side comparison, right? So you can, for example, use ChatGPT-4o, and then if we type in llama, we can select Llama 4 Maverick over here and we can compare them side by side. All right. So if we give them a test prompt, we can take this one for example and we'll go back to LM Arena and plug in the prompt right there. All right. And then you can see how they perform side by side. Just inside the API settings here as well, if you want to use this for free, make sure that you select Scout free. Okay, so make sure you select that one, not the normal Scout. If you select the normal Scout, obviously it's going to charge you, right? So make sure you go with the free one.
And then you can see the output right there, which is not great, to be honest with you. Look at that. It's just not working at all. So it's okay. I'm going to test it later versus other models. I'm just going to see as well how it performed here. It seems to have cut off. Let's grab that HTML and we'll copy that and test it on Liveweave. See what we got back. Honestly, is that anywhere near the same level as Claude 3.7 Sonnet? No. That's the output that we got back from LM Arena from Llama 4 Maverick versus ChatGPT-4o. Let's see what 4o did as well. So, we're going to grab that. So, this is the response from Llama 4. We'll plug in the 4o response. That's 100 times better, isn't it? With the same prompt. So, I'm not sure what people are seeing in Llama 4 Maverick. From what I've seen so far, unless it's just being rinsed on the server or I'm just using it the wrong way, it doesn't seem to be anywhere near as good as other
15:25

Conclusion and Community Invitation

models. So, thanks so much for watching. If you want to get access to my community, which helps you save time and make more money with AI, feel free to get that link in the comments and description. Prices are going up this month, so make sure you sign up now before you miss out. This also comes with all my best automations, agents, workflows, etc. You can also get a crash course on AI along with all my best SOPs. And along with that, we've got an amazing community of people you can ask questions to. So, for example, we have 693 members you can post to in the community. Everyone's focused on the same goal, which is making money and saving time with AI. Feel free to post any questions you have. And also, this comes with a weekly Q&A, right? So, if you want to jump on live calls, get your questions answered, feel free to jump on the weekly Q&A calls and ask any questions you have. And additionally, you can watch back the call recordings like you can see right here. Additionally, if you want to get a free one-to-one SEO strategy session that shows you how we take websites from zero to 145,000 visits this month and generate hundreds of thousands of dollars in sales on autopilot, feel free to get that on this free link building acceleration session. You'll get a free SEO domination plan. That's a custom-tailored game plan to make you more money and save time with SEO to get more customers. And you can also ask any questions you have live on that call, one-to-one with one of our experts. So feel free to get that link in the comments and description, and I appreciate you watching.
