Meta’s Most Powerful AI Model Just Leaked -  (Meta Avocado)
12:35


TheAIGRID · 08.02.2026 · 15,311 views · 356 likes

Video description
Meta’s Most Powerful AI Model Just Leaked - LLAMA 5 Explained (Meta Avocado). Check out the free community: https://www.skool.com/theaigridcommunity 🐤 Follow me on Twitter: https://twitter.com/TheAiGrid Links from today's video: https://www.theinformation.com/articles/meta-memo-new-avocado-model-capable-date?rc=0g0zvw Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For business enquiries) contact@theaigrid.com Music used: LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 CC BY-SA 4.0; LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (3 segments)

Segment 1 (00:00 - 05:00)

So Meta is working on a new AI model, and I've got all the details, so let's talk about it. We've got this article that says Meta Platforms is sounding an increasingly bullish note about the first major AI model expected to emerge from its new AI group. And I think I'm getting a little bullish too; you'll understand why as we dive into the article. The article talks about the fact that in their newly formed group, Meta Superintelligence Labs, they have a new model codenamed Avocado, and apparently this is now Meta's most capable pre-trained base model to date, according to an internal memo viewed by The Information. The Information is a very reputable source when it comes to AI, so you can have a good deal of confidence in the report. Now, in this memo, dated January the 20th, the company said that it had completed the pre-training for Avocado, an initial phase in the development of AI models in which they learn general knowledge, patterns, and relationships after being exposed to vast amounts of data. Essentially, what they're saying here is that this is one of the major checkpoints when it comes to building LLMs, and so far it looks really good. Now, the craziest thing about this is that the memo, posted by Megan Fu, the Superintelligence Labs product manager, said that Avocado outperformed the best open-source base models. If you're wondering what base models are, those are pre-trained LLMs that have not been polished through the final phase of post-training. But I still think that outperforming some of the best open-source base models at this checkpoint is a very good sign. Most people don't realize just how difficult this is, but you have to remember there have been numerous breakthroughs and there is literally new research being published every single day. Now, take a look at this.
This is where things start to get a little bit crazy, and Meta might seem like they're about to take the number one spot. As I was diving into this, the article spoke about the fact that despite Avocado not yet having gone through post-training, it was competitive with leading post-trained models in knowledge, visual perception, and multilingual performance, according to the memo. Now, do you know why this is so incredible? It's because this means that the core model itself is already strong, not just the polished version. When you understand how AI models are trained, you have to understand that models first go through pre-training, which is where the model learns patterns from datasets across the internet, books, images, and code. And then second, you have post-training, which is where you refine the model using techniques like human feedback so it becomes safer, more reliable, and better at following human instructions. Most AI products that we use today are only good after that second phase of post-training. Now, what's crazy is that Avocado is performing very well even without post-training. Historically, most of the improvements come from post-training, suggesting that the raw intelligence of Meta's new AI model might actually be catching up faster than expected. This actually means that Meta's AI research pipeline is probably a lot stronger than most people assume. And this matters because if you've been paying attention to what Meta has been doing on the back end, they've got massive compute, massive data, massive open-model distribution, and of course they run the biggest social media platforms. Now, here's the thing: when you're discussing AI models, you always have to discuss them in a reasonably balanced way, because we know the AI industry is susceptible to hype.
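To make the two-phase pipeline described above concrete, here is a deliberately toy sketch in Python: a bigram "language model" is first "pre-trained" on raw text statistics, then "post-trained" by boosting human-preferred continuations. Everything here (the corpus, the reward multipliers, the function names) is invented for illustration; real pre-training and post-training (e.g. RLHF) work on neural networks at vastly larger scale, and nothing below reflects Meta's actual methods.

```python
from collections import defaultdict

def pretrain(corpus):
    """Phase 1: learn raw next-word statistics from bulk text."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def post_train(counts, preferences):
    """Phase 2: nudge the base model with feedback.
    `preferences` maps (prev, next) pairs to a reward multiplier."""
    tuned = {prev: dict(nxts) for prev, nxts in counts.items()}
    for (prev, nxt), boost in preferences.items():
        if prev in tuned and nxt in tuned[prev]:
            tuned[prev][nxt] *= boost
    return tuned

def generate(counts, word):
    """Pick the highest-count continuation of `word`."""
    options = counts.get(word, {})
    return max(options, key=options.get) if options else None

corpus = ["the model is unsafe", "the model is helpful",
          "the model is unsafe"]
base = pretrain(corpus)
print(generate(base, "is"))    # base model parrots the majority: "unsafe"

aligned = post_train(base, {("is", "helpful"): 10})
print(generate(aligned, "is"))  # after feedback: "helpful"
```

The point of the toy: the base model is already "capable" (it learned the statistics of its data), while post-training only reshapes which behaviors it prefers, which is why a strong base model before post-training is notable.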
But there is one major thing going on here that most people may have forgotten: Meta has had a bit of a rough history. The article says that there's no way to gauge whether Meta's optimistic internal view of its AI models will stand up to scrutiny until Avocado is publicly released. Still, it would be risky for the company to overhype that progress, even internally, given its struggles in the past year. And that is incredibly true. As someone who's been in this AI space for well over three years now, it has been incredible to see Meta's rise and fall, and hopefully rise again. If you're unfamiliar with what I'm talking about, let me give you a very quick history lesson. Meta essentially fudged its benchmark results, and in 2025, Llama 4 was just a complete disaster. It was so bad that some people had to resign, and it got to the point where researchers were removing their names from the Llama 4 paper, essentially saying the reputation was so bad they weren't going to put their names on it. You can see here there's an article that says "Meta exec denies the company artificially boosted the Llama 4 benchmarks." It was an absolute PR disaster. There was literally an article I'm going to bring up for you now just to show you how bad it was. What you're looking at here is a small bit of an article released this year, and essentially it talks about the fact that Meta actually changed the benchmarks. We've never really seen that from any major AI tech company. And Meta literally fudged the benchmarks to make

Segment 2 (05:00 - 10:00)

it look like Llama 4 was better than it was. And this was being said by Yann LeCun, a guy who works at Meta, who was basically like, "Look, I'm honest, those results were fudged a little bit." When this came out, it was a complete PR disaster. A lot of people were already questioning Meta's results, and now we really do have confirmation of what happened. You can see it was so bad that Mark Zuckerberg lost faith in essentially everyone involved, and he basically sidelined the entire GenAI organization to launch the Superintelligence Lab. That kind of change, you have to understand, was not a small thing; it was a really remarkable shake-up. And LeCun essentially said, "Look, a lot of people have left, and a lot of people who haven't left yet will leave." So it's pretty much being rebuilt with an entirely new team. So Llama 5, Avocado, whatever you want to call it, I do think it's probably going to be a good model, because Meta won't be able to save face again. If Meta has another PR disaster, I'm not sure how they come back from it, considering the competitiveness of the AI space and the rate at which models are improving. Now, what's also super interesting is the 10 times compute-efficiency claim, and the article says Meta is also working on advances that will help control those costs. In a separate memo from mid-December, Meta said it's seeing 10 times compute-efficiency wins with Avocado on text-related tasks compared to Maverick, and over 100 times efficiency gains over Behemoth, a version of Llama 4 that Meta delayed last year and never released. Now, I don't think they released Behemoth because it was probably pretty terrible. But you have to understand that compute efficiency has gone up remarkably in a year.
The last time they were releasing these Llama 4 models was literally over a year ago, and considering the rate of advancement in AI, a lot has happened since then. So I don't think it's going to be that hard for Meta to release AI that's significantly better. Meta said they managed to do this by getting more high-quality data and investing in model infrastructure through deterministic training, which is a method that ensures the model produces consistent results when trained in the same way. They basically said, look, we're going to be able to lower energy use and cost in AI development, which is the crucial factor they're counting on to catch up with competitors. So overall, Meta is seeking methods that are just way more efficient than the ones currently out there. And it's quite likely that Meta has pioneered some of its own techniques. Most people don't realize that a lot of these research organizations, OpenAI, Google, Anthropic, often pioneer their own research methods. The problem is that those methods are closed, so we don't know what they're doing behind the scenes to advance those specific models. It's probably one of the reasons Anthropic's models are so good at coding, while other models just don't have the same performance there: every individual company has its own specific training data and its own way of training models, and because of that, they all have their own unique advantages. Maybe Meta has access to super high-quality data from all of its social media products, and because of that, maybe they're going to catch up with their competitors sooner than we think.
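To illustrate what "deterministic training" means in practice (same data and same configuration always yield bit-identical results), here is a toy sketch: a tiny SGD loop fitting a one-parameter linear model, where every source of randomness (weight init, data sampling) flows from one explicit seed. This is purely illustrative; Meta's actual infrastructure work is not public, and real determinism at scale also involves fixing GPU kernel and parallelism nondeterminism.

```python
import random

def train(seed, steps=50, lr=0.01):
    """Toy SGD run whose result depends only on the seed and config."""
    rng = random.Random(seed)                    # one seeded RNG, no global state
    w = rng.uniform(-1.0, 1.0)                   # reproducible weight init
    data = [(x, 3.0 * x) for x in range(1, 6)]   # true weight is 3.0
    for _ in range(steps):
        x, y = rng.choice(data)                  # reproducible sampling order
        grad = 2 * (w * x - y) * x               # d/dw of squared error
        w -= lr * grad
    return w

run_a = train(seed=42)
run_b = train(seed=42)
print(run_a == run_b)  # True: bit-identical weights from identical runs
```

The payoff named in the memo makes sense in this light: if two runs with the same inputs are bit-identical, any difference between experiments must come from the change you made, so you waste far less compute re-running and debugging ambiguous results.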
You can see here that last month Meta's chief technology officer Andrew Bosworth alluded to similar gains during a press briefing at the World Economic Forum in Davos, saying that Meta's AI models were very good. So it's quite likely that Llama 5 is shaping up to be an incredible model. And you can see here that Mark Zuckerberg literally said, "I expect our first models will be good, but more importantly, they will show the rapid trajectory that we're on, and then I expect us to steadily push the frontier over the course of the year as we continue to release new models." So this is probably going to mark an entirely new paradigm for Meta in terms of getting back on their feet after the terrible release of Llama 4. They're quite likely going to push the frontier with innovative methods we've never seen before, with huge efficiency gains, meaning those models are probably going to run faster at the same intelligence levels. And this is going to completely shake up the open-source industry. If you haven't been paying attention to what's going on here, this is a tough crowd. Recently, Kimi K2.5 was released, and let's just say it does very well. You've also got GLM 4.7, which is once again a very good model, and DeepSeek 3.2 is once again a very good model. The point I'm trying to make here, guys, is that open-source AI is rapidly getting better every single day with zero signs of slowing down. It's almost as if every 2 to 3 weeks we have a new open-source model that beats the prior one. So if Meta is to jump back into this game, they will have to jump in and keep

Segment 3 (10:00 - 12:00)

their foot on the gas so as to not lose market share and, of course, mind share. Now, I do want to say that if you're watching this channel, you're in a bubble. Most people have no idea what any of these models are, so you're probably ahead of people already. I looked at some recent statistics and found that most people just use ChatGPT, and I think it's only like 10 to 15% of people who know that Gemini or Claude exists. So I think Meta's models will probably be used for internal AI products. But the point I'm trying to make here is that if they're really trying to push the frontier, which is what Mark Zuckerberg said, they're going to have a tough time doing it if they're not inventing innovative techniques. Now, the real question is: can Meta actually come back from this? Well, I think they can, and that's because of xAI. If you don't know what xAI is, it's Elon's company, and he basically proved that in the AI game right now, speed is everything. You don't need to drop a perfect model; you need to drop models fast, learn from them, and then keep shipping. Look at what they did: Grok 1 came out in late 2023, Grok 2 in mid-2024, Grok 3 in early 2025, then Grok 4 by July, then Grok 4.1 in November. That's four major versions in less than two years, and every single version was better than the last. Grok 4.1 literally went from rank 33 to number one on the AI leaderboards, which was insane. And they didn't stop there: they also dropped a specialized coding model and voice agents. They just kept shipping. Compare that to Meta, and Meta simply hasn't dropped anything. But think about it: Meta can recover. First, Meta's ad business is printing cash; that's over $70 billion they're spending on AI infrastructure, and nobody except Google has that kind of budget. Second, they completely rebuilt their AI team. They paid $14 billion to bring in the CEO of Scale AI as their new chief AI officer. They hired the former CEO of GitHub.
They brought in one of the co-creators of ChatGPT. They are building an all-star roster. And third, they're already building the next big thing. And here's the plot twist that most of you may not realize: Avocado might not be open source. Meta might actually decide this time to go fully closed source, which is a huge shift, but it means they can actually compete without giving away their work to companies like DeepSeek, who would then just copy it. So the takeaway here is that Meta's mistake wasn't that Llama 4 was bad. Their mistake was taking too long to ship it, letting expectations get too high, and then getting caught trying to fake results to cover the gap. If they learn from xAI and start shipping faster and iterating publicly, then with the money, the time, the talent, and of course the infrastructure they now have, I'm sure Meta is probably going to be a dominant force in AI.
