NEW DeepSeek 3.2 AI DESTROYS Gemini 3.0? (FREE!) 🤯
12:05


Julian Goldie SEO · 02.12.2025 · 4,473 views · 74 likes · updated 18.02.2026
Video description
Want to make money and save time with AI? Join here: https://www.skool.com/ai-profit-lab-7462/about?el=DeepSeekv32&htrafficsource=YouTube

Table of contents (3 segments)

  1. 0:00 Segment 1 (00:00 - 05:00), 1,031 words
  2. 5:00 Segment 2 (05:00 - 10:00), 1,069 words
  3. 10:00 Segment 3 (10:00 - 12:00), 483 words
0:00

Segment 1 (00:00 - 05:00)

Today we're going live on the most powerful AI model that just released from DeepSeek. This model beats GPT-5 and Gemini 3.0 Pro on almost every single test. It won gold medals in coding contests. It crushes math problems. And here's the crazy part: you can actually run it locally as well. I'm going to show you what this model can actually do, and we're going to run through exactly how it works. You can see the white paper here; this is the white paper from DeepSeek. They literally just announced this a few hours ago, so you can see the announcement, literally 11 hours ago, from DeepSeek on X. As you can see, it was a pretty chill launch considering the benchmarks. If we pull up these benchmarks, you can see we've got DeepSeek V3.2 Speciale, DeepSeek V3.2 Thinking, GPT-5 High, Claude 4.5 Sonnet, and Gemini 3.0 Pro. Now, what's interesting here is that in almost every test, DeepSeek V3.2 Speciale is either winning or right up there with Gemini 3.0 Pro. So we're going to be looking through the white paper and the tech report to see how it performs, and I'll be running through exactly how you can use it locally too. You can see here on the white paper they've said, "We introduce DeepSeek V3.2, a model that harmonizes high computational efficiency with superior reasoning and agent performance." And then you can see the benchmarks here, so this is how it performs versus its counterparts: Codeforces, SWE-bench Verified, Terminal-Bench 2.0, and Tool Decathlon. Now, the Speciale model is a different model to the Thinking model, as you can see, and if we compare these side by side, the Speciale model scores 96 compared to Gemini 3.0 Pro, which is probably the most impressive model lately and which scores 95 on a maths test. So you can see that for math tests, DeepSeek V3.2 is actually outperforming these, right?
If we have a look through the details inside this, they've said the release of reasoning models marks a pivotal moment in the evolution of LLMs, and since this milestone, the capabilities of LLMs have advanced rapidly. Whilst the open-source community, so for example MiniMax, which is Chinese, Moonshot, which is also Chinese, and this one as well, which I've not come across, to be honest with you, continues to make strides, the performance trajectory of closed-source proprietary models from Anthropic, DeepMind and OpenAI has accelerated at a significantly steeper rate, which is quite interesting. So basically they're saying the open-source and local models haven't evolved at the same speed as the closed-source models, and consequently, rather than converging, the performance gap between closed and open source appears to be widening, with proprietary systems demonstrating increasingly superior performance on complex tasks. And then it talks through the architecture, how this works, etc. What we can actually do is take this information and go over to something like Claude and break it down into a simplified version, because a lot of that content is going to go over most people's heads. So let's say: okay, break this down to a third-grade level and explain the key and most notable findings from this white paper. And then we've taken the white paper and broken it down. Now, in the meantime, whilst Claude is doing that, let's have a look here. You can actually get this on Hugging Face, as you can see. So this is the newest model from DeepSeek, and it's designed for reasoning and also agentic AI.
So you can see how it's broken down here with the benchmarks again, and it actually breaks down how to run this with the chat template, and you can also run it locally. You can see here how to run it locally, and then with Hugging Face you can use the repo on GitHub and start using it and running it locally. So if we open up the details here, this is for the experimental release, V3.2 Experimental, that you can run locally, and you can see how it performs: it does outperform 3.1. Then you've got the open-source details, and you can run it locally with Hugging Face, right like that. If you go to deepseek.com, you've got a choice: you can use V3.2 here as well, and you can select between DeepThink and non-DeepThink. Let me just double-check here. So it does say on the website that it's V3.2. Let me just ask it. I don't think this will be using the latest model. Also, it's all in Chinese for some reason, and my Chinese is not that good, but it won't actually tell you if it's V3.2 or not. Yeah, you can run it locally, and you can also use Docker, as you can see, so you can run this locally with Docker too. Essentially, that's how it works. And sometimes on Hugging Face you can actually use a model through an inference provider, but you can't with this model. So if you want to use this model, you've got a few options: you could use vLLM, Kaggle, Google Colab or Transformers, and then you just have to figure out how to run it from there, essentially. Now, if you have a look on Ollama, it's not uploaded yet, but if
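Since there are a few different routes for running open weights locally (Transformers, vLLM, Ollama), a quick way to sanity-check which ones your machine actually has before downloading anything is a sketch like this. The package and binary names below are the standard ones for those projects; nothing here is DeepSeek-specific:

```python
# Sketch: check which local-inference routes are available on this machine.
# "transformers" and "vllm" are pip packages; "ollama" ships as a CLI binary.
import importlib.util
import shutil


def available_backends() -> dict:
    """Return which common local-inference options are installed."""
    return {
        "transformers": importlib.util.find_spec("transformers") is not None,
        "vllm": importlib.util.find_spec("vllm") is not None,
        "ollama": shutil.which("ollama") is not None,  # CLI binary, not a pip package
    }


print(available_backends())
```

Whichever of these comes back `True` is a route you can try first, instead of wiring up all four options blind.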
5:00

Segment 2 (05:00 - 10:00)

you want to get the old versions of DeepSeek, you can. You just run them locally with Ollama, and there you go. So let's talk about why it's important. Here's the DeepSeek V3.2 simple breakdown. Obviously, DeepSeek is a Chinese AI company, and they just released DeepSeek V3.2. Think of it like a new, smarter robot brain. Essentially, what they're trying to do is catch up to the big boys at a fraction of the cost. So, DeepSeek V3.2 performs nearly as well as GPT-5 and Claude 4.5 Sonnet, which are both closed-source models, on most tasks, and the Speciale version actually beats GPT-5 on maths and coding competitions. In terms of how it works, there have been three key breakthroughs. Number one is smarter attention, DSA. When an AI reads long documents, it slows down and gets expensive because it has to look at everything at once, right? So what they've done is create a lightning indexer that quickly scans and picks only the most important parts to focus on. It's kind of like skimming through a book for the good bits instead of reading every word. So even when it's reading long documents, it dramatically lowers the compute spent per token. Number two, it's got more post-training. Most AI companies treat post-training, the fine-tuning after the main training, as a small afterthought. DeepSeek spent over 10% of their entire pre-training budget on post-training with reinforcement learning, and they also shared the specific techniques they use to make RL training stable at scale. And number three, the final one, is synthetic AI training. They created a training pipeline, and you can see they've designed the Speciale model for agentic capabilities. And you can see here their Speciale model actually hit gold-medal performance at the 2025 International Mathematical Olympiad and the International Olympiad in Informatics.
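The "smarter attention" idea described above can be sketched in toy form: a cheap relevance score picks the top-k positions, and attention is then computed only over those instead of the whole context. This is purely an illustration of the skim-the-good-bits concept, not DeepSeek's actual DSA kernel or lightning indexer:

```python
# Toy sketch of sparse attention: score every position cheaply, keep only the
# top-k, and run softmax attention over that small subset.
import numpy as np


def sparse_attention(query, keys, values, k=4):
    """Attend to only the k highest-scoring positions."""
    scores = keys @ query                      # one cheap relevance score per position
    top = np.argsort(scores)[-k:]              # indices of the k best positions
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                               # softmax over the selected subset only
    return w @ values[top]                     # weighted sum of the selected values


rng = np.random.default_rng(0)
n, d = 1024, 64
q, K, V = rng.normal(size=d), rng.normal(size=(n, d)), rng.normal(size=(n, d))
out = sparse_attention(q, K, V, k=32)          # touches 32 of 1024 positions
print(out.shape)                               # (64,)
```

With k equal to the full sequence length this reduces to ordinary dense attention; shrinking k is what buys the speed, at the cost of ignoring the low-scoring positions.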
So why is this interesting for you? Number one, it's open source: you can run it yourself, or fine-tune it and run it locally. Number two, agent capabilities: they have a synthetic training pipeline which is basically designed to teach and improve tool use. Number three, cost efficiency: because they're using that DSA smarter attention method we were talking about before, everything is much more cost efficient. And then they've got the thinking method that reuses context across multiple tool calls instead of starting fresh each time. And they've outperformed, in some ways, other tools like Claude, ChatGPT and Gemini 3.0 Pro. So Alex asks, which do you think is better, DeepSeek V3.2 or Gemini 3.0 Pro? Honestly, if it's not that easy to use, like if you've got to go over to Hugging Face and then mess around with code and everything else, I probably won't use it, just because it's going to take up a lot of my time, and ain't nobody got time for that. But if it was inside the chat here, I'd be interested to test it out and see how it performs. Again, you can use DeepSeek V3.2 on their site; I'm just not sure the chat is running the newest model. And then also, let's have a look on OpenRouter. I think you can get access to this as well. So if we go to models and type in DeepSeek... yeah, V3.2 is not available on OpenRouter yet, but you can just go over to deepseek.com and you've got V3.2 over there. So for me, I'll probably stick to Gemini 3.0 Pro, but we can use DeepSeek V3.2 inside the chat here as well. But yeah, again, if I have to do all this crazy coding and stuff, I know how messy that can get, and just being 100% honest with you, I probably wouldn't use it that much. Let's see what people are saying on X about it. So, it's already trending on X. Let's see what we've got here. So, it's 25 times cheaper than GPT-5 and 30 times cheaper than Gemini Pro. That's for the open reasoner model.
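The "reuse context across tool calls" point above can be illustrated with a toy agent loop: one growing message history carries every tool result forward, so later steps see earlier results instead of each call starting from scratch. The fake model and the two tools here are hypothetical stand-ins, not DeepSeek's actual agent stack:

```python
# Toy agent loop: a single message history accumulates tool results, so the
# "model" decides its next step with all earlier results still in context.

def fake_model(history):
    """Stand-in for an LLM: keeps requesting tools until both results are in context."""
    seen = {m["name"] for m in history if m["role"] == "tool"}
    for tool in ("search", "calculator"):
        if tool not in seen:
            return {"role": "assistant", "tool_call": tool}
    return {"role": "assistant", "content": "done"}


# Hypothetical tools returning canned results.
TOOLS = {"search": lambda: "3 results", "calculator": lambda: "42"}


def run_agent():
    history = [{"role": "user", "content": "task"}]
    while True:
        step = fake_model(history)
        history.append(step)
        if "tool_call" not in step:
            return history                     # finished: no more tools requested
        name = step["tool_call"]
        # Append the tool result to the SAME history, instead of starting fresh.
        history.append({"role": "tool", "name": name, "content": TOOLS[name]()})


final = run_agent()
print(len(final))  # 6: user + 2 tool requests + 2 tool results + final answer
```

The design point is simply that the loop never resets `history`; every tool result stays visible to subsequent decisions, which is the behaviour the video attributes to the thinking model.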
I like the way they're so chill about announcing it as well. So, if you have a look at the benchmarks here, you can compare them. Again, Speciale is a lot more powerful than Thinking. So you've got DeepSeek V3.2 Thinking and Speciale here, and Speciale is the maxed-out reasoning capability. And then look at this: for AIME, 96 versus 95 for Gemini; 99 versus 97.5 for HMMT; 94 versus 93. According to DeepSeek's benchmarks, which you always want to take with a pinch of salt, it is outperforming many of these different models. Let's see what else we've got here. So, Kim says, "First, I think some people just don't understand how massive this release is. They're the first, even ahead of OpenAI and Google, to release a gold IMO, CMO 2025, IOI 2025 and ICPC World Finals model. Everyone now has access to such an outstanding model, and that's the good thing about it being open source, right? The claim that open source is eight months behind closed source seems to be refuted. Open source is catching up with closed source and is only slightly behind right now, and DeepSeek V3.2 is an open-source reasoning model that closes much of the gap to frontier systems like GPT-5 and Gemini 3." But yeah, it's going to get interesting. I can't wait to see the new DeepSeek R2; that will be very interesting. I think once DeepSeek R2 comes out, that's just going to change everything. But I think that's pretty much it in terms of the update, so we'll keep it short and sweet today. We've covered DeepSeek. If you want to get all the video notes from today, you can get them inside the AI Profit Boardroom. Just go to the classroom here, scroll down, go to SOP updates, and you can see we've updated this through December. All the December 2025 trainings we've got
10:00

Segment 3 (10:00 - 12:00)

here. We've got a full video on how it works, resources and tools, and explanations of how it works, etc. So you can just go to the classroom, SOP updates, and it's right there. So here's what's available for you right now. Okay, so the AI Profit Boardroom, if you've not already checked it out, is an awesome community where you can learn this stuff. Obviously, you've found out the theory of DeepSeek and the new release: how it works, what it means for the world, how it's open source, how it performs better than Gemini Pro in some ways, and also how to access it and run it locally. If you want to learn how to implement this stuff inside your business, check out the AI Profit Boardroom. It comes with an amazing community of 1,800 members who are learning and implementing AI into their businesses. You can see all the wins and all the people getting results with AI down in this feed. Additionally, you get access to all of our best trainings, which show you, for example, how to learn AI automation with our six-week road map. You've also got our playbook here with all the systems I personally use, email and content templates, the agency course, and also Q&A call recordings. Plus, you get four weekly calls, so you can jump on these live calls, ask any questions you have, get help, get support, and just problem-solve together. And the thing I would say here is you get access to over 50 complete automation blueprints. These are not just ideas; they're step-by-step guides with screenshots, prompts, and workflows that you can start using inside your business today. You also get live coaching sessions every single week where we build automations together: you bring your specific business problem, we solve it live, and everyone learns from it. You also get access to the private community where over 1,000 solopreneurs share what's working right now.
If someone finds a better prompt, they post it. If someone builds a useful automation, they share the workflow. If someone finds a new tool, everyone knows about it within a minute. So that's what's inside the AI Profit Boardroom. And we have a Cyber Monday deal on right now, so make sure you sign up before you miss out, because that Cyber Monday deal closes today, and then the price goes up. So feel free to check it out. You can try it as well; we have a refund guarantee, so just check it out, and if you don't like it, we'll high-five you on the way out and say thanks for trying it. You can get all this via the link in the comments and description. See you inside. Cheers for watching.
