# DeepSeek V4 Update Just Dropped… Here’s What’s Changing

## Metadata

- **Channel:** Universe of AI
- **YouTube:** https://www.youtube.com/watch?v=PeGhwemAHZo
- **Date:** 28.02.2026
- **Duration:** 10:18
- **Views:** 8,804
- **Source:** https://ekstraktznaniy.ru/video/10572

## Description

DeepSeek V4 rumors are heating up after a major DeepGEMM update, Blackwell support, and speculation around a possible delay. At the same time, a mysterious “Galapagos” model is being tested on DesignArena — potentially linked to GPT-5.3 — and Anthropic has released a statement addressing AI and national security concerns.

In this video, we break down the technical updates, model leaks, and what these moves signal about the next phase of the AI race.

For hands-on demos, tools, workflows, and dev-focused content, check out World of AI, our channel dedicated to building with these models: @intheworldofai

🔗 My Links:
📩 Sponsor a Video or Feature Your Product: intheuniverseofaiz@gmail.com
🔥 Become a Patron (Private Discord): /worldofai
🧠 Follow me on Twitter: https://x.com/UniverseofAIz
🌐 Website: https://www.worldzofai.com
🚨 Subscribe To The FREE AI Newsletter For Regular AI Updates: https://intheworldofai.com/

#deepseek  #gpt53  #ainews  #anthropic  #artificialintelligence

## Transcript

### Intro [0:00]

We've got DeepSeek hinting at a massive new drop with hardware-level optimizations, possible signs of GPT-5.3 quietly being tested under the name Galapagos, and now Anthropic stepping in with a formal statement about AI and the Department of War. This isn't just model upgrades anymore. It's capability, competition, and geopolitics all colliding at once. Let's break down what's happening. Are you guys

### DeepSeek Update [0:23]

tired of waiting for the whale to be back? Yeah, so am I. But it looks like we might be getting closer to a serious DeepSeek model drop. Here's what's happening. There's now confirmation floating around that DeepSeek V4 Lite is real, and the headline specs are kind of wild. We're talking about a 1-million-token context window. And that's not incremental; that's architectural-level scale. If that holds, you're basically moving into entire-codebase, entire-research-project, entire-book territory in a single pass.

It's also reportedly natively multimodal. Not bolted-on vision, not patched-in embeddings: native multimodality. That matters because the integration layer is usually where latency and quality drop. If this is baked in from the ground up, that's a different class of system.

Now, here's where it gets a bit more interesting. There are rumors coming out of Chinese Linux forums claiming compatibility with Nvidia B200 and internal coding benchmarks beating the GPT series and Claude. Take that second part with caution. Internal benchmarks always favor the home team, but even the fact that they're positioning it that way tells you what they're targeting: serious coding dominance.

And then today, DeepSeek pushed a major commit to DeepGEMM on GitHub. This wasn't a minor cleanup. It included formal integration of manifold-constrained hyper-connections, early support for Nvidia's next-gen Blackwell SM100 architecture, and FP4 ultra-low-precision compute. That's infrastructure work. That's "we're preparing for scale" work. Supporting Blackwell this early signals they're optimizing around next-gen GPU stacks, and FP4 ultra-low precision hints at a serious inference-efficiency play. If they can maintain performance while pushing lower precision, that means cheaper deployment, faster scaling, and potentially very aggressive pricing. So, what does this mean? It means this probably isn't just a light refresh.
Looks like they're lining up hardware optimization, architecture upgrades, and multimodal capability all at once. The whale might not be back yet, but the water's definitely moving. If more leaks drop, I'll break them down immediately.
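To make the low-precision efficiency argument concrete, here's a minimal, hypothetical sketch of 4-bit weight quantization. It uses uniform integer levels as a stand-in; Nvidia's actual FP4 (E2M1) format and DeepGEMM's kernels are more sophisticated, so treat this purely as an illustration of why fewer bits per weight mean cheaper deployment.

```python
import numpy as np

def quantize_4bit(w: np.ndarray, n_levels: int = 16):
    """Map weights onto 16 signed integer levels with one per-tensor scale.
    Illustrative uniform quantization, NOT the real FP4 (E2M1) format."""
    scale = np.abs(w).max() / (n_levels / 2 - 1)
    q = np.clip(np.round(w / scale), -(n_levels // 2), n_levels // 2 - 1)
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)

# 4-bit storage is a quarter of FP16's bytes (ignoring the tiny scale overhead)
fp16_bytes = w.size * 2        # 2 bytes per FP16 weight
fp4_bytes = w.size // 2        # 2 weights packed per byte
print(fp16_bytes / fp4_bytes)  # 4.0
```

The per-element error is bounded by half the quantization step, which is why labs push per-group scales and smarter formats to keep quality while shrinking memory and bandwidth.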

### GPT-5.3 Leak! [2:32]

There's a new model name showing up on DesignArena called Galapagos (I hope I'm pronouncing that right), and according to people tracking it closely, it's currently being tested with a front-end style that looks very similar to a GPT-5-class model. Now, let's slow this down and unpack it a little bit.

First, what is DesignArena? DesignArena is typically where frontier models get quietly evaluated. It's not marketing, and it's not a public launch page. It's more of a testing ground. So when a new model name appears there, especially one that's not publicly announced, it usually means internal or pre-release experimentation.

The name Galapagos shows up as its own provider. That's already interesting. In the code screenshot floating around, you can see an `id` of Galapagos, a `provider` of Galapagos, a `display_name` of Galapagos, and, notably, `open_source` set to `false` and `private` set to `true`. So this is clearly not an open model, and it's marked private. It's not active publicly, and it doesn't have a visible API model string attached yet.

Now, here's where it gets more interesting. People noticed that when OpenAI tested GPT-5.2 previously on DesignArena, they used different internal model names. For example, there were code names tied to reasoning-effort levels: low, medium, and so on. In the screenshot, you can see actual references like GPT-5.2 medium, GPT-5.2 low, reasoning effort medium, reasoning effort low. That tells us OpenAI has already experimented with routing different reasoning budgets under different model names.

So the big question becomes: is Galapagos a brand-new GPT model, or is it a router? It could be one of two things. Option one: Galapagos is a next-gen GPT build being tested quietly before announcement. Maybe the long-awaited GPT-5.3, maybe even something larger, maybe something optimized differently.
Option two, and this is honestly very plausible: Galapagos might just be a routing layer, a wrapper model that dynamically selects between reasoning efforts or sub-models under the hood. The tweet even mentions that it could be a router to a different reasoning effort. And that would make sense, because if you look at how OpenAI has been evolving its architecture, they're clearly moving toward dynamic compute allocation. Instead of one fixed reasoning level, you might have a system that decides in real time: is this a lightweight question? Is this a heavy code-reasoning question? Or does this require long chain-of-thought depth? And then it routes accordingly. If Galapagos is that kind of orchestration layer, that would actually be huge for everyday users. It would mean the next evolution isn't just a bigger model, but smarter model selection.

Now, the front-end similarity to a GPT-5 model also matters. If the output style matches GPT-5 behavior patterns, tone, formatting, and reasoning cadence, that strongly suggests this isn't a random experimental architecture; it's part of the GPT lineage. Another subtle detail: the provider is labeled Galapagos rather than OpenAI in that snippet. That could be a placeholder, or it could indicate separation during testing to prevent leaks that directly reference OpenAI internally. We've kind of seen that playbook before.

So, where does this leave us? Right now, there's only one Galapagos model instance being tested: no variants, no visible API tag, and no official confirmation. That suggests early-stage evaluation. If this were a major imminent release, you'd probably see multiple reasoning tiers show up first: low, medium, high, different API strings, version stamps. Instead, we have one private entry, which means either it's very early, it's a stealth upgrade, or it's an internal orchestration layer being tested quietly.

Now, timing-wise, this is interesting because people are already asking about GPT-5.3, and I posted earlier leaks and things like that on this channel as well. If OpenAI follows their pattern, incremental updates often appear quietly in testing environments before being announced. And the fact that this surfaced at all means something is moving internally for sure. The name Galapagos is also symbolic. The Galapagos Islands are famous for evolution, and if that's intentional, which, let's be honest, these code names usually are, it suggests adaptation, variation, and selection. That fits very cleanly with a routing or multi-reasoning architecture. So, my take: I don't think this is just a random model. I think this is either GPT-5.3 in stealth testing or a next-generation reasoning router that could redefine how GPT models allocate compute. Either way, something is brewing, and if we see additional code names show up over the next week, especially tied to reasoning tiers, that's when we'll know for sure. I'll keep tracking this closely as well. All
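The "option two" router idea can be sketched in a few lines. Everything here is invented for illustration; these tier names and the difficulty heuristic are not real OpenAI APIs, just a way to show what "deciding a reasoning budget in real time" could mean.

```python
# Hypothetical reasoning-effort router sketch. All names are invented;
# this does not reflect any real OpenAI internals.

def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in heuristic: longer or code-heavy prompts score higher."""
    score = min(len(prompt) / 2000, 1.0)
    if any(k in prompt for k in ("def ", "class ", "traceback", "prove")):
        score = max(score, 0.8)
    return score

def route(prompt: str) -> str:
    """Pick a reasoning tier based on the estimated difficulty."""
    d = estimate_difficulty(prompt)
    if d < 0.3:
        return "reasoning-effort-low"
    if d < 0.7:
        return "reasoning-effort-medium"
    return "reasoning-effort-high"

print(route("What's the capital of France?"))           # reasoning-effort-low
print(route("def f(x): ...  # prove this terminates"))  # reasoning-effort-high
```

A production router would classify with a small model rather than string heuristics, but the payoff is the same: cheap prompts get cheap compute, hard prompts get deep reasoning.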

### Anthropic vs Pentagon [7:14]

right, this is an important one. Anthropic just published a statement titled "Statement on the Department of War." And before this turns into speculation, let's ground this in what they're actually saying.

The core of the statement addresses concerns around how advanced AI systems could be used in military contexts, particularly in decision-making systems related to warfare. Anthropic is essentially clarifying its position. They state that they're not building AI systems intended to autonomously conduct warfare. They emphasize that their models are not designed to make lethal decisions, control weapons, or operate as battlefield command systems. That distinction matters because as models get more capable, with better reasoning, better planning, and longer context windows, the line between a general intelligence system and a strategic decision engine starts to blur. Anthropic is drawing a boundary.

They acknowledge that AI will inevitably intersect with national security and defense use cases. That's not avoidable; governments are major users of advanced technology. But they're positioning their role as focused on safety, guardrails, and responsible deployment, not autonomous warfighting.

Another key part of the statement is about governance. They emphasize transparency, oversight, and alignment research. They're reinforcing that frontier models need strict safety evaluations before deployment, especially in high-stakes domains like defense. This is consistent with Anthropic's broader brand positioning. They always lean heavily into safety-first narratives: constitutional AI, controlled deployment, eval-heavy rollouts. So this statement fits their long-term strategy of being seen as the cautious, alignment-focused lab.

Now, zooming out: why release this statement? The AI arms race narrative is getting louder.
You have rapid model capability increases, governments actively investing in AI defense initiatives, infrastructure scaling globally, and geopolitical tension around frontier AI models obviously increasing. In that environment, public positioning matters. Anthropic likely wants to clearly communicate where they stand before assumptions are made about military integration. Importantly, they're not saying AI shouldn't ever intersect with national security. They're saying autonomous lethal use is not what they're building. That nuance is key.

So, this isn't a dramatic pivot. It's more of a clarification, but it signals something bigger: AI labs are increasingly aware that their systems are powerful enough to raise real geopolitical concerns, and they're starting to publicly define the lines they won't cross. We'll probably see more statements like these from major labs as capabilities scale further, because as these systems approach higher levels of reasoning and planning, the conversation shifts from productivity tools to infrastructure-level power. And at that level, positioning, capability, and responsible use matter a lot. But

### Outro [10:06]

that's it for today's video. Make sure to subscribe to the channel, follow us on Twitter, follow World of AI, and don't forget to subscribe to our newsletter. We post constantly, and you don't want to miss it. I'll see you guys in the next one.
