AI News: DeepSeek R2 in 2 Weeks, Google Genie 3 Released & Google Stitch Gets Agents!
9:46


Universe of AI · 30.01.2026 · 4,365 views · 107 likes · updated 18.02.2026
Video description
Breaking AI news today! DeepSeek R2 drops in 2 weeks, Google Genie 3 is now available for creating interactive worlds, and Stitch gets Deep Design mode with Agent Manager. Here's everything you need to know. For hands-on demos, tools, workflows, and dev-focused content, check out World of AI, our channel dedicated to building with these models: @intheworldofai

🔗 My Links:
📩 Sponsor a Video or Feature Your Product: intheuniverseofaiz@gmail.com
🔥 Become a Patron (Private Discord): /worldofai
🧠 Follow me on Twitter: https://x.com/UniverseofAIz
🌐 Website: https://www.worldzofai.com
🚨 Subscribe To The FREE AI Newsletter For Regular AI Updates: https://intheworldofai.com/

#ai #ainews #deepseek #GoogleGenie #technews

Tags: AI news, DeepSeek R2, Google Genie 3, Google Stitch, AI updates, artificial intelligence, machine learning, DeepSeek model, Genie 3 release, AI agents, deep design mode, agent manager, OpenAI competition, AI reasoning, world models, interactive AI, AI tools, tech news, AI development, Google AI, ByteDance AI, Chinese AI, AI pricing, AI capabilities, Nvidia AI, GPT competitor, Claude competitor, AI breakthrough, generative AI, AI demo

Chapters:
0:00 - Intro
0:13 - DeepSeek R2
4:15 - Google Genie 3
7:50 - Google Stitch
9:28 - Outro

Contents (5 segments)

  1. 0:00 Intro (41 words)
  2. 0:13 DeepSeek R2 (694 words)
  3. 4:15 Google Genie 3 (597 words)
  4. 7:50 Google Stitch (287 words)
  5. 9:28 Outro (68 words)
0:00

Intro

Today we got some absolutely massive AI news that just dropped in the past 24 hours. We're talking about three major developments that could completely reshape how we think about AI in 2026. So let's get into it. All right
0:13

DeepSeek R2

so let's start with what might be the biggest story of today. According to a recent report from The Information, DeepSeek is planning to release its next major model, which is being called R2, in about two weeks. Now, if you've been following the AI space, you know that DeepSeek absolutely shocked the industry back in January of 2025 when they dropped their R1 model. This was the model that matched OpenAI's best reasoning capabilities while costing just a fraction of what the big American tech companies were spending. It literally triggered a trillion-dollar sell-off in tech stocks because it proved that you don't need hundreds of millions of dollars to build competitive AI.

But here's where things get interesting. The Information's report mentions that DeepSeek hasn't released anything for about a year now, even though they actually released version 3.2 back in December. So when they talk about over a year without releasing, they're probably referring to a major model release, and that's probably going to be the R series and the next generation beyond that.

This timing is actually pretty significant. There's been a lot of speculation about when R2 would drop. Initially, people were talking about May of 2025. Then those rumors got pushed back, and some reports suggested it might not come out until early 2026. Now here we are in late January of 2026, and we're hearing it could be just two weeks away.

What makes this particularly interesting is the context. According to multiple sources, DeepSeek has been dealing with some serious challenges. They were trying to train R2 on domestically produced Huawei Ascend chips, apparently at the encouragement of Chinese authorities, but they ran into stability and performance issues. So they reportedly had to pivot back to using Nvidia hardware for that critical training phase.

And speaking of context, this release is happening at a really interesting time competitively. We have MiniMax, who dropped the M2.1 model and are potentially dropping the M2.2 model. Then we have Kimi K2.5, which just dropped. Both of these open-source models are pretty good, so it's not like China only has one model leading the AI race. They have multiple options, and both MiniMax and Kimi are backed by investors now because both of these companies have gone public. So the competition is getting fierce.

But The Information specifically mentioned ByteDance in the report, noting that the competition could get even fiercer given that DeepSeek is planning to release this around the same time as ByteDance's next moves. For those who don't know, ByteDance is the company behind TikTok, and they've been making serious investments in AI. So we're not just talking about DeepSeek competing with OpenAI and Anthropic anymore. We're talking about major tech giants across the world going head-to-head.

So, what can we expect from R2? Based on various leaks and reports, this model is rumored to be massive, possibly 1.2 trillion parameters. But here's the clever part: it supposedly uses only about 78 billion parameters at a time, thanks to the mixture-of-experts architecture that DeepSeek is known for.

The reported pricing is absolutely wild as well. We're talking about potentially 7 cents per million input tokens and 27 cents per million output tokens. That puts DeepSeek at a significantly lower cost than the leading American models. The model is supposedly going to excel at coding and maintain its reasoning powers across multiple languages, not just English. And there are hints it might handle multimodal inputs like images and video as well.

But here's what I think is most significant. If DeepSeek can deliver on these promises, it's going to put even more pressure on the Western AI labs. We're already seeing OpenAI and Google cutting their prices in response to DeepSeek's earlier releases. If R2 delivers, we could see another major shakeup in the industry.
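The mixture-of-experts idea described above can be sketched in a few lines: a router scores all experts for each token and activates only the top few, so the parameters actually used per token are a small fraction of the total. This is a toy illustration of the general technique, not DeepSeek's actual implementation; all the sizes here are made up.

```python
# Toy mixture-of-experts routing: only the top-k experts run per token,
# so active parameters are a small fraction of the total. All sizes
# below are hypothetical, for illustration only.
import random

NUM_EXPERTS = 16        # total experts in the layer (made up)
TOP_K = 2               # experts activated per token
PARAMS_PER_EXPERT = 10  # stand-in parameter count per expert

def route(token: str) -> list[int]:
    """Score every expert for this token, keep only the top-k.
    A real router uses a learned gating network, not random scores."""
    scores = [(random.random(), e) for e in range(NUM_EXPERTS)]
    scores.sort(reverse=True)
    return [e for _, e in scores[:TOP_K]]

active = route("hello")
total_params = NUM_EXPERTS * PARAMS_PER_EXPERT
active_params = TOP_K * PARAMS_PER_EXPERT
print(f"{len(active)} of {NUM_EXPERTS} experts active")
print(f"{active_params}/{total_params} parameters used per token")
```

The same ratio is what makes a 1.2-trillion-parameter model with roughly 78 billion active parameters plausible to serve cheaply: compute per token scales with the active slice, not the full model.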
So, two weeks from now, we'll know for sure. Mark your calendars. All right, moving on to our
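At the rumored rates (7 cents per million input tokens, 27 cents per million output tokens), sanity-checking a request cost is simple arithmetic. These numbers come from the leaks discussed above and may well change at launch.

```python
# Estimate request cost at DeepSeek R2's rumored pricing.
# Rates are from pre-release leaks and are not confirmed.
INPUT_PER_M = 0.07   # USD per 1M input tokens (rumored)
OUTPUT_PER_M = 0.27  # USD per 1M output tokens (rumored)

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: tokens scaled to millions, times the rate."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# A long prompt (50k tokens in) with a long answer (10k tokens out)
# comes to well under a cent:
print(round(cost_usd(50_000, 10_000), 4))  # 0.0062
```

For comparison, a full million tokens in and out would total 34 cents, which is the gap the transcript is pointing at versus leading American models.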
4:15

Google Genie 3

second story, and this one is genuinely mind-blowing. Google just announced that they're rolling out access to Project Genie, which is powered by their Genie 3 world model. Now, you might be wondering, what exactly is a world model? Think of it this way: traditional AI video generators can create immersive video clips, but they're basically just movies. You watch them, they end, that's it. A world model is completely different. It's an AI system that can generate entire interactive environments that you can actually explore and navigate in real time.

Google first announced Genie 3 back in August of 2025, but it was only available to researchers and a small group of testers. As of today, January 29, 2026, they're making it available to Google AI Ultra subscribers in the United States. That's their $250-per-month premium tier.

So, here's how Project Genie actually works. You start by describing the world you want to create using text prompts, or you can even upload an image to use as a starting point. The system then generates an interactive environment that you can navigate in real time at 720p resolution and 24 frames per second.

But here's where it gets really cool. The world doesn't just exist as a static snapshot. As you move through it, Genie 3 is constantly generating the path ahead based on your movements and interactions, simulating physics and interactions dynamically. And unlike earlier versions like Genie 2, which could only handle about 10 to 20 seconds of interaction, Genie 3 can maintain consistent worlds for several minutes. The system has what they call visual memory, which can recall things from up to a minute ago. So if you walk away from an object and come back, it's still there where you left it.

They've also added something called promptable world events. This means you can actually change the world on the fly using text commands. If you want to make it rain, you can just prompt it. Want to add an animal or change the weather? You can do that, too.
Now, Google isn't positioning this as just a cool toy. They see real applications here. They're already using Genie 3 to train their SIMA agent, their Scalable Instructable Multiworld Agent. In tests, SIMA was able to accomplish multi-step tasks, like navigating to specific objects in virtual warehouses, all within these generated worlds.

For game developers and creators, this could be revolutionary. Instead of spending months building environments in traditional game engines, you could potentially prototype entire levels or worlds just by describing them. And for robotics and autonomous vehicle training, this is also huge. You can run endless simulations in completely safe virtual environments before testing anything in the real world.

Now, it's not perfect, and Google is upfront about the limitations. The generated worlds don't always look completely photorealistic or adhere perfectly to prompts, text rendering within environments is imprecise unless you explicitly include it in your prompt, and you're limited to a few minutes of continuous interaction rather than extended hours. Also, and this is a big one, it's only available to Google AI Ultra subscribers in the US right now, at a pretty hefty price tag of about $250 per month. But Google says they're planning to expand to more territories and eventually make it more broadly accessible.

Still, even with these limitations, this is a massive step forward. We're talking about AI systems that don't just generate content, but generate entire interactive realities that understand physics and respond to your actions. That's pretty incredible. Our third
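The interaction pattern described above (a session holds persistent world state, each movement generates the next frame, and a text command can mutate the world mid-session) can be sketched as a toy loop. Genie 3 has no public API, so every name and structure here is hypothetical; this only illustrates the "promptable world events" idea, not Google's system.

```python
# Toy sketch of the promptable-world-events pattern: persistent state,
# frame-by-frame stepping, and text commands that mutate the world.
# Hypothetical interface; Genie 3 exposes no public API like this.
class ToyWorldSession:
    def __init__(self, prompt: str):
        self.prompt = prompt                      # the world description
        self.state = {"weather": "clear"}         # persists across frames
        self.frames = 0

    def step(self, action: str) -> dict:
        """Generate the next frame from a navigation action.
        The real model renders video; we return a state snapshot."""
        self.frames += 1
        return {"frame": self.frames, "action": action, **self.state}

    def event(self, command: str) -> None:
        """Apply a promptable world event, e.g. 'make it rain'."""
        if "rain" in command:
            self.state["weather"] = "rain"

session = ToyWorldSession("a forest clearing at dusk")
session.step("forward")
session.event("make it rain")          # world changes mid-session
frame = session.step("forward")
print(frame["weather"])                # rain
```

The point of the sketch is the statefulness: unlike a video generator, the session carries its world forward, so an event applied at frame 1 is still visible at frame 2 and beyond.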
7:50

Google Stitch

story is based on some leaks that came out from TestingCatalog, and it gives us a glimpse into what Google is working on for their Stitch design tool. For those who don't know, Stitch is Google's AI-powered UI design tool that came out of Google Labs. It's basically like having an AI designer that can turn your text descriptions into actual functional user interfaces. You describe what you want, and it generates the design along with the code.

According to these leaks, Google is working on something called Deep Design mode for Stitch, along with a potential Agent Manager and voice input capabilities. These features are apparently in very early stages and part of an upcoming UI overhaul, but let's talk about what this could mean.

The Deep Design mode is particularly interesting. While we don't have all the details yet, the name suggests a more advanced version of Stitch that goes deeper than surface-level UI generation. Think about how we currently have different tiers of AI models, like Pro models for more complex tasks and Flash models for quick iterations. Deep Design mode could potentially be Stitch's version of that: a mode that spends more computational power to create more sophisticated, polished, and thoughtful designs. This could mean better understanding of design systems, more nuanced layout decisions, better accessibility, or even the ability to generate entire design systems rather than just individual screens.

The fact that they're calling it Deep Design rather than just advanced design or pro mode suggests they might be using more sophisticated reasoning or planning capabilities, possibly leveraging some of the advances we've seen in models like Gemini 3. Make
9:28

Outro

sure to subscribe to our channel. We do real tests, not just headlines. Make sure you're also subscribed to World of AI. And don't forget to check out our newsletter for deeper breakdowns you won't see on YouTube. I'm also growing my Twitter following, so make sure you follow me there as well. Hope you guys enjoyed today's video, and I'll see you in the next
