AI NEWS: Gemini 3.5, Claude 5.0, Kimi K2-VL & OpenAI’s AI Pen!

Universe of AI · 06.01.2026 · 3,914 views · 93 likes · updated 18.02.2026
Video description
AI feels quiet right now, and that's usually when things are being prepared. In this video, I walk through the signals around Gemini 3.5, Claude 5.0, Kimi K2-VL, and OpenAI's AI pen, and explain why it feels like we're entering the next phase. For hands-on demos, tools, workflows, and dev-focused content, check out World of AI, our channel dedicated to building with these models: @intheworldofai

🔗 My Links:
📩 Sponsor a Video or Feature Your Product: intheuniverseofaiz@gmail.com
🔥 Become a Patron (Private Discord): /worldofai
🧠 Follow me on Twitter: /intheworldofai
🌐 Website: https://www.worldzofai.com
🚨 Subscribe To The FREE AI Newsletter For Regular AI Updates: https://intheworldofai.com/

#Gemini #Claude #ChatGPT #AI #ArtificialIntelligence #OpenAI #AIModels #AINews #FutureOfAI #UniverseOfAI

Tags: Gemini 3, Gemini 3 Flash, Google Gemini, Claude 4.5, Claude Opus, Claude Sonnet, ChatGPT, GPT-4, GPT-4.1, OpenAI GPT, AI models 2025, latest AI models, AI news, AI updates, artificial intelligence, large language models, multimodal AI, AI agents, Gemini 3.5, Claude 5.0, Kimi K2, Kimi K2-VL, Kimi AI, Moonshot AI, OpenAI new model, OpenAI AI Pen, OpenAI hardware, AI pen, AI devices, future of AI, next AI models, upcoming AI models, Google AI, Anthropic AI, OpenAI, Universe of AI, AI analysis, AI trends

Timestamps:
0:00 - Intro
0:18 - Gemini 3.5
1:59 - Claude 5.0
3:31 - Kimi K2-VL
5:59 - OpenAI's AI Pen
9:06 - Outro

Contents (6 segments)

  1. 0:00 Intro — 65 words
  2. 0:18 Gemini 3.5 — 286 words
  3. 1:59 Claude 5.0 — 288 words
  4. 3:31 Kimi K2-VL — 447 words
  5. 5:59 OpenAI's AI Pen — 538 words
  6. 9:06 Outro — 86 words
0:00

Intro

All right, there's been a lot going on in AI lately, just not in the usual loud way. Instead of big launches, we're seeing hints from Google, from Anthropic, from Kimi, and even from OpenAI's hardware side that the next set of models and products might be closer than it looks. Let me walk you through what's been popping up. So, let's get into it.
0:18

Gemini 3.5

Let's start with Gemini first. Just to be completely clear, Google has not announced a new Gemini version publicly. There's no official Gemini 3.5 blog post, no model card, nothing like that. But here's the thing: if you've followed Google long enough, you know they don't usually hype the next thing while they're still stabilizing the current one. And if you look at what they have been doing since Gemini 3 launched, the focus has shifted in a pretty obvious way. Instead of big announcements, we've seen Gemini Flash, which is clearly about speed and cost; Gemini being pushed deeper into Search, Docs, Gmail, and Workspace; and steady changes and refinements to the Gemini API itself. That's not experimentation, and that's not research. That's what happens when a system already works, and the question becomes: can we actually rely on this at scale? When Logan says "time to start shipping again," I don't hear "new model drops tomorrow." I hear something more like "we're confident enough in this foundation to move forward and drop something very soon." And historically, that's the phase right before something changes. Not a revolutionary leap; more like better reasoning consistency, fewer weird edge cases, better agent behavior, smoother integration with real products. And the thing is, Google doesn't usually do that work unless they expect people to actually use what comes next. So whether the next step is called Gemini 3.5, Gemini Next, or something else entirely, the setup looks familiar: tightening things, cleaning things up, making sure it behaves well under real-world pressure. That's usually the unglamorous part that happens before something new shows up. So, keep your eyes open for this one. Now, let's talk
1:59

Claude 5.0

about Claude. What you're seeing right now is a tweet showing that Opus 3 is finally being retired from Anthropic's model lineup. Now, when a company like Anthropic removes a model, they're not just doing it for fun or for no reason. It usually signals that a newer version of this model is coming. Could this be the sign of Claude Opus 4.5? We don't know. But we do know for sure that Claude Opus 4.5 is clearly the workhorse. If you talk to people who use Claude seriously on a daily basis for things like coding, long-document analysis, and anything like that, that's the model they stick with. It's not a flashy model, but it's steady and powerful, and it competes strongly against Gemini 3 Pro and ChatGPT 5.2. And with Google dropping Gemini 3 late last year and GPT-5.2 dropping around the same time, Anthropic has been quiet for a long time. It would make sense for them to drop something new, especially as 2026 has just started and both of those companies are racing to release the newest models and gain market share. But Anthropic has always moved in this pattern: they stabilize the current best model, retire the older one, then move forward. They don't overlap things for long, and they don't keep a bunch of legacy options lying around. If something new is coming from Anthropic, it's probably not about being dramatically smarter in a benchmark sense alone. It's probably about doing the same kind of task with less friction: better tool use, better long-horizon reasoning, and fewer moments where the model just loses the thread.
3:31

Kimi K2-VL

Kimi is one of the few cases right now where we're not just reading between the lines; we're actually seeing things being tested in public. So, here's what we know, and I'll be careful with the wording. There's a model showing up under the name Kiwi Doo on places like LMArena. Multiple people noticed it independently, and it clearly identifies itself as a Kimi model from Moonshot AI. That alone doesn't mean much; model names come and go all the time. But then things got more interesting. A well-known community tester, AI Battle, ran Kiwi Doo through a vision-based reasoning test, specifically from the VPCT benchmark. These aren't casual image-captioning tests; they're physics-style problems where models usually struggle. And according to those tests, Kiwi Doo got all of the cases tested correct. Now, here's the important part: Kiwi Doo is likely the upcoming Kimi K2-VL model. And that lines up with something we already had on record. The Moonshot team has already confirmed in AMAs and on Hugging Face that they plan to release a K2-VL, meaning a vision-language version of Kimi K2. So, this isn't someone guessing out of nowhere. What we're seeing looks like prior confirmation from the team that K2-VL is coming, a new Kimi model appearing in public testing environments, and strong performance specifically on vision-reasoning tasks. If you put all of those together, it's very reasonable to say that this is likely an early test version of Kimi's vision model. It's not confirmed or released yet, and it's not shipping tomorrow, but it's clearly being exercised in the open. What makes this specifically interesting is how it's being tested. These aren't marketing demos; they're stress tests: things like spatial reasoning, physics intuition, and understanding how objects interact. That lines up perfectly with what the Kimi team said earlier about their priorities: reasoning first, agents first, and now vision layered on top of that. It also fits with what they admitted was hard about K2: getting consistent tool-use and thinking behavior. Vision models make that even harder, so seeing strong early results here is meaningful. And stepping back for a second, this is exactly the kind of signal you expect before a model is formally announced. The team has already said they're working on it, the community starts seeing something new, it shows up under a placeholder name, and people test it quietly. That doesn't mean a release is imminent, but it does mean this isn't theoretical anymore; it's actually being tested. Kimi isn't just talking about vision. They're clearly testing it. And that's why Kimi feels like one of the most credible next-model stories right now. I
5:59

OpenAI's AI Pen

want to end with OpenAI, because this one feels different from normal model rumors. This isn't about benchmarks or version numbers. It's about hardware, and more specifically, how OpenAI might want people to interact with AI long term. Here's what we know and what we can reasonably infer. Jony Ive is working with OpenAI on a new hardware project. This alone is a big deal. This is the person who designed the iPhone, the iPod, and the MacBook, and now he's collaborating with OpenAI on what's expected to be their first serious hardware product, currently targeted for 2026. OpenAI CEO Sam Altman has also commented on this publicly, saying the device will be "simpler than an iPhone." That's an interesting choice of words, because simplicity is exactly where recent AI hardware experiments have failed. Now, here's where things get more specific. A recent leak suggests OpenAI's internal hardware project, code-named Gumdrop, may take the form of a pen-like device. The important caveat: this is not confirmed by OpenAI, and there are no official specs. But the source claims OpenAI is evaluating multiple hardware concepts, and that a pen-style device is one of the leading candidates, possibly alongside a portable audio-based assistant. And honestly, when you think about it, a pen is a very intentional choice. It's not a phone replacement, and it's not a wearable that demands attention. It's something people already use when they're thinking. If this ends up being real, the pen wouldn't need to do much to be useful. It could capture handwritten notes, transcribe them instantly, send them into ChatGPT, and summarize, organize, or expand on them. And that's not a new habit; it's an upgrade to an existing one that people do on a regular basis. And that's the big difference from devices like the Humane AI Pin or the Rabbit R1, which tried to invent entirely new interaction models. This feels more like, "let's make something people already understand, but make it smarter."
Another interesting detail from the leak is manufacturing. The report claims OpenAI doesn't plan to manufacture this hardware in China and instead may work with Foxconn through facilities in Vietnam or the US. Again, not officially confirmed, but if true, it suggests OpenAI is thinking about supply-chain flexibility, geopolitics, and long-term scaling, which is not how you would approach a throwaway experiment. Stepping back, the reason this matters isn't the pen itself; it's what it represents. OpenAI has spent the last few years building multimodal models, voice, memory, agents, and long-context reasoning. At some point, the question becomes: what's the best physical interface for all of this? According to this leak, the answer might be a pen: something quiet, personal, and non-distracting. It actually fits that philosophy surprisingly well, but who knows if it'll actually be good. But when you combine Jony Ive's involvement, Sam Altman's comments, internal code names leaking, and the specific form factor being discussed, it really does feel like OpenAI is laying the groundwork for a new kind of AI interaction, not just another app. And just like everything else we talked about today, it's not being shouted from the rooftops; it's being prepared in the background. If you enjoy this video,
9:06

Outro

this is what we do here: fast, clear updates on the biggest moves in AI. If you want to stay ahead of everything happening in this space, make sure you're subscribed. And if you want the hands-on side, with demos, tools, workflows, and everything developers can actually build, check out World of AI. We also run a simple, no-noise newsletter that gives you the most important AI tools and updates in just a couple of minutes. Subscribe here, follow World of AI, and join the newsletter.
