Gemini 3.1 Leaked, Google's Lyria 3 Drops & DeepSeek V4 MISSING...
9:14


Universe of AI · 19.02.2026 · 6,756 views · 156 likes
Video description
Google's Gemini 3.1 has leaked and the SVG outputs are turning heads. Google DeepMind officially drops Lyria 3, their most advanced music generation model yet. And DeepSeek V4 — where is it? For hands-on demos, tools, workflows, and dev-focused content, check out World of AI, our channel dedicated to building with these models: @intheworldofai

🔗 My Links:
📩 Sponsor a Video or Feature Your Product: intheuniverseofaiz@gmail.com
🔥 Become a Patron (Private Discord): /worldofai
🧠 Follow me on Twitter: https://x.com/UniverseofAIz
🌐 Website: https://www.worldzofai.com
🚨 Subscribe To The FREE AI Newsletter For Regular AI Updates: https://intheworldofai.com/

#AI #Gemini #AINews

Tags: Gemini 3.1, Lyria 3, DeepSeek V4, Google DeepMind, AI news, artificial intelligence, Google AI, DeepSeek, AI music generation, AI leaks, SVG generation, AI 2026, Gemini leak, AI models, music AI, open source AI, Chinese AI, AI update, Google Gemini, DeepMind, AI tools, generative AI, AI generated music, vector graphics AI, AI benchmark, frontier AI, Gemini 3, DeepSeek V3, AI weekly, latest AI news

Chapters:
0:00 - Gemini 3.1 Pro Leaks
2:28 - Lyria 3
5:25 - Testing Lyria 3
6:21 - Music Demo
7:30 - DeepSeek V4 Update
8:56 - Outro

Table of contents (6 segments)

  1. 0:00 Gemini 3.1 Pro Leaks (458 words)
  2. 2:28 Lyria 3 (558 words)
  3. 5:25 Testing Lyria 3 (209 words)
  4. 6:21 Music Demo (152 words)
  5. 7:30 DeepSeek V4 Update (254 words)
  6. 8:56 Outro (65 words)
0:00

Gemini 3.1 Pro Leaks

Today we got what look like the first leaked outputs from Google's Gemini 3.1 Pro model, and they're getting a lot of attention. The outputs are SVGs, also known as vector images, and two of them have been shared so far: a pelican riding a bicycle and a hamster on a motorcycle. The reason people are excited isn't really the subject matter; it's how good they look compared to other models. They're clean, detailed, and stylized, and they don't look like something an AI just threw together. Sources are saying these are real Gemini 3.1 outputs. They're also saying Google isn't planning to water the model down before release, also known as nerfing it, so what you're seeing in the leaks is what you'd actually get when the model ships.

Now, here's the thing that's been making people pay closer attention: this isn't just one leak. Pretty much every early output that has surfaced from Gemini 3.1 has been an SVG. That pattern is hard to ignore. It suggests SVG generation might be something Google is specifically focusing on with this new model. I don't know why; maybe they're even using it as a benchmark to measure how good the model is. And that's interesting because SVGs are actually pretty hard to generate well. Unlike a regular image, an SVG is code: the model has to write out instructions that describe every shape, every curve, and every color, and have it all come together into something that looks good. Most models struggle with this. If Gemini 3.1 has actually cracked it, that's a meaningful step forward.

Google obviously hasn't announced anything officially yet, but given how consistent these leaks have been, we're probably not too far from hearing something. And if we take a look at these leaks, for example the first one, the pelican, it looks pretty good. We have seen earlier versions of this test from different models, and they don't look as professional or put together as these leaks do.
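To make the "an SVG is code" point concrete, here's a minimal hand-written sketch (my own toy example, not one of the leaked outputs). Even a crude pelican-style scene requires the model to emit exact coordinates, curve commands, and colors that only cohere once the whole file is rendered:

```python
# A vector "image" is really markup code. To draw a scene, a model must
# emit coordinates, Bezier curves, and colors line by line, with no
# visual feedback until the finished file is rendered. This tiny
# hand-written SVG (an illustration, not a Gemini 3.1 output) shows the
# kind of code involved.
svg = "\n".join([
    '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">',
    '  <rect width="200" height="200" fill="#bde0fe"/>',   # sky background
    '  <circle cx="100" cy="120" r="40" fill="#ffffff"/>', # bird body
    '  <path d="M100 90 Q140 70 160 95 L120 105 Z" fill="#f4a261"/>',  # beak drawn with a quadratic Bezier curve
    '  <circle cx="60" cy="170" r="25" fill="none" stroke="#333" stroke-width="4"/>',  # a bicycle wheel
    '</svg>',
])
print(svg)
```

Paste the printed markup into a `.svg` file and open it in any browser to see the rendered shapes; the leaked pelican and hamster images are just far larger, far more intricate versions of this same kind of code.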
And then we look at the second leak, the hamster riding a motorcycle. It looks like an actual illustration that somebody made in Adobe Illustrator, not something an AI model coded. So we can see that these models are definitely getting stronger at SVG, and Gemini 3.1 Pro might be the model that takes the number one spot if there were an SVG generation benchmark. Let's hope we get an official announcement from Google dropping Gemini 3.1, because yesterday we just got Claude Sonnet 4.6, and Claude Opus 4.6 a week before that. So Google is probably due for a release soon. Next up, Google DeepMind just
2:28

Lyria 3

dropped Lyria 3. Lyria 3 is their most advanced music generation model yet. And unlike a lot of AI announcements, this one is actually available right now. It's rolling out in beta on desktop today and hitting the Gemini mobile app over the next few days. It's going to be available globally for all users, so you can try it out for yourself. You can access it by going into Gemini and selecting "create music" from the tools menu. So, what can it actually do? There are a few different ways to use it. The most obvious one is just text to music: you describe what you want and it generates a track for you. And they're not being conservative about the range here. You can go from broad and simple, like an upbeat birthday tune, all the way to very specific, defining the tempo, the dynamics, the genre, the mood. They gave an example prompt of a '90s skate punk rock track telling a roommate to wash the dishes, which is kind of funny, but it also shows you how much personality and specificity you can actually put into these prompts. Beyond text, Lyria 3 also does image and video to music. You can upload a photo or video, and the model analyzes it and composes a track that fits what it sees. They use the example of someone uploading a hiking photo of their dog and getting a track written around that. That is a genuinely different kind of interaction: it's not just describing music, it's having the model interpret something visual and translate it into sound. Vocals are also a big part of this. You can write your own lyrics or have Lyria generate lyrics around a theme or subject, and then it performs them in what they're describing as realistic, natural vocals. You also have control over vocal style, so you're not just getting a generic AI voice; you can actually direct how it sounds. And the output quality they're going for is professional grade.
The positioning here is that whether you're making background ambience for a project or something you'd actually perform, the audio should be clean and ready to use. There's also a template feature for when you want a starting point rather than a blank canvas, and dynamic suggestions to help you figure out how to describe what's in your head, which is actually a smart addition because prompting for music is harder than it sounds. On the safety side, every track generated through Lyria is watermarked with SynthID, DeepMind's watermarking system that tags audio as AI generated without affecting how it sounds. And they've added something new here: you can now upload any audio file to Gemini and ask whether it was made with Google AI, and it'll check for the SynthID watermark. That's a meaningful step toward actually being able to verify AI-generated audio in the wild. The honest framing for all of this is that Lyria 3 seems aimed at a pretty wide range of people, not just musicians, but anyone who wants to make something with it. The templates, the image-to-music feature, the dynamic suggestions, those are all accessibility features. But the technical control is there for people who want it. It's live right now, so go try it for yourself. So
5:25

Testing Lyria 3

I wanted to test out this new model myself. What you'll notice once you're in Gemini is that at the bottom you have your "create image" button for Nano Banana, but now there's a new button called "create music" for Lyria 3. So I'm going to click on this. The first thing it does is suggest a track that you can remix, so you have a starting template. But I'm going to try out something it's supposed to be good at, which is text to an actual song. The prompt I've given it is: an upbeat electronic intro, futuristic and energetic, male vocal, lyrics "universe of AI, where the future comes alive." I'm trying to make it create a YouTube introduction for my channel or something. And then you can see we also have the dynamic suggestions, where it suggests variations like "where the future comes alive with pulsing synths and high-energy drums" or "where the future comes alive, about the wanderer." All of these are just optional features, but I'm going to keep what I have, and then I'm just going to press enter.
6:21

Music Demo

"Where the future comes alive. We're living in the universe of AI. Watch the future come alive. A digital dream beneath an endless sky. The universe of AI." I'm not going to lie, that was pretty catchy. And if I wanted to, I could use this on my YouTube, though some of you guys would probably hate it. But this is just a simple example of how strong these new models are getting. Right? We've seen AI touch image generation. We've obviously seen it touch coding and software engineering. Now it's entering a more creative space, and we can see it's starting to make waves in the music industry. So this was just a simple example. If you were to work with this model on the developer side, you could probably create personal songs, or instrumentals for music videos and movies. The possibilities are endless.
7:30

DeepSeek V4 Update

There's a post going around from someone saying, "Wake me up when DeepSeek version 4 drops." And honestly, a lot of people feel that way. So, let's talk about where things actually stand. DeepSeek version 4 was widely expected to drop around mid-February, with some reports pointing to around February 17, coinciding with the Lunar New Year, which mirrors how they handled the R1 release. That date has now passed and it hasn't dropped yet. What we do know is that DeepSeek quietly updated their model earlier this month, expanding the context window from 128,000 tokens all the way up to 1 million tokens. When people noticed that, speculation started immediately that it could be a version 4 preview. DeepSeek said nothing, which is very on brand for them. Meanwhile, other Chinese AI companies like ByteDance, Alibaba, and Zhipu have all been releasing their own models, essentially trying to fill the spotlight while everyone waits on DeepSeek. The whole Chinese AI space is moving fast right now. As for what version 4 is actually expected to bring, the focus seems to be coding: internal benchmarks reportedly show it outperforming both the Claude and GPT series on long-context code generation tasks. It's also expected to be open source, continuing the pattern they set with version 3 and R1. So, the short answer is it hasn't dropped yet. It's close, and when it does, it'll get a lot of attention, and I'll cover it the moment it lands. Make sure to subscribe
8:56

Outro

to our channel. We do real tests, not just headlines. Make sure you're also subscribed to the World of AI. And don't forget to check out our newsletter for deeper breakdowns you won't see on YouTube. I'm also growing my Twitter following, so make sure you follow me on Twitter as well. Hope you guys enjoyed today's video, and I'll see you in the next one.
