# This Update Changes Claude Forever & More AI News You Can Use

## Metadata

- **Channel:** The AI Advantage
- **YouTube:** https://www.youtube.com/watch?v=aU5jApm8fWU
- **Date:** 13.02.2026
- **Duration:** 9:50
- **Views:** 24,083

## Description

Subscribe for weekly videos breaking down the AI news you'll actually use!

This week, Igor shows you how to use Claude's new Interactive Responses, breaks down the AI ads at the Superb Owl and how ads in ChatGPT work (for now), reviews our testing results comparing GPT-5.3-Codex and Claude Opus 4.6, and way more. Enjoy!

Links:
🔑  Free ChatGPT Prompt Templates: https://bit.ly/newsletter-aia
🧑‍💻 Igor Pogany on LinkedIn: https://bit.ly/IgorLinkedIn
🐦 Twitter/X: https://bit.ly/AIAonTwitter
📸 Instagram: https://bit.ly/AIAinsta
Claude Interactive Responses - https://x.com/testingcatalog/status/2021264192301314315?s=20 and https://support.claude.com/en/articles/13641943-visual-and-interactive-content
Codex 5.3 - https://openai.com/index/introducing-gpt-5-3-codex/
Anthropic's ad poking fun at ChatGPT ads - https://www.youtube.com/watch?v=kQRu7DdTTVA&_bhlid=1b8719e74780a50e774695d79b3bab173db7541a
Funny (bad) Svedka AI ad - https://www.youtube.com/watch?v=pkeWRI2yJGM
OpenAI Rolls Out ChatGPT Ads - https://openai.com/index/testing-ads-in-chatgpt/
ChatGPT Deep Research Updates - https://x.com/OpenAI/status/2021299935678026168?s=20
Perplexity Model Council - https://www.perplexity.ai/hub/blog/introducing-model-council
Kling 3.0 - https://x.com/Kling_ai/status/2019064918960668819
ElevenLabs Audiobooks - https://x.com/elevenlabsio/status/2020906310837870873?s=20
ElevenLabs Expressive Agents - https://x.com/elevenlabsio/status/2021237336793657447

Chapters:
0:00 What’s New?
0:24 Claude Interactive Response
1:39 GPT-5.3-Codex vs. Opus 4.6
3:36 AI Super Bowl Ads
5:12 ChatGPT Deep Research Upgrade
6:18 Perplexity Model Council
7:19 Seedance 2.0
8:10 Kling 3.0
8:37 ElevenLabs Updates

## Contents

### [0:00](https://www.youtube.com/watch?v=aU5jApm8fWU) What’s New?

Welcome to another week in generative AI, and this week it's war out there: OpenAI versus Anthropic over the ads topic. We have Claude integrating new elements like interactive recipes and maps into its native interface, a first, and then updates to ChatGPT Deep Research and many more. This is AI News You Can Use, the show that rounds up all the releases, asks what actually matters here, and presents the results back to you. Let's begin. And I want to

### [0:24](https://www.youtube.com/watch?v=aU5jApm8fWU&t=24s) Claude Interactive Response

start the story with probably the most practical thing that all Claude users will encounter from here on out, and that's updates to Claude's capabilities. They're actively trying to move away from the classic wall of text that AI responses usually are. So, I'm traveling right now, I'm in Phoenix, Arizona, and if I say something like "best ramen near me," it fetches my area dynamically. And now here's the new part. Before, it would have done a web search and summarized it in text, right? Now I should be getting an interactive interface. Oh, just like this. It's a map, and now I can look around. I'm in Scottsdale: Shio, Ramen, and Crudeau. I can say "directions" and boom, it moves me directly over to Google Maps. Pretty neat.

But it's not just maps. There's also a weather widget, as you can see right now. And if you look for recipes, you're met with an interface that includes images with the recipe at the top, plus a custom button for the measurements; it's just a custom interface for recipes. This is all well and good, but what I see here is that they hit some of the main categories of what people look for. Moving forward, they will want a custom interface for every result where it enhances the user experience. I think a year from now, all of this is going to look very different, because each result ideally should have a custom interface. You should be getting an app that is more suited to giving you what you're looking for rather than the default wall of text we're all used to. And by the way, this is available to both free and paid

### [1:39](https://www.youtube.com/watch?v=aU5jApm8fWU&t=99s) GPT-5.3-Codex vs. Opus 4.6

users. And while Claude is getting prettier, OpenAI's Codex is getting scary good at creating applications. Even then, Claude has a competing model, right? Last week we pointed out there was the Opus 4.6 release and the OpenAI Codex 5.3 release. Gosh, those version numbers make me think every time. As I said, I wanted to follow up on that story and look at some example prompts.

So, the first thing we did was run our favorite "Death Star over LA in an SVG" prompt, just because it visualizes the progress of these models so well. And oh boy, did that not disappoint. Okay, let's start with Codex 5.3. This is what we got. It's better than Codex 5.2 and better than the previous Opus models. I'm not sure it's evident that this is LA just from the image, but the Death Star is nice. Fair enough. Same thing in Opus, though: I would say this is the first AI image where I would actually be able to tell that this is LA just from looking at it. There are palm trees, there's the skyline, and the Death Star looks good, too. AGI achieved.

Okay, that's one example. Beyond that, we asked for an ocean wave simulator, and this is what we got from Codex. You can adjust the different sliders, raise the waves, change the lighting. Okay, now look at the one from Opus 4.6. This is ridiculous. It's actually 3D. Look at the light reflections. You can move the sun, go for more of a sunsety feel, and there's even a stormy preset. It's pretty crazy.

I realize these one-off examples are not going to cover all of its capabilities. I mean, how could we in just a few minutes? But one thing that should be noted about Codex 5.3 is that it's 25% faster than the previous version, and it can work for days without losing context. So maybe it's not these little demos that make it stand out, but real developer workflows where people use it over and over again and work with existing codebases. That's going to be rather hard for me to test right here, but I would end with the conclusion that hey, if you're a simple consumer building little apps, one-off sites, or just wanting pretty stuff quickly, then Opus is probably the one to go with. If you're a developer, it's a different conversation and not really what I focus on here on the channel. And hey, if you're enjoying these stories, make sure to subscribe to the channel. It really helps us out, and I do this every single Friday. All right, let's move on: AI Super Bowl ads.

### [3:36](https://www.youtube.com/watch?v=aU5jApm8fWU&t=216s) AI Super Bowl Ads

And I'm not just talking about this as in, hey, here's the news story you should know about. I want to talk about ads in AI and how there are really two camps. OpenAI recently announced that they're going to have ads, and they started rolling them out, but only for free accounts. Again, if you're on a free account in the US: the ads are clearly separated, yet they are still ads, something we're not used to and something nobody really wants. Nevertheless, right now it's more like all services have free accounts, and the OpenAI one comes with ads, whereas the competitors' don't.

And Anthropic didn't hesitate. If you haven't seen these, they made ads poking fun at OpenAI, and they're really good. They're actually hilarious. I'll put the link to the best one below. To be fair, Sam Altman even tweeted about the fact that, okay, they are funny, but they don't represent it fairly. The way OpenAI does these ads, they always separate the ad section and don't make it look like part of the result. Whereas in Anthropic's ads, the personal trainer is giving training advice and then all of a sudden transitions into an offer with a discount code or whatever. — Use code hidemaxing 10 for big discounts. — No, God. Anyway, those ads were shown at the Super Bowl among so many other AI ads. It felt like most ads there were about AI or produced by AI. There were some really bad ones, like this robot here. What the heck was even that? I don't know if this is the smartphone moment for AI or if we had that two years ago, but this race for consumer mindshare and your subscription is on. AI has become so mainstream that it's the main topic at the Super Bowl. But when it comes to ads, if you're a paying user, none of this concerns you. It's just interesting to keep an eye on, because most people are not on a paid account, and this is going to influence their experience. I want to

### [5:12](https://www.youtube.com/watch?v=aU5jApm8fWU&t=312s) ChatGPT Deep Research Upgrade

briefly talk about ChatGPT Deep Research, which got an upgrade this week. So, we ran a comparison for you between the old Deep Research and the new one. The new one uses GPT-5.2, and the reports come out a bit longer, similar to Gemini's deep research reports, which users usually prefer. And there are new features, too. You can search specific sites: if you know there's a certain blog or forum you want to research, you can just add that link, whereas before it self-selected these. I really like this manual control. And yes, you could prompt for it before, but it was not very reliable; sometimes it just ignored those sites, whereas this seems way more dependable.

Also, the phase between your prompting and Deep Research running is way smoother, in my opinion, and again closer to what we saw from Google Gemini. It gives you bullet points, tells you what the plan is, and then once it runs, it just feels like we get higher-quality reports with a table of contents on the side that, yes, are longer, but also just feel better and, generally speaking, more thought through, which I like. And by the way, the same thing goes for apps: you can now integrate the various apps in ChatGPT into Deep Research, if you want the reports to also pull info from an app you've connected, like Google Drive. Okay, then next up we

### [6:18](https://www.youtube.com/watch?v=aU5jApm8fWU&t=378s) Perplexity Model Council

have a few stories that we're not going to spend too much time on, but they're worth your attention. We call these quick hits. Starting out with the Perplexity Model Council. I actually thought this one was really cool. It's an approach where you ask a question and then, instead of getting an answer from one model provider, you get an answer from OpenAI, an answer from Anthropic, and an answer from Gemini, and then a combined answer. I know a lot of power users do this anyway: they open up a tab with Claude and one with ChatGPT and run the same prompt in both. I think that's actually a great practice for developing a feel for how these models perform and for learning what you like. These AIs often come across like personalities; there's no clear better or worse, just a different fit for different people. Perplexity's Model Council essentially consults these different models and then gives you a result. I thought it was interesting, and maybe something we're going to see more of in the future: you'll have specialized models, your prompt will go to all of them, and you'll get the best of all the answers. To be fair, there were already rumors that GPT-4 used that approach internally, that it's not just one model but several specialized ones, and what you get as a user is the result of all of them. You do need a Pro plan to even try this.
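For readers curious what the "council" pattern looks like in practice — fan one prompt out to several models, then synthesize a combined answer — here is a minimal sketch. The model functions and the concatenation step are stand-ins, not Perplexity's actual implementation; in a real setup each stub would call a provider's API, and the synthesis would itself be a model call:

```python
# Rough sketch of a "model council": send one prompt to several models,
# then combine their individual answers into one result.
# The ask_* functions are placeholders; in practice each would call a
# provider API (OpenAI, Anthropic, Google).

def ask_gpt(prompt: str) -> str:
    return f"[GPT] answer to: {prompt}"

def ask_claude(prompt: str) -> str:
    return f"[Claude] answer to: {prompt}"

def ask_gemini(prompt: str) -> str:
    return f"[Gemini] answer to: {prompt}"

def model_council(prompt: str) -> dict:
    # 1. Fan the same prompt out to every council member.
    answers = {
        "gpt": ask_gpt(prompt),
        "claude": ask_claude(prompt),
        "gemini": ask_gemini(prompt),
    }
    # 2. Synthesize. Here we just join the answers; a real implementation
    #    would feed all of them to another model to merge into one reply.
    combined = "\n".join(answers.values())
    return {"individual": answers, "combined": combined}

result = model_council("What is the capital of France?")
print(len(result["individual"]))  # → 3
```

In production you would issue the three calls concurrently, since each is an independent network request and the synthesis step only needs to wait for all of them.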

### [7:19](https://www.youtube.com/watch?v=aU5jApm8fWU&t=439s) Seedance 2.0

And with that being said, let's have a look at video generation, because there were some major leaps this week. And I feel like this is very subjective. People are really hyping up the new Seedance. It's a Chinese model that right now is not that easy to access from here. We tried it: there's a site where you can use it, but we could run one prompt and that was sort of it, and even with other accounts it didn't work. It's all a bit weird. But the results do look really impressive. Here are various examples from across the internet that we found in our research. And here's a comparison of Veo 3.1 generating a sloth in a donut floaty, and then Seedance 2.0 doing the same thing. I'm using Veo because many people consider it the best video generation model, but this one really made some waves across the internet, and people are talking about it. As you can see from some of the other examples, it is really impressive. But again, I think a lot of this is subjective. This one definitely did catch a lot of hype this

### [8:10](https://www.youtube.com/watch?v=aU5jApm8fWU&t=490s) Kling 3.0

week. Another one that came out was Kling 3.0. Let's start by looking at the sloth in a floaty next to Seedance next to Veo, and it looks fantastic. But I think the real unlock here is that you can generate 15-second product videos with multiple cuts in them, as if they were edited, and they claim to have the strongest character consistency yet. It also has an interface where you can just upload reference images, audio, and storyboard frames, and it turns all of that into video. If there's one to play with this week, it's probably this

### [8:37](https://www.youtube.com/watch?v=aU5jApm8fWU&t=517s) ElevenLabs Updates

one. And then finally, we have two releases from ElevenLabs. One is its new audiobooks feature. The voices have gotten so good that they just sound lifelike. I mean, you've heard me say that before, but now they've created a complete toolkit for audiobook creation. And lastly, they released a new expressive mode for their agents. If you're not familiar, that's where you set up an agent to talk to customers in ElevenLabs voices. And now they're even more expressive and more receptive to other people's emotions. You might not like the idea of talking to a robot on the phone, but there's probably no getting around it. And if we're doing it, I'd rather talk to a more expressive and emotionally understanding one than one that talks like this and completely disregards your feelings. — To be fair, I had a university professor who was exactly like that. Maybe an ElevenLabs agent should have taught that class. I remember one guy used to fall asleep in every single lecture; it was just his nap time. To be fair, that professor did have serious nap energy. But that's it. That's everything I have for today. I hope you found something interesting or inspiring. And with that being said, my name is Igor, and I hope you have a wonderful, wonderful week.

---
*Source: https://ekstraktznaniy.ru/video/9626*