# OpenAI Strikes Back & More AI News You Can Use

## Metadata

- **Channel:** The AI Advantage
- **YouTube:** https://www.youtube.com/watch?v=9FruIqE0OEw
- **Date:** 06.03.2026
- **Duration:** 19:47
- **Views:** 1,066
- **Source:** https://ekstraktznaniy.ru/video/10895

## Description

Subscribe for more weekly AI news you can use!

This week, Igor covers the new model releases from OpenAI including GPT-5.3 and 5.4, plus he shows off the new Project Sources update for ChatGPT. He also breaks down the many releases from Google as they condense their AI tools down into the products that work, talks briefly about OpenClaw competitors and Anthropic's latest releases, and more. Enjoy!

Links:
🔑  Free ChatGPT Prompt Templates: https://bit.ly/newsletter-aia
🧑‍💻 Igor Pogany on LinkedIn: https://bit.ly/IgorLinkedIn
🐦Twitter/X: https://bit.ly/AIAonTwitter
📸 Instagram: https://bit.ly/AIAinsta
https://openai.com/index/gpt-5-3-instant/
https://help.openai.com/en/articles/6825453-chatgpt-release-notes
https://x.com/claudeai/status/2026720870631354429?s=20
https://x.com/stitchbygoogle/status/2027082165490794824
https://x.com/GoogleLabs/status/2026374947996840169
https://x.com/FlowbyGoogle/status/2026704701069074603?s=20
https://openai.com/index/our-agreement-with-the-department-of-

## Transcript

### What’s New? [0:00]

Another week where AI keeps progressing, and as I always say, not all of these weeks are created equal. This is one of the good ones, where you're going to have takeaways you can start using today. Not every week is like that. For example, we have OpenAI introducing brand-new ChatGPT models. Then we have a ChatGPT feature that lets you hand an entire Google Drive folder to your ChatGPT project, making it really easy to add more context. I love this feature. Plus a new tiny model out of China that works on phones and old laptops, and a bunch of Google updates. All of that and more in this week's episode of News You Can Use, the show that pulls together all these innovations, all this madness, and filters for the ones that nontechnical users can

### Claude Updates [0:36]

actually use today, which I then get to present back to you. Let's begin. We usually focus on ChatGPT when there's a big new release because most users have accounts there. I know opinions have been split recently on which model is best, and some people have been migrating, but the fact remains that ChatGPT has the biggest market share, and I still use it daily. Now they've shipped a new model, and I want to show it to you. They said a few things about it; I thought it was a very atypical release. They tried to make it very relatable. I guess it's sort of a PR move, but let me tell you, in my mind it actually worked, because what they shipped is GPT-5.3 Instant. If I go into my model selector here on a paid account, you'll see that the Instant model is 5.3, while the Thinking model and Pro (because I'm on the Pro plan) are still 5.2. So what's the difference between Instant 5.3 and the Instant 5.2 we had until a few days ago? OpenAI had their own comms around it, but I tested it, and let me tell you: the difference is that it just understands what you actually mean better. It's more human in that sense. It gets the context of your question. If you ask something jokingly, it will reply jokingly. If you ask for help with a certain problem, it won't do that AI thing where it first warns you about the implications and risks of your activities and only then gives you the answer. It just gives you the answer. Let me show you an example. Starting in the new model, let's take something light, because this is where the difference really comes in: it's in how it responds, not what it responds with. Ask it a question like, "Should I get a cat wearing a hat to live with me in my YouTube living room?" Send that message, and in parallel I'll open a new tab with ChatGPT and switch to the old model, which you can do up here under model selection, legacy models, then GPT-5.2 Instant.
I'll paste the same prompt and run it. Now, let's start by looking at the old model's response. Should I get a cat wearing a hat to live with me in my YouTube living room? Short answer: yes to a cat, probably no to the hat. It explains why a cat could be great for my channel and gives me some pros, then on the hat: be careful here, most cats hate wearing hats or costumes, it can stress them out, it might look cute for 10 seconds, but... So, look, in other words, it's taking me seriously. It actually thinks I'm considering a cat with a hat, and it's giving me cons, telling me why I should not do it, and that a happy cat is more important than a meme cat. And then it's kind of schooling me on why

### Google Docs Feature [2:57]

this would be good for content, but I shouldn't actually do it. Now, let's look at the new model, 5.3, because obviously this question is not serious, right? Does the new model have a similar response? It's got images up here. It says, "If you mean a real cat wearing a literal hat living in your YouTube filming room, the answer is maybe a cat, probably not the hat." And then it kind of memes on me with multiple emojis. It still gives pros and cons, but when it comes to the hat, look at this: cats generally hate wearing hats; most cats will shake it off, freeze dramatically in protest, or plot revenge later. I like that one. See, it's already got that joking tone that I initiated. For daily life, your cat will vote no hat. So, my verdict: cat in the YouTube living room, yes, good idea. Cat wearing

### OpenAI Updates [3:42]

a hat permanently? Your cat will overthrow you. I just like that. Look, I approached it with a joke, and it jokes back. It's not this super serious personality that says, "Oh, sir, actually, a hat is not a good idea; here are the reasons why." It's just a bit more human. That's how I'd summarize it. And this is consistent across all their marketing materials and how it actually behaves. So I think this is OpenAI trying to make models that are a bit more relatable, a bit easier to deal with, rather than the robotic responses and finger-wagging that ChatGPT has been doing a lot of. Finally, I should add that this model is available to everyone, including free accounts, starting earlier this week. So you can go try it out for yourself and see how you like it. From a user perspective, there's definitely a change, and a positive one. Hey there, editing Igor here with an update that came out after this video was recorded: OpenAI actually dropped GPT-5.4 as well. So the Instant model is 5.3, and the Thinking model and the Pro model (on the $200 account) are both 5.4 in ChatGPT. If you don't have it yet, it should arrive within a few hours. Point being, they have a brand-new model, and it's actually state of the art on many benchmarks. Yes, we don't care about those as much as we did a year or two ago, but it's still impressive, especially when it comes to agentic tool use. The real thing it shines on, and beats Opus on, is its ability to use a computer. I think that's the interesting one, but it's also one you cannot judge on benchmarks alone. So really, this release... I mean, this came out Thursday night; I recorded Wednesday night; we release Friday. Right now, at this moment in time, it's really hard to judge how well this model is going to perform. You can try it.
You're probably not going to see many differences, because the big difference here is in how it performs agentically while remote-controlling a computer. Now, this is a big deal on the personal-agent side of things, right? But people will have to start using it, and we will need to see reviews. I tried it with my OpenClaw, Alfredo: switched it from Opus 4.6 to see if it can do things where there were limitations before. This will take a few days; you cannot judge it instantly, and they're still rolling it out. One more interesting fact, though: they went to 1 million tokens on the API. So they matched Opus 4.6, and

### Other New AI Models [5:58]

this is really what it is: they're trying to take the king-of-the-hill position in all of these agentic applications, in the ability to remote-control computers, browsers, all of that. The benchmarks are impressive; let's see what the world thinks. I'll report back next week on the Opus 5.4 story. Oh, the OpenAI GPT-5.4 story. My bad. So many models, huh? Now, here's another ChatGPT thing that changed this week, and I think this one is actually a big deal in terms of productivity. While the first update was more of a fun nice-to-have, this one is something I've been asking for since the day they brought out Projects, and before that, GPTs. I always wanted a feature where you could add context to your conversations, Projects, GPTs, whatever, dynamically: a folder of files that you can update in a different place, where a team can collaborate, where you update files in one place and ChatGPT just pulls them from there, rather than you having to copy-paste every time something changes. It might seem obvious, but it took them years to implement, and now it's finally here. So, let me show you. You can now do this inside of Projects. If you're not familiar with Projects, it's basically a collection of chats where you can attach certain files as permanent context to those chats. So, let me create a test project for News You Can Use. And let's also go to my account and make sure all of my memories are turned off. Okay, memories are all turned off, so this has no context outside of what I give it here in the project. ChatGPT doesn't know anything about me here. So if I ask it, "What do you know about me?", I would expect an answer that tells me nothing: there are no memories, no files, no info I gave it. Somehow, it knows my name and roughly my job title, but nothing else. It gets my location from the VPN I'm using to access this; that's worth noting.
For now, this is only rolled out in the US, so I had to use a VPN to get this feature. Important note for international viewers. But it doesn't know anything else about me. And I think this information it actually got from... ah, my bad. I had the info up here: the nickname Igor and

### OpenClaw Competitors [8:01]

one sentence about me. So it only has that, no other info. Let me go back into the project and show you how to change this. Go to Sources. Historically, these sources only allowed you to upload files. A few weeks ago they added text input, where you can manually add text. Now you have these new options: Google Drive or Slack. I really like the Google Drive one, and here's what I love: in Google Drive you can create a separate folder, take the link to that folder (go to share, copy link), and ChatGPT will get access to everything inside that folder. Finally. For anybody who's not familiar, up until now the only thing you could do was link your entire Google Drive, and every file that has ever been in there would be used as context. In other words, it would miss a ton; it would miss details. Now you can finally curate it. Create a folder in Google Drive; what I did here is add a markdown file with some information about me. This is what we teach in some of our programs at The AI Advantage: how to curate these files with deep context on you. And that's it. I just added this one markdown file, about three pages long in typical PDF terms. Now, when I go back to my project, start a new chat, and say, "What do you know about me?", you'll see it instantly pulls up all the information inside that markdown file. And this would be my recommendation if you're looking for the optimal workflow: always a markdown file, because that's the cleanest format for AI to read, placed inside this dynamically linked Google Drive folder. Then you can share it with teammates, and you could also have a Google Doc in there. While Markdown is ideal, you could probably get away with a Google Doc that multiple teammates update and add info to, with all of it dynamically pulled into your project.
And you can see it knows everything from within that file. Now, this is amazing.
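Since the video doesn't show the contents of the file, here is a hypothetical sketch of what a curated markdown context file like the one described might look like (all names and details below are placeholders, not from the actual file):

```markdown
# About Igor (context file for ChatGPT project)

## Role
- Host of The AI Advantage, a weekly AI news channel for nontechnical users.

## Current projects
- Weekly "News You Can Use" episode, published every Friday.
- Tutorials on consumer AI tools (ChatGPT, Claude, Google Flow).

## Preferences
- Tone: casual and practical, no filler.
- Always suggest takeaways viewers can apply the same day.
```

Drop a file like this into the shared Google Drive folder linked under the project's Sources, and every new chat in the project pulls the latest version automatically; teammates with edit access to the folder can keep it current.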

### Spotify & AI [9:43]

This is a thing I'll be using every day from here on out. And they didn't even make a big announcement about it, but that's why you watch this channel, right? Oh, one more thing: you can also link a single document, spreadsheet, or slide deck from Google. And it works with Slack, too. Hey, at this point I just want to point out: if you're enjoying this show, make sure to subscribe. I create a video like this every single Friday, plus other tutorials, all for free. So if you enjoy this, subscribe, and we get to do this more often. Let's continue. Next up, we have a development that is actually significant for nontechnical consumers when it comes to having a personal AI agent that does things for you. We've been talking about this category for years now, really, and it developed in a way where agents got hyped up as the next big thing, then nothing happened for a very long time. But in recent months, and especially recent weeks, there has been a

### Lara AI [10:28]

lot of movement, particularly around Clawbot and OpenClaw and people buying Mac minis and all that. I'm playing with that too; I made two separate videos just about it. But a lot of that is clunky. What's not clunky is Claude's version of that application, which is very consumer-friendly, called Claude Co-work. The problem with Claude Co-work is that it tries to be this agentic AI assistant but misses some of the essential features. They've been shipping those features, though, with the biggest one arriving this week: they finally introduced a way to schedule tasks. Meaning, if you get one workflow down in Claude Co-work, like organizing a folder, working with certain files, or researching the internet and placing the resulting files in a specific location, then once it's set up, you can tell it, "Okay, now schedule it," and run it once or twice a day. You get the point. We gave it a quick shot because we wanted to know if it actually works well; ChatGPT has this feature, but it's so unreliable it's hard for me to even recommend it. Claude did this well, though, as they usually do. As you can see here, we tried a simple task where Claude Co-work researches the web for new generative AI stories every single day. We scheduled it, and it actually works. It runs reliably and produces these results. Now I could go in and improve the workflow: add additional steps, refine sources, give it additional tools, all the stuff you can do within Co-work. There's a bit of a learning curve, definitely more than ChatGPT, but with this feature it's starting to become worth it. What's clear is that Claude is gunning for the holy grail of a personal agentic assistant (I guess it was referred to as AGI in past years) that is usable by everybody.
That's what this app is supposed to do, and this is probably the most essential feature to get there. One note: this is not available on the free plan; you need the Pro, Max, or Enterprise plans to access it. But if you're into experimenting and trying new AI stuff, this is probably the one thing to try right now and get comfortable with, because this category isn't going anywhere. We'll keep a close eye on it on the show, because I think it's the most important one in AI for most consumers. Next up, a few updates from Google. Basically, they're consolidating apps they had individually into one product. If you've been following the show, we've talked about apps like Opal and Stitch and Flow. They were all standalone experiments where Google tried to create a really useful tool for designers or website designers, or, in Opal's case, a really simple workflow builder. What happened this week is that all three of these apps got major updates. Let me start with the simple one. Google Stitch is basically for web designers, and now they added a feature where you can click into the interface and make an edit directly in there, just by pointing your mouse. It's really simple, really intuitive, kind of a no-brainer feature that it didn't have so far, and now it does. Google Opal was their no-code builder, their automations-for-nontechnical-people tool. We featured it, we even made it the main story of a News You Can Use episode, because it was so intuitive: okay, generate a description for a post, then generate the image to go along with that description, and you're done. It was really simple, but it lacked some of the depth that the more technically intricate applications provide. Now they're on that, because they added an agent step, which makes the workflows more variable.
In other words, you can use an LLM in the middle to make certain decisions for you. You can set up multiple paths, and the agent decides which path is right for the current run, making this a far more powerful tool. It's still all the way on the consumer-friendly end of the spectrum between super consumer-friendly and super technical tools, while giving you a lot of the power that the very technical tools like n8n unlock. Love to see it. And then finally, Google Flow. This is one where I know multiple people working in the creative space, at design or ad agencies, and it was the favorite tool for a lot of them because it was really simple and included access to Nano Banana. By the way, there's a new Nano Banana model; we're making a separate video on Nano Banana 2, so keep an eye out for that. It should be out next week; we're collecting all the use cases for it, and it's going to be really good. Back to Google Flow: it's just a really simple tool for designers to create images and video. Now they added a bunch of new functionalities, but they're not really new; they're functionalities available in other Google apps that have been merged into here. You can include collections and search and filter them. You can do much more precise video editing: use a little lasso to highlight a certain area you want to change, and it only changes that. You can add or remove objects in a video. You can even control the camera motion. All of this was possible in other apps, but now they're consolidating it, trying to create one studio application that is a one-stop shop for all of it. Image generation is free in Flow; the other features are paid. But yeah, those are some amazing updates. And lastly, one more thing from Google: a new model, Gemini 3.1 Flash-Lite. This is mostly interesting to developers.
It's a model that is smarter than its predecessors while being extremely competitive on pricing. Compared to 2.5 Flash-Lite, its predecessor, it costs 25 cents per million output tokens instead of 10, but the quality is way higher than the model that previously cost 30 cents per million tokens. In other words, at this level of speed it's both cheaper and more capable than the best previous model. AI progress is not stopping anytime soon. Next up, all the quick hits we have prepared for this week. These are quick stories I want you to be aware of, but we don't need to spend multiple minutes on each. Starting with a Chinese model, Qwen 3.5. Again, this is more developer-focused, but it's roughly the first model that can run on phones or older laptops while delivering intelligence around the level of Llama 70B, if you're familiar with those models. So nowhere close to the best models out of Google, Anthropic, or OpenAI, but actually decent, usable performance. Obviously it's going to be slow, but people were really shocked by this, because even a semi-decent level of quality running on old computers and the newest iPhones is unbelievable. Again, AI keeps progressing. Then we have the Pentagon story, where Anthropic rejected the US government's demands to use Anthropic models to support the US Department of Defense; the president tweeted really aggressively to push back on that, and one day later OpenAI came in and actually filled that spot. A lot of people had really strong feelings about what this means for the US, for the AI space, and for AI safety. As you know, we're not the channel covering that; we're really here for the consumer use cases, but I figured this was important enough to mention. Next up is a story that interested me initially but turned out to be a dumpster fire: Adobe Firefly Quick Cut. I was so excited when I saw this.
It's supposed to be an AI video editing tool that makes editing quicker: give it a roll of footage, and it cuts it up and adds B-roll. In practice, it just doesn't work the way you want it to, at all. We tried it multiple times, and it's nowhere close to being worth using. Again, this is a first iteration, fair enough, but it's not something we can recommend at this point in time. We'll keep an eye on this category. Then we have Perplexity Computer, which is interesting in theory. They're pulling together all of these different models: you ask a question, they send it out to up to 19 models, collect the different answers, and you get the best of them. I think there's potential in this category, though I haven't seen or heard of anybody getting real value from this approach yet. But maybe you're watching this and thinking, "No, Igor, this is amazing." In that case, I would love for you to leave a comment about what you use it for and where it really upgrades your experience with LLMs. It's an interesting approach; they're trying to differentiate themselves. They've kind of accepted that they're not going to make the best model; maybe they can create unique user interfaces that combine different models. I don't know. I'd always love to hear from anybody who disagrees with my takes. Then we have Notion custom agents. I looked at this and figured they basically brought features already available in a lot of other apps into Notion. Now you can do things like look through your Notion docs and set up an agent inside your Slack that answers questions; they even had an example where it auto-routes developer feedback to the right sources and assigns the right people. Really, I think they just built automations that people in this world have already been using right into Notion and made them a bit more user-friendly. It's cool, but nothing I see myself using week to week.
Again, if you disagree, I would love to hear your take. And finally, there was a study showing how teens use AI. Basically, it says that virtually every teen uses it to cheat on homework, and that teachers and parents are really upset about this. My take would be: hey, they're just using the available tools, right? I remember in school they tried to make us learn the rivers and mountains in Austria by heart. We had to know every single river and every single mountain, and then they quizzed us on it. Looking back now, I would question that approach, especially with the existence of Google; I just have access to that information at all times. And with access to AI now being a thing, that's not going away ever again. You realize that, right? We need to rethink how homework is done. I don't think it's the students' fault that they use available technology to be more efficient; I think it's on the educational system to design assignments that work with the available technology. And that's pretty much everything we have for this week's episode of News You Can Use. I would love to hear which one of these was your favorite. Other than that, I hope you have a wonderful rest of the week. My name is Igor, and I will see you very soon.
