By FAR the Biggest Week for AI in 2025: Google I/O, Claude 4 & More AI Use Cases
21:03


The AI Advantage · 23.05.2025 · 21,442 views · 743 likes · updated 18.02.2026
Video description
Find the perfect AI tool to improve your workflows with the AI Assistant Showdown guide from Hubspot! It's completely free, grab your copy today 👉 https://clickhubspot.com/7cf3a3 What an insane week in AI! Google I/O gave us Veo 3, Imagen 4, and a ton of AI updates to Search and other apps you're probably already using. Then Anthropic decided to release the new Claude 4 models, plus we still need to break down ChatGPT Codex and the big announcements from Microsoft Build 2025. There is a TON to get to and it's all exciting stuff, you don't want to miss this one! Links: https://chatgpt.com/codex https://code.visualstudio.com/blogs/2025/05/19/openSourceAIEditor https://www.microsoft.com/en-us/microsoft-copilot/blog/copilot-studio/multi-agent-orchestration-maker-controls-and-more-microsoft-copilot-studio-announcements-at-microsoft-build-2025/ https://github.com/microsoft/Magentic-UI?tab=readme-ov-file https://www.microsoft.com/en-us/research/blog/magentic-ui-an-experimental-human-centered-web-agent/ https://stitch.withgoogle.com/ https://aistudio.google.com/apps/bundled/promptdj?showPreview=true&showCode=true&showAssistant=true https://www.youtube.com/watch?v=W09bIpc_3ms https://www.youtube.com/watch?v=ctcMA6chfDY https://www.youtube.com/watch?v=I1XOR5KT99Q https://www.futurehouse.org/research-announcements/demonstrating-end-to-end-scientific-discovery-with-robin-a-multi-agent-system https://platform.futurehouse.org/ https://x.com/higgsfield_ai/status/1923158316764758122 https://x.com/danshipper/status/1925594292103635137?s=46&t=0L4sFlZyUGqMCRwFwGsuRg https://x.com/lovart_ai/status/1921958554312831133 Chapters: 0:00 What’s New? 
1:02 Google I/O 2:20 Project Mariner 4:15 Stitch 5:29 PromptDJ 6:52 Claude Keynote 8:54 HubSpot 10:46 Codex 14:56 Microsoft Build 16:53 Lovart 17:31 OpenAI & IO 18:13 Higgsfield Update 18:28 FutureHouse Update 19:15 AI in Fortnite #ai Free AI Resources: 🔑 Get My Free ChatGPT Templates: https://myaiadvantage.com/newsletter 🌟 Receive Tailored AI Prompts + Workflows: https://v82nacfupwr.typeform.com/to/cINgYlm0 👑 Explore Curated AI Tool Rankings: https://community.myaiadvantage.com/c/ai-app-ranking/ 💼 AI Advantage LinkedIn: https://www.linkedin.com/company/the-ai-advantage 🧑‍💻 Igor's Personal LinkedIn: https://www.linkedin.com/in/igorpogany/ 🐦 Twitter: https://x.com/IgorPogany 📸 Instagram: https://www.instagram.com/ai.advantage/ Premium Options: 🎓 Join the AI Advantage Courses + Community: https://myaiadvantage.com/community 🛒 Discover Work Focused Presets in the Shop: https://shop.myaiadvantage.com/

Table of contents (14 segments)

  1. 0:00 What’s New? (230 words)
  2. 1:02 Google I/O (280 words)
  3. 2:20 Project Mariner (481 words)
  4. 4:15 Stitch (279 words)
  5. 5:29 PromptDJ (206 words)
  6. 6:52 Claude Keynote (472 words)
  7. 8:54 HubSpot (450 words)
  8. 10:46 Codex (915 words)
  9. 14:56 Microsoft Build (472 words)
  10. 16:53 Lovart (162 words)
  11. 17:31 OpenAI & IO (169 words)
  12. 18:13 Higgsfield Update (55 words)
  13. 18:28 FutureHouse Update (162 words)
  14. 19:15 AI in Fortnite (418 words)
0:00

What’s New?

This week in generative AI was crazy. I think that's really the word, and you probably already know that if you didn't spend the last week under a rock, meaning completely offline. We had Google I/O, where they unleashed an avalanche of AI products and features that in many categories instantly became state-of-the-art. Then we had Claude's developer event, in which they unveiled Claude Opus 4 and Sonnet 4, which, according to some of the most important benchmarks, instantly became the best coding models in the world. Vibes matter too, and they also released a ton of agentic features that will be supporting like 20% of the apps we actually talk about on this show, because so many of them are built on Claude models. And then we have things like OpenAI announcing a hardware company and releasing a bunch of API features to keep up with these other events, and so much more in this week's episode of AI News You Can Use, the show that looks at all of this week's GenAI releases and picks the ones that you can actually use. And I suppose now we have this rapid-fire segment in the middle where we also mention interesting announcements that you should know about if you want to stay on top of GenAI, which can be hard sometimes. I hope this show makes that
1:02

Google I/O

easier. Okay, let's start by discussing the two big events: Google I/O and Anthropic's Claude 4 event. As you might know, I created standalone videos on each one of them because they deserve a standalone discussion, especially if you want to try out some of the tools. The separate videos on the channel just focusing on those releases will give you a hands-on look at the most significant releases from the events. It's not just me summarizing what came out, but actually trying the product and showing you comparisons to competing products, too, as we love to do on the show. Now, if you didn't see those and don't want to watch them, I'll give you a super quick summary, starting with Google I/O, and add some thoughts since then. So, with the I/O event, Google essentially pulled into first place in many of these categories from what's available right now. I think most significantly, the image and video generation is absolutely mind-blowing. "This ocean, it's a force, a wild untamed might." The new Flow product that I reviewed, and the fact that it generates video and audio at the same time, just makes it feel so different from any video generator that does video only. If you check out one thing from that presentation, just check out the little demo videos of Flow and understand that we're in a world now where AI can generate not just video, but video with dialogue, music, and background sounds out of the box. And if you're not really looking for the fact that it's AI generated, in many cases you wouldn't be able to tell. And what I wanted to
2:20

Project Mariner

add here is that there are actually a few more apps that I would like to spotlight that I didn't have time for in the original video. And to be honest, they sort of either missed my radar or we just didn't have the time to test them amongst all the madness around the event. One of them is Project Mariner, the computer-use agent. You can either use it through the browser or as a browser extension. In our testing, you can see the browser use. And the second one is Stitch, the app that builds interfaces from scratch for you. So, let's start by discussing Project Mariner. We tried it on various scenarios, and with these computer-use agents it's always a bit tricky, because they're pre-trained on certain websites on which they perform well, and then on all the other tasks that you might care about that are not "using Expedia to book a hotel," they tend to work a little worse. And I think it's sort of similar with this one. All the presets work great. As soon as you kind of leave the boundaries, it stops working so great. But there's the ability to teach it a task, and to be honest, we haven't had the time to properly test this out yet. And I want to keep playing with that along with another release, Magentic-UI, from this week's Microsoft Build conference, which is something very similar, but you have to run this locally. It doesn't work in a browser. You need a Docker container, and then this too is a computer-use agent, but it's designed in a way that it's supposed to work together with the human, so you can give it feedback along the way. I think this is very interesting. We're also going to be trying this one more, and if it's worth it, we might even create a dedicated video on it, because you can actually run it locally. We'll just have to see if we can use this productively somehow, and then we would teach you that here on the channel. You can check it out on GitHub too, as this is fully open source under an MIT license.
But yeah, let's return to the Google releases, because that one, Magentic-UI, is from Microsoft, and Project Mariner is sort of a browser-based version of the same idea. So my initial review would be: it seems like Google now also has their own version of Operator that works okay, but from what I've seen so far, this is not the game-changing computer-use agent that some of us are waiting for. It's good, and I'll have to explore the ability to train it on specific tasks more. But I just thought this would be interesting to spotlight, especially next to the open-source version that Microsoft released this
4:15

Stitch

week. And then the second story that I wanted to follow up on from Google I/O is Stitch, an app that creates interfaces like a designer would. Mobile apps, web apps. It's pretty impressive. You can try it for free. And here's how it works. So you can go in here and say, "Okay, I want a personal photo library." And then it just builds the interface for you, which can be directly imported into Figma, or you can just take the code. Now, there are many more examples here, like this board game planner mobile app. It includes the prompts here, so you could also just take these and try your own. So I'll just change this to tennis, as I've been exploring that sport more and more recently, next to padel, if there are any racket sports enthusiasts out there. Fun fact: I actually played four years of tennis as a kid, but that's been a while. My backhand is still terrible. And before I even manage to finish telling you about my childhood tennis experiences, this thing is halfway done generating essentially the starting point for a mobile app. Look at it appearing already. Okay, my bad. I forgot to switch out the second mention of the board games. That's why it appears in one of the screens, but I think you see the point here. Here it did well with the tennis. Created a custom icon. Here's the leaderboard. Super neat. Slowly but surely, we're entering the era of on-demand application creation. And you can just try this for free right now and create both mobile and web apps, just like that. Another
5:29

PromptDJ

interesting one is called Prompt DJ, and this uses the new Lyria model; they actually used this in the intro of I/O. Super fun. Let me put on the headphones here. And here's what you can do inside of this Google AI Studio demo: you can just click plus over here and you can prompt. So I absolutely need a ukulele here. So I'm going to prompt for that. And then, when I hit play, it's going to generate all the different instruments in here. That's the ukulele right there. We add a little bit of lush strings. Increase the temperature to make it a bit more creative and unexpected. And how about a little bit of post-punk in there, with these arpeggios coming back. Okay, and crank up the dubstep. Not sure if that's exactly dubstep, but the ukulele worked, and I think you see the point. You can kind of just add different instruments via prompt here and create music live. I don't know, could be a fun one to try with your friends. I really enjoyed playing around with this, and yeah, you can just try this for free. I believe I don't even have an API key in here. Okay,
6:52

Claude Keynote

next up, the Claude event that presented Claude Opus 4 and Sonnet 4 and a variety of updates to their API, which all enhance the agentic capabilities of these models. I think the most important things to know here: one, it has the best benchmarks for coding ability on real-world tasks in the world. Two, practical usage aligns with that: it builds things that just work. Where a lot of models fell short before, it is really good at one-shotting applications, meaning from one prompt you can get to something that is complete, in many cases of course not in all, but it's exceptional at that. And it's absolutely insane at writing, and by "absolutely insane" I mean probably the first model where it just sounds like a human without much additional prompting at all. I created a separate video on that; you can check out all my thoughts and details on it there. I do want to point out one extra thing that came out since then that I was super interested in. Dan Shipper here on Twitter points out that Claude 4 Opus can do something no other AI model I've used can do: it can actually judge whether writing is good. And that's true. If you ever tried this before, critiquing your text, all the models were always just super positive about what you did well, and then maybe they gave you some pointers. But honestly, they were not really good writing coaches. And I think that makes sense. If the model itself cannot write really well, how is it going to be able to discern if your writing is really good? A lot of this is subjective, I do realize that. But I think there might be some sort of correlation between this being probably the best, or at least most human-sounding, writing model out there and the fact that it can actually critique writing well. Now, he points out in his post that it's extremely good at prompt adherence with long context.
Meaning, if you give it a lot of files or give it very detailed prompts, it doesn't just randomly ignore certain parts, but actually follows through on all of it. Now, this is not the first model to do that well. But point being, if you want to use it for things like editing or critiquing a piece of writing, it's also extremely good for that. So, super powerful things coming out of Claude this week, and I'll certainly be following up on the various apps that actually use Claude under the hood in the following weeks, as soon as they've integrated it and really learned how to use this power to enhance those apps. I'm talking pretty much the entire vibe-coding space. One thing that I've been
8:54

HubSpot

struggling with is figuring out which AI model, LLM platform, or chatbot to be using at any given time. And I'm betting that at some point you had the same issue. If you're confused by all the options, or simply want to know what other experts think, then you should check out this free guide called AI Assistant Showdown: ChatGPT vs. Claude vs. Gemini from HubSpot, who I'm partnering with for this video. And look, if you're confused, this is not on you. The space is a bit of a mess right now. It's just how it is. Actually, there's a survey from Deloitte called State of AI in the Enterprise, and it shows that 83% of business leaders now list AI as a top strategic priority. And one of the first steps is picking a model that you will use. The wrong choice can legitimately cost you hours of time and effort. And honestly, it often just leads to a point where users feel like they're not getting the results they signed up for, when they might just be using the wrong model for the wrong task. And that's why HubSpot created this guide to help remove the guesswork. Inside, you get a cheat sheet with the big three in the space, comparing GPT-4o, Claude 3.7 Sonnet, and Gemini 2.5 on context window, imaging skills, coding ability, pricing, and everything else you might care about. You also get deep-dive playbooks for each model that explain where it shines, and it walks you through a sample day so you know exactly when to switch tools. The guide even shows you how to multi-thread AI in a single workflow, which essentially means doing something like starting the research process in ChatGPT, then using Claude to write the post, and then finally using Gemini to polish. But for me personally, I think there's one section in there, the big visual comparison table, where you get an overview. We actually make versions of this ourselves in our workflows.
So, I know from experience that this is the best way to quickly get a grasp of what's going on in terms of similarities and differences between the models. If you're interested in any of this, you can hit the link at the top of the description to download the AI Assistant Showdown guide and start using the right AI tools for the right tasks today. Thanks again to HubSpot for creating this free resource and sponsoring today's video. All right, now on to the next piece of AI news that you can use. So next up, we've got to talk
10:46

Codex

about OpenAI Codex here, which actually came out last Friday. But hey, I record this video once a week, Wednesday and Thursday night, so that barely missed it. Now, if you haven't heard about this, it's probably the biggest developer-focused release that OpenAI has made, maybe of the year, and I'll try and give you a summary here for a non-technical audience, because the developers amongst you most likely know about the power of this already. So, it's a brand new standalone product accessible through chatgpt.com. As of now, it's only available on the Pro plan, which is $200, with plans for the Plus plan at $20 to get this soon. If you open up Codex here, you will quickly find out that this app exclusively works with GitHub repositories. For anybody not familiar, GitHub is the place where most developers store their projects. That's where they collaborate, some of the projects public, many private. It's essentially a website that's like a library for open-source, and closed-source projects too. And on this channel, we've done multiple tutorials that are supported by a GitHub page where you can get all the code for the project, as it's just the industry-standard way to work with code. Now, Codex connects to this as soon as you log in. You won't get in here if you don't connect your GitHub account. And as soon as you have done that, you can select your project and one of the branches here. Now, I'm not going to do a full GitHub explainer. I'll just tell you that these branches essentially represent versions of the app. So the main branch is sort of like the trunk of a tree, and then the branches are the different versions that you might want to retain access to, because if you add new features, you might want to roll them back. Okay. So here you select the repository and branch, and then you can tell it anything, and you can run multiple tasks in parallel. And the scale of this thing, that's the main thing. It's unparalleled.
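If you want to see the trunk-and-branches idea from above outside of Codex, here is a minimal sketch in plain git. It builds a throwaway demo repo, so the repo and branch names ("add-voice-input") are made up for illustration, not taken from the video:

```shell
# Create a disposable repo so the sketch is self-contained.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main .
git -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "initial commit"

# A feature branch is a separate line of work off the main trunk.
git checkout -q -b add-voice-input
git -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "add voice input"

# Merging brings the feature back into main -- conceptually what
# clicking "merge pull request" on GitHub does.
git checkout -q main
git merge -q --no-edit add-voice-input
git log --oneline
```

If you instead keep the branch unmerged, you retain both versions side by side, which is the "roll it back" safety net mentioned above.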
They haven't entirely revealed what's going on under the hood here but they clearly trained this on many productionready applications. The quality of the stuff this creates is just so high at the advantage to have multiple pro accounts to test these things. And both relatively unexperienced, but curious developers like me and experienced fullstack engineers alike are blown away by this product. And if you're nontechnical, you can use it to explain things. If you're technical, add features, refactor code, buck fix, or flat out rewrite the entire code base just to see if you like what it did. And you can run many of these in parallel. I'm not even sure what the limit is. I think at most I was running like six or seven and it just works and sometimes it takes a while but the quality of the code this produces is unbelievable. AI coding is really moving at light speed. And I want to show you one more thing which is important in codeex which is if you go to environments this is where you actually manage the little virtual machine this is running in. So you can click on that, go to edit, and in here you can store one environment variables, which if you're not familiar, are usually things like login data such as emails and passwords and secrets. And then you can run scripts on this virtual machine to set it up with any tools or libraries that you might need. But it comes with a universal library that has a lot of the stuff you might need when you run various applications already pre-installed. So to round this out, I'll just show you an example. If I want to add a voice feature to my little chatbot here that I taught on one of the most popular tutorials on this channel, I'll just say add voice input. Lean back. And as you can see, it wrote all the code for that. It's deleting these three lines and adding all of this code and documentation to the readme file. And now what I could do is push this by creating a new PR. Again, this is GitHub specific stuff. 
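The environment-variable pattern described above is standard deployment practice, not something unique to Codex. A tiny illustrative setup script might look like the following; note that MY_API_TOKEN is a hypothetical variable name, not an actual Codex one:

```shell
#!/bin/sh
# Secrets are injected by the platform as environment variables;
# fall back to a placeholder so this sketch runs standalone.
MY_API_TOKEN="${MY_API_TOKEN:-placeholder-token}"
echo "token length: ${#MY_API_TOKEN}"

# Setup scripts typically install the libraries the project needs,
# on top of whatever the base image already pre-installs.
[ -f requirements.txt ] && pip install -r requirements.txt

echo "environment ready"
```

The point is simply that the virtual machine starts from a script like this, so anything your code needs at runtime has to either be in the base image or installed here.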
This is not a complete GitHub tutorial, although I do have a step-by-step tutorial video where I teach you these basics of GitHub coming up on the channel, so keep an eye out for that. And when I go here to pull requests, I see "add voice input option." And then, if I wanted, I could click "merge pull request" here, where it would actually merge this brand new branch that it created, as you can see here, "codex add voice input," into the main app. Or I just keep it separate. And now I have a version that includes voice input. And that's how this works. You can spin up like six of these tasks in parallel. This is a bit different from just vibe coding something, as it's more specialized towards working with existing code bases rather than creating them from scratch, which a lot of these vibe-coding tools are focused on these days. But if you already have an app and you want to fix something, or add something, or refactor the whole thing, well, all you need is $200 and Codex can do it for you. Okay, so following up, I
14:56

Microsoft Build

want to talk about Microsoft Build. It's not going to be the biggest segment, because it was very developer-focused and a lot of the stuff they talked about were announcements only, so things that will come in the future. But I think most significantly, they open-sourced GitHub Copilot. If you're not aware, that was the one extension for coders that really started this whole AI coding movement, back in 2021, I believe. And you had to have a subscription plan to use it. It assisted you as you were coding, as sort of an assistant. Now, they open-sourced the whole thing. There's a whole discussion to be had on why they did that, and I think that's out of place for this video, which is more about the ways you can use it. But essentially, they put the whole thing out there under an MIT license, and you could tweak it now if you want to build something that is kind of a remix of GitHub Copilot. All of it is out there now, and you can do it legally. Obviously, you could also use it to aid your development, but that space has sort of developed, I would say, a bit away from these assistant tools, more towards agentic tools like the Claude models inside of Claude Code, or applications like Cursor, or the new one from Google called Jules. But yeah, this is still very significant and available today. And what they shipped is a bunch of updates to their Copilot Studio. And at this point, I have a question: do many of the viewers of this show actually use this product? Because I know a lot of enterprises are restricted to using Copilot only as their AI to work with. But I just wonder if that's a thing in this community that watches these videos and follows this channel. I mean, some of the releases there looked interesting, but I have to be honest, I didn't try them, and I didn't even want to. I wasn't sure if it's aligned with the interests of this channel.
It's not something I need myself, because it deeply integrates with the entire Microsoft ecosystem, of which I guess I use Excel and Word, but that's about it. And it's no secret that some of the alternatives like OpenAI and Google are just more powerful in many ways. But this is the secure solution, and they're adding interesting features here, like multi-agent workflows, that are pretty accessible to anybody. So yeah, if you care to hear more about this type of thing, leave a comment below or upvote a comment that agrees with this. I wasn't even sure if you want to spend more time on this, but they're definitely moving their corporate suite of products forward. Okay, then very
16:53

Lovart

briefly, we ran into a new app that launched. I think its interface is the most interesting thing about it. It's an art generator that takes sort of a similar approach to Canva. So, we did some testing of it ourselves, and it's very interesting, because you talk to this bot that creates these marketing materials for you, and it's all editable within the interface. You don't have to reprompt to change a specific part of the text, and they also have these features that just create a massive canvas where you can create entire campaigns and more. It's called Lovart. I think if you're in the marketing space, there's real value here, where you can create mockups, posters, things like that really easily. And yeah, I always love to see new companies challenging the bigger players like Canva in this space. And yeah, I think this might be worth your attention if you're looking to create marketing materials with AI.
17:31

OpenAI & IO

Very nice. So, next up, we're going to talk about this week's quick hits, the new segment where we brush over a bunch of stories in rapid succession. All of these I consider worth your attention, but maybe we don't have to spend multiple minutes talking about them here. Starting with OpenAI's move into wearables. Now, we don't know exactly what that is about. There's just this conversation between Jony Ive and Sam Altman in a new YouTube video here that is intriguing but doesn't really communicate anything concrete. They're just saying that OpenAI acquired io, which is a hardware company led by Jony Ive. And then, if you want to learn a bit more about Sam Altman's vision, I would recommend the Sequoia Capital talk where he discusses what the core AI subscription of your life would look like. I feel like these two videos are really complementary if you care to learn more about what OpenAI might be doing a year from now. Then we also
18:13

Higgsfield Update

have a new feature coming out of Higgsfield AI, the AI video platform that has the most concrete camera controls and the most realistic human anatomy. Now they have a dedicated feature that focuses on ad generation. So if that's your use case for AI, you might want to check this one out. Next
18:28

FutureHouse Update

up, something, well, actually quite massive. FutureHouse, a company that we featured a few weeks ago with their set of different research agents, actually demonstrated the first end-to-end scientific discovery that has been done with just AI. Let's have a listen. "If we were to just continue this cycle of doing experiments the agent suggests, feeding them to Finch, our data analysis agent, and generating further hypotheses that follow on from that, we might get to new mechanistic hypotheses of how to treat diseases." So, yeah, you heard that right. One agent suggested the research, the other one performed it, and the vision is to just keep doing that indefinitely until we discover the entire universe. I don't know. Very interesting and inspiring. And if you want to try out those agents, they actually made them available on their website, as I shared with you a few weeks ago. So nice to see them actually produce results. And then, next up, this
19:15

AI in Fortnite

one is a little stupid, but I think it's relevant to hear about how AI is integrated into these various sectors. This is Fortnite, one of the biggest games in the world, and they actually implemented a Star Wars character in partnership with Disney, which owns the IP. It's Darth Vader, and they gave him an AI voice that is a model of the original speaker. And as you might imagine, people immediately started abusing this: prompt-injecting him, making him say various profanities. So, the developer had to patch this out immediately, and now it's just a very censored Darth Vader. But I thought it's interesting that they actually made a bold move like this. And I think in the future we're only going to see more video games with AI voices, because, well, that's just a more immersive experience, and I think that's where things are going. And that really wraps it up for this week. What madness this is. So many features, so many releases. And look, for most people there's no way of trying all this stuff. I mean, that's kind of the idea behind the show. Me and the team test them, I present it back to you, and then you can kind of soak in the information or pick the one that might be relevant to you. I'd actually be super curious which one of these use cases is most relevant to you. Just leave a comment below and let me know. And then, based on the comments, I can shape future episodes to make them more relevant to the most engaged audience here. Which, by the way, fun fact for anybody who's still around: I looked at the data for this weekly show over the past few months. What would you guess is the average age of the viewer of News You Can Use? Just have a quick guess. Take a second. Done. Okay, it's 44 years old. Now, I knew that we had a more mature audience, but I'm actually really, really proud of that stat, because that's the average age. So, me and the team will consider that for future episodes.
But honestly, I'm kind of honored to be presenting these things back to a more experienced audience. And I sincerely hope you found whatever you came looking for when you clicked this video. And that's pretty much everything I got right now. My name is Igor. Thank you so much for watching, and I'll see you very soon.
