I Tried Every Important Google I/O Release (That's Available)
Duration: 27:58

The AI Advantage · 21.05.2025 · 13,746 views · 583 likes · updated 18.02.2026
Video description
Re-upload with a brand new segment on Deep Research and Canvas, and all the audio intact! Google just released an incredible amount of new AI tools, models, features, previews and more. Here is an overview of the most important ones and a first look at the available tools.

Links (in order):
https://one.google.com/about/google-ai-plans/
https://blog.google/technology/google-deepmind/google-gemini-updates-io-2025/#performance
https://blog.google/products/gemini/gemini-app-updates-io-2025/#gemini-live
https://techcrunch.com/2025/05/20/google-meet-is-getting-real-time-speech-translation/
https://gemini.google.com/u/2/app
https://labs.google/fx/tools/flow
https://blog.google/products/shopping/google-shopping-ai-mode-virtual-try-on-update/
https://jules.google/
https://labs.google.com/mariner/landing
https://labs.google/
https://labs.google/experiments
https://aistudio.google.com/
https://gemini.google/overview/gemini-live/?hl=en
https://gemini.google/overview/gemini-in-chrome/
https://deepmind.google/models/imagen/

Chapters:
0:00 Overview
1:02 Pricing
3:38 Gemini Models
6:07 Deep Research & Canvas
11:18 Flow (Veo3)
17:07 Live Translation & AI Search
20:13 AI Try On
21:22 Google Labs

#ai

Free AI Resources:
🔑 Get My Free ChatGPT Templates: https://myaiadvantage.com/newsletter
🌟 Receive Tailored AI Prompts + Workflows: https://v82nacfupwr.typeform.com/to/cINgYlm0
👑 Explore Curated AI Tool Rankings: https://community.myaiadvantage.com/c/ai-app-ranking/
💼 AI Advantage LinkedIn: https://www.linkedin.com/company/the-ai-advantage
🧑‍💻 Igor's Personal LinkedIn: https://www.linkedin.com/in/igorpogany/
🐦 Twitter: https://x.com/IgorPogany
📸 Instagram: https://www.instagram.com/ai.advantage/

Premium Options:
🎓 Join the AI Advantage Courses + Community: https://myaiadvantage.com/community
🛒 Discover Work Focused Presets in the Shop: https://shop.myaiadvantage.com/

Contents (8 segments)

  1. 0:00 Overview
  2. 1:02 Pricing
  3. 3:38 Gemini Models
  4. 6:07 Deep Research & Canvas
  5. 11:18 Flow (Veo3)
  6. 17:07 Live Translation & AI Search
  7. 20:13 AI Try On
  8. 21:22 Google Labs
0:00

Overview

Okay. Wow, what an incredible day. Google just held their I/O conference, releasing the biggest batch of AI features, products, upgrades, apps, everything. Their entire ecosystem transforms. Literally, Google Search has a new tab with an AI Mode now. So I wanted to make this video a sort of first look at everything. If you want a comprehensive look at every single little feature added, check out the keynote; I think it's actually really worth watching. But here we're going to be looking at the most significant parts. I upgraded to the new $250 plan, which is mad. But yeah, we're going to be trying out the things that are available now: a first look, an overview of everything that's going on here. So let's have a look at everything Google is offering now with their new Ultra plan and across their entire ecosystem, and everything that changed with this massive AI update. Spoiler: it's actually incredible. [demo audio: "Ocean, it's a force, a wild untamed might."] This is probably the biggest batch of AI releases we ever got in one shot, and there's a lot to check out here. So, let's get into it.
1:02

Pricing

Starting out with the plans, because I think this is what's causing a lot of discussion online. They have new AI plans. These are for their Gemini products, but also some other products; they really span across their whole suite. So I'm going to try to point out, with every single thing we look at here, how it's priced: whether it's freely available, or whether you need the Pro or the Ultra plan. I basically spent the past, what is it, like five hours since release on calls with team members. We tried everything out, and I have little demos here for you, but it breaks down to this: a lot of it is freely available. That's amazing. If you just want to get your hands on some of these things, you can, and after this video you're going to know exactly where to begin. All the links are going to be in the description below, too. The original Gemini plan that was $20 a month has now transformed into the Google AI Pro plan, and they introduced a new one called the Google AI Ultra plan at $250 a month. Now, I don't want to spend too much time talking about this, but I basically spent an hour trying to upgrade. As a European, it's really hard. You need to use a VPN, but then it still doesn't work. And I just want to point out that it is $250, and we couldn't activate this offer even with a US account. Point being: if you're in the US, it's not going to be a problem, and eventually it's going to come to Europe. If you decide to upgrade to this Ultra plan, you're going to have access to all of these things here. Some of them are still rolling out; we're going to talk about that as we go. But as you can see, there is really a lot here. The updates are immense. Every single product that mattered got upgrades, updates, new features, including Gemini 2.5 Pro, their video generation models, image generation models, and their sound generation models. They now have a computer agent that remote-controls your computer, and you can train it on custom data.
They updated Search. Let's get into this. We'll go step by step through what is included in which plan. Just know that not all of this is free, understandably. Some of this stuff is crazy, and it's just not for everyone. For some of these features there's a more detailed breakdown, for example of what's in Gemini Ultra, but that's what we're going to be doing in this video. So let's get straight into it. I have a bunch of tabs open, and I want to start my journey with the two things that stand out for me. Not to say they're the most important ones, but they're the most immediately impressive ones, right off the bat. The first one is the new Flow product, which is basically a video generation studio, video and audio together. You can generate clips like this one in one shot, audio and video together, at a new quality
3:38

Gemini Models

level. And the second one is all their incredible updates to the Gemini app, which I actually have right here, right now, on this Ultra plan. So there are a few things here. I'll start with the ones we'll cover later in the video: Veo 3 generation in here, and Imagen 4 generation. We'll talk about those later on; they're the image and video generators. Right now I really want to focus on what's new here. For me it comes down to two super interesting things. One is the Deep Research and Canvas update; both of these got significantly better. And the second one is the update to their mobile apps, both on Android and iOS, that ships today: now you have a ChatGPT-like voice assistant in there that you can talk to. But mainly I want to focus on the features that I actually use day-to-day, and that I know a lot of other people use day-to-day, which are Deep Research and the Canvas feature, the writing/coding interface that really helps bring the things Gemini creates to life. It doesn't just write code; it displays it inside of Canvas, and that got way better. Also, Deep Research is now powered by the updated Gemini 2.5 Pro thinking model that launched a few weeks ago, and the whole thing has a lot more context. So, enough of the talking. Let's do one more thing before we get into these comparisons, which is checking out the benchmarks over here. We'll keep this brief, but in short: they're impressive. This is the new Gemini 2.5 Pro Deep Think model, which is not available today. It's coming soon, but it's essentially their equivalent to a pro version of o3. To make this as clear as possible: right now in ChatGPT we have o3 available, the most intelligent model OpenAI has, and rumors say they will soonish release an o3 pro that uses more compute, thinks harder, and is therefore way smarter. Well, Google announced just that. It's just not available right now. Gemini 2.5 Pro Deep Think crushes all the benchmarks, as you can see right here. Something to look forward to. And beyond that, they also released the new Flash model, which we can actually access today, and which gets very close to some of the best models out there at a fraction of the price. Enough theory. Let's look at these apps in action, because this is some seriously impressive stuff. What we're going to do is look at one example of a deep research and one example of the Canvas in
6:07

Deep Research & Canvas

practice. And we're going to be comparing these side by side. I'm just going to pull this over, like so. Over here we have Gemini with the first deep research, and ChatGPT with the same deep research. To begin with, I want to point out that the official announcement regarding Deep Research is that it can now include documents, which is big. You can upload a PDF, you can upload research papers, you can upload multiple PDFs, apparently using the new super-long context, the 3,000 pages of context you can use with this Ultra plan now, in combination with Deep Research. But to me, it also feels like the product itself changed a little. I can't prove that, and I've only run a few prompts, but this one in particular really surprised me. Succinctly put, this is the first Deep Research product that I will happily compare to OpenAI's deep research. Matter of fact, on this one I even prefer Gemini. I'm going to keep this super brief, because these are approximately 12,000-word reports, but the Gemini one in this case was just more specific to my query. I asked for a recovery plan for tendinitis, aka tennis elbow, that I have from kitesurfing (learning loops). The ChatGPT one I've read many times before, but the Gemini one, I just have to say, is so specific: all of the healing exercises are specific to kitesurfing. That's what I mean. Each one of these points is a very, very kite-specific technique and component, and it addresses my problem through the lens of exactly what I asked, not just tendinitis in general. Now, I still have to give ChatGPT some credit, because on the alternative healing methods it was actually better in this report, but overall, just picking one of the two, the new Gemini Deep Research takes this home. Again, they only announced that you can now upload files to it.
So if I enable Deep Research over here, I can now add files, like so. I could add different PDFs, etc., and then run the deep research with that, which was not the case in Gemini before. In ChatGPT I can totally do that: I can add files here and run the deep research on top of them. So that is really impressive. I love running deep researches; it's still one of my main use cases for AI, because you can then take the reports and use them as context for other searches and other flows. Next up, Canvas. Canvas is the interactive feature that displays code and lets you write; it's like a word editor and a code editor that can also visualize the result, and they made major updates to it. I'm just going to show you what it did. I said, "Build me a visual web app that simulates the solar system, but each planet is a different tab." Then I ran the same prompt through ChatGPT over here to build a solar system, and it did it. So let's have a look at the ChatGPT one over here; I'm going to hit preview. Then let's look at the Gemini one, where, to be fair, I think I had to follow up once to say "make it interactive and 3D," because the first version was a little lacking: not 3D, not what I had in mind. So I had to reprompt on Gemini; that's worth something. After I followed up, though, it made this. Have a look: on the left we have Gemini, on the right we have ChatGPT. The ChatGPT version: well, not really 3D, not really manipulable. I can go through the different tabs; it's quite basic. Gemini: look at that. You can pick a specific planet, it zooms in and gives you the look, the lighting, everything. Just wow. The whole thing. We can go to the sun, we can zoom out. This is incredible. The ChatGPT one is pretty basic. Okay, fair enough: I followed up with Gemini once, so let's give ChatGPT a fair shot and follow up one time, too.
What I did is I took the specific implementation details that Gemini gave me, copied them over here into a message, followed up, boom, clicked it, and it created this. It thought for 37 seconds. Let's see; this is actually my first time looking at it. Um, yeah. Okay. Is it maybe... ah, is it connected to the fact that I was only using half the screen there, or isn't it? So yeah, this just doesn't work in this example. Again, this is just one example, but these also say a lot. So there you go: Canvas, much improved, and on this one simple example it seems to be beating ChatGPT. Again, this is a first look, not a definitive review of everything, but Gemini is stepping up big time and pulling ahead in the core features that I use every single day, and that's really what matters to me. Plus, with the new Deep Think, beating out everything anybody else has done. Impressive. Okay, image and video. Let's have a look. We've seen music
11:18

Flow (Veo3)

generation models before. We have a very colorful landscape there with Suno, Udio, and Lyria by Google, which was introduced recently, but only as a research preview, and they only generate audio. Then we have video generation models like Veo and all the Chinese models like Kling, plus Runway, which always used to lead the pack, and many other players. Then we have the image generation models, with Google's Imagen 3, which also used to be one of the best; then GPT-4o image generation came and kind of dethroned it. But they updated all of that. And not just that: the new Veo, Veo 3, now also does audio. So I'm going to put on my headphones now and listen closely, because this is a first. You just put in a prompt, and there are many improvements to the model, like better physics and realism, but mainly, just check out this little video of a cat with a hat typing. This came straight out of Veo 3. That's just great. That's just perfect in terms of sound design. The sound effects are spot-on, there's ambience behind it, and even the camera movement, with the slight dolly-in, is excellent. Okay, many more examples here. I actually played around a bunch, so I can show you a few more. That one is with music. This one is a little weird, but still good. Really impressive stuff with the audio. I mean, okay, look, there are three arms; it's not perfect, right? But it's way better than the ones before, and it's the first one that does the sounds. Check this one out, I really like it: the sound just brings it to life, doesn't it? Okay, one more, and then we'll round out this little segment. It generates music there for me. Do you hear the crowd? The crowd looks realistic. The cat looks like something out of an animated movie. It's kind of perfect. I prompted for a cat with a hat and a monocle DJing at a party, or something like that. And is this not it?
Okay, that's the new model covered. I'm not even going to go deep into the image model here, because I think the video model is the big news. Yes, they have a new image model; it has better prompt adherence when it comes to text, but from what it seems, that's now roughly on par with GPT-4o. This video model, though, just blows everything else on the market out of the water, because they also added what previously used to be my personal favorite interface for working with AI video on the web, which was Sora. Sora is not the best model, but I thought the interface was really good: you could edit things together easily, extend clips, things like that. Well, they have that too. They have the scene builder. You can take multiple clips and add them to a scene. I can just click this, and there you go, there's one scene, then I can take another one, and you can stitch these things together. You can extend, you can work within that interface, and you can, I don't know, create a little monocle-cat-with-hat dance party, like so. And as you can see, you can trim right in here, you can rearrange things. It's very simplistic, but it lets you create a quick little sequence. There's one more thing I wanted to show you. Beside the scene builder (again, this is Ultra-plan only, by the way), you can use text-to-video: you can say "a cat with a hat" and it will generate that. As you're doing that, there are settings you need to pay attention to, because by default these are usually not set to the high setting and they're on the old model. So you need to switch to highest quality, and then it's going to generate two outputs at the same time. Here I'm going to do another one with actually the highest settings. Then you can pick between the two scenes it generates in here, rearrange, do things like that.
But I think the key feature inside this new Flow application is that you can actually pick different ingredients. Now, this is not fully functional with the new models yet; they just shipped this base version and they're going to update it over time. But you can essentially use the new Imagen to generate images like this, or like this, whatever you want, and then use them as reference images. And then you can mix and match those reference images. So you could create an image of, I don't know, this beautiful Garden of Eden type scene, then pick a character that you have over here, then prompt on top of it, and it uses those things in combination to generate videos. Now, this does not work well with Veo 3 yet. I tested it a bunch; you can see from my early results here that some of these videos were generated with this mixing. It works, I suppose, but look, they're just not that good. There's actually one more I need to show you before we move on, and I think that was the little breakdancing one. Oh yeah. Okay, full-screen this one. What's even going on? I don't know. It is so much fun to play with. And let me tell you, the audio adds another layer that really sells the whole thing. Sure, it was fun to play with AI video before, but with this, I don't know, it just feels different. It feels more immersive. You can build little sequences, mix and match different characters that you generate into scenes, and turn them into videos with audio right away. I'm doing cats with hats, but you could do anything: generate images of humans, characters, whatever you want, and make things like this. So this is the new Flow. And they have some great video demos of how artists and filmmakers are using it to create extra footage for what they're doing.
They had an app called Whisk before this; I thought that was the best interface to generate images, especially for newcomers. It was exactly like this, but just for images: you could pick ingredients and pick styles (that's still grayed out here), mix and match things, and use all of these generative tools in a super simple interface to create unique things, now with audio too. As you can see, here's the result. See, it does it all: it comes up with the music for it, the sounds. Okay, so that's amazing. So that kind of covers the image gen and the video gen over here.
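Since the Flow settings are easy to miss, here's the checklist from above as a tiny sketch. To be clear, this is purely illustrative: the field names, the "veo-3" model identifier, and the two-outputs default are my assumptions based on what the Flow interface shows on screen, not any documented API.

```python
# Hypothetical sketch of the knobs Flow exposes before generating.
# Field names and the "veo-3" identifier are assumptions, not a real API:
# the point is that the defaults are NOT the new model / highest quality,
# so you have to set them explicitly, and each run yields two candidates.

def flow_settings(prompt: str,
                  model: str = "veo-3",
                  quality: str = "highest",
                  outputs: int = 2) -> dict:
    """Bundle the settings to check before hitting generate."""
    return {
        "prompt": prompt,
        "model": model,       # switch away from the old default model
        "quality": quality,   # "highest" is what enables the best output
        "outputs": outputs,   # Flow generates two scenes per prompt
    }

settings = flow_settings("a cat with a hat and a monocle DJing at a party")
print(settings["model"], settings["quality"], settings["outputs"])
```

Again, treat this as a mental model of the UI, not code you can run against Flow; the app itself is point-and-click only for now.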
17:07

Live Translation & AI Search

Let's move on, because there's more. For example, this one in Google Meet. It hasn't shipped yet, but this is a mind-blower, because it's happening: they're going to have live translation. I think they're beginning with just Spanish and English, but it's live translation, and there are demos of this in the keynote between speakers. So you can talk to somebody who's Spanish-speaking and it will live-translate back and forth; as you can see here in the top right, you will be able to enable this, and video conferencing will be international. And you can imagine this will ship to the phone too, so you'll just be able to use live translators everywhere. It sort of works in ChatGPT, but it wasn't that smooth; it wasn't really a sticky thing, and I didn't like using it that much. This, from the demos, looked incredible. We'll still have to wait for it to ship everywhere; it's supposed to be available, but I don't have it yet. Then they modified Google Search. So if I go ahead and look for, I don't know, "Google AI premium cost," for example, I can do the classic search. At the top you have the AI-enhanced search, which can't generate an overview right now for some reason, but here you have the little box that kind of enhances the search, which you might be familiar with; they've been exploring this over the last few months. And then you have the classic Google results, usually starting with ads. But there's a new tab. This was in early access before; I had access because I signed up for the waitlist. This is sort of a ChatGPT Search / Perplexity built straight into the native Google Search experience. All you need to do is tab over, just like you would tab over to Images during a typical Google search. So I tab over, and all of a sudden I'm not getting a bunch of ads anymore.
I'm not getting hundreds of blue links that rank for whoever is best at optimizing for SEO. I'm getting just the results. There are also links. And the interesting thing is, if you watch their demos of this, the whole interface is very fluid and changes depending on what you search for. You could imagine that if you look for stocks (this is not there yet), it could pull up analysis tools and do things Gemini would also do: run Python code, do calculations for you, create custom graphs, things like that. You could imagine that with the new Gemini models, which are really good at building applications, and the new diffusion model they also showed off, which could build an application in a second, in the future it will build a custom app for you and deliver that as your search result. All you'll need is some sort of expensive AI subscription, but then you'll have these experiences: you'll look for something, and rather than giving you links to places that might solve your issue, it can just build the app to solve it. Again, that's talking about the future, but right now we have this, and it's already fluid and modular, and I think it's amazing. And you can enable it: if you don't see it, go up here to Search Labs, find this AI Mode ("Try AI Mode"), and activate it. I enabled it for myself already. So, this is impressive. Some of these things are really revolutionary; they're cannibalizing their own product. Interesting. Let's keep an eye on it. But there's more. Some of these I'm going to breeze over, because they're not available yet; they're supposed to be available soon. But this is an
20:13

AI Try On

interesting one. This new Google AI search mode is also going to include shopping, obviously, right? They're going to be offering products right in there, in a fluid interface; maybe your agent can just buy the product right away. And they have this new thing that takes an image of you and then fits the clothes onto your body. Again, there's more detail in the keynote, but as I understand it, they kind of create a 3D model of your body, and then they fit the clothes onto you, and you can try them on right through the web: you just click "try on," you pick an image of yourself, and then you have it. Let's just take a second to appreciate this. Honestly, we've seen demos of these things in the news-you-can-use episodes I do every Friday on this channel; we looked at these applications, these Hugging Face demos that sort of work. Google wasn't in the game. All of a sudden, they're shipping all of these things that the competition has been cooking up. Small startups, small labs here and there have been making these things, and Google just makes seemingly production-ready versions of them, 20 of them, and presents them in one event. That's really what happened. So this is really interesting. And there are also going to be auto-buy features; I wanted to point that out too. I'm curious to see when this is actually going to be available, but there's more. This one really quick.
21:22

Google Labs

This is essentially a competitor to OpenAI's Codex, which we haven't talked about in depth on the channel yet. Codex is a developer product: it connects to your GitHub, you pick a branch, and you can spawn like 10 agents, like 10 junior coders, or actually senior coders, because it's that good, and it works on the different tasks you give it. Jules is sort of their competition to this. Time will show; I won't judge it yet, but it looks like an exact competitor. The reviews of Codex in the developer community have been absolutely insane, though; everybody's blown away by how good it is. This is Google's competitor to that, and they just kind of dropped it as a side note: hey, we have Jules, an AI agent that integrates with GitHub and can asynchronously code for you. So that's the developer-focused thing they released. Then there's Project Mariner. This one is actually available right now, though I haven't managed to get it to work on my machine; the team is already trying it out, so we'll follow up on this. It's essentially like OpenAI's Operator, which, for anybody who doesn't know, is a computer-use agent: it basically uses mouse and keyboard to operate your computer and get things done. The most interesting thing to me here is one feature that I've been missing from all of the computer-use agents we've tested so far. And we've tested them all; we actually have a set of test cases, test prompts, that we run on them. They're not reliable. They're not good. Even Operator loses the cookies after a while, and it's only trained on the few partner sites where it works reliably, like Airbnb and DoorDash. Beyond that, it's really not reliable enough to be a product. And the one thing I've always wished for is: why don't they let me train it by doing the thing manually while it looks at my screen?
Heck, I'll do the thing manually five times; just take my behavior and replicate it the next 20 times I would be doing the task. Well, Project Mariner has this feature: teach a task. So you can actually teach it specific tasks. Now, this is just getting started. It's coming in the form of a Chrome extension, and it's also Ultra-plan only, the $250 plan. Obviously you won't be getting computer-use agents for a low price here. But as you can see, Project Mariner is right here, in early access. It's just getting started, but the extension is out now, so if you're in the US, you can download and test it right now. I'll follow up on Friday with more practical use of it, but it looks super impressive. A few more things here, also for the AI Pro plan. By the way, just to recap pricing: this was the Ultra plan, right, same as the Flow app I talked about. This AI Mode in Google should be available for everybody for free. This one will, I think, be available for free too; don't quote me on that, though. Jules is, I think, pay-per-use, and Mariner is in the Ultra plan, the $250 plan, right? And then we get to this. Honestly, this is worth a dedicated video, going through each and every one of these. I've loved featuring Google Labs products over the past few months already, and now they've added so much. Jules is in here as one of them. Another is the SynthID Detector, which analyzes AI-generated footage and tells you if it's AI-generated; they said they've already run it on 10 million images or so. It's new tech that analyzes whether something is AI-generated, and you can basically upload things to it and try it out. And then there are another 20 things, like 15 of which are new. Project Mariner is one of them; Flow, which we talked about, is in here too. But there are so many more.
I would just recommend you check this out. There are really fun ones that have been out for a while, like GenType, where you can type in a phrase and it generates it in a custom font that it creates for you. That's really cool, but it's been around for a while. There's a bunch of experimental stuff like Project Astra, too, which is now sort of built into the mobile app, as I mentioned with the video features. NotebookLM has a desktop app now too, which we'll have a look at in Friday's video, because NotebookLM is great. Mariner isn't free, but most of this is. So if you're looking for free stuff to try, just go to labs.google/experiments and try some of the experiments available in here. Especially here at the bottom: we had a lot of fun with GenChess, which can generate custom chess pieces. If you're into chess, check it out. Also free. A lot of these things are also immediately available in Google AI Studio, their developer interface. So again: Gemini is their consumer interface, and Google AI Studio is their developer interface. They're kind of unifying everything; actually, they're doing well on the naming, I like it a lot. AI Studio is supposed to get the new Imagen model, which is not available yet. I think as of now nothing really new is available in here except the Gemini 2.5 Pro model, which has been out for two weeks, and the new Flash model that released today, which is available here immediately. Okay, so that's AI Studio. Then let's round out the video with the last few points. This is the Live feature I talked about: the voice-assistant interface, where you can also use video to interact with it. Wonderful thing. Then we have Gemini in Chrome. This is also going to be available to everybody soon; it's not out yet, but they're basically altering the Chrome browser.
As I said, they're redoing their entire product lineup with AI. The Chrome browser is going to have an AI button at the top. A lot of the extensions you might have seen over the past years, well, it's going to cannibalize those, right? You're going to be able to click the button and have an AI assistant on every site. I imagine this would also work on YouTube videos and all across the web: you could just use Gemini right there and chat with the website. So that's coming. And then this is a page on Imagen, if you care about the new image model: more realistic, more detailed, but mainly, the big update is that it does text really well, which catches it up to GPT-4o. It's just a product page with an overview of everything. Oh yeah, and then they have a small model with 4 billion parameters, for all you technical nerds out there: a 4-billion-parameter model that is almost on par with some of the top models out there. That's an insane one too. There is so much to talk about here, but I think this does it for the initial overview, so I don't need to recap; you've seen it all. I do recommend you check out the Google I/O keynote. And in Friday's News You Can Use, I'm going to follow up on some of these stories: we're going to have a look at Project Mariner, and we're hopefully going to get access to Gemini Ultra and compare the new Deep Research. Until then, follow us on social media for more live updates. If you enjoyed this, don't forget to leave a like and subscribe for the Friday videos. And yeah, that's all I got today. Frankly, world-changing releases. All I'll say now is: let's see what OpenAI does to answer this, because they've got to make a big move to keep up. Good job, Google. I'm impressed.
