These OpenAI Releases Are INSANE & More AI Use Cases
25:32

The AI Advantage · 03.10.2025 · 12,215 views · 371 likes · updated 18.02.2026
Video Description
Warp is free to try, but for a limited time my friends at Warp are offering their Warp Pro plan for only $1! Use code IGOR to redeem here: https://go.warp.dev/ai-advantage

In this video Igor showcases OpenAI's new Sora 2 video model, breaks down the new ChatGPT parental controls, tests the new AI features in Photoshop including Nano Banana editing, and more.

Free AI Resources:
🔑 Free ChatGPT Prompt Templates: https://bit.ly/newsletter-aia
🌟 Tailored AI Prompts & Workflows: https://bit.ly/find-your-resource

Go Deeper with AI:
🎓 Join the AI Advantage Community: https://bit.ly/community-aia
🛒 Shop Work-Focused Presets: https://bit.ly/AIAshop

Links:
https://openai.com/index/sora-2/
https://www.youtube.com/live/gzneGhpXwjU
https://apps.apple.com/us/app/sora-by-openai/id6744034028
https://openai.com/index/introducing-chatgpt-pulse/
https://openai.com/index/introducing-parental-controls/
https://openai.com/index/buy-it-in-chatgpt/
https://cdn.openai.com/pdf/d5eb7428-c4e9-4a33-bd86-86dd4bcf12ce/GDPval.pdf
https://lovable.dev/projects/bca94c88-804c-4345-a6a2-f7942d4f52f0
https://gemini.google.com/app/
https://x.com/GeminiApp/status/1972678638542766459
https://claude.ai/chat/149ea6b3-7e97-451b-b78d-f0ad6867a0bf
https://variety.com/2025/film/news/ai-actress-tilly-norwood-talent-agents-zurich-summit-1236533454/
https://www.youtube.com/watch?v=3XvzGhvMKxs
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity

Prompt:
A professional, high-resolution profile photo, maintaining the exact facial structure, identity, and key features of the person in the input image. The subject is framed from the chest up, with ample headroom and negative space above their head, ensuring the top of their head is not cropped. The person looks directly at the camera, and the subject's body is also directly facing the camera. They are styled for a professional photo studio shoot, wearing a smart casual blazer. The background is a solid '#141414' neutral studio. Shot from a high angle with bright and airy soft, diffused studio lighting, gently illuminating the face and creating a subtle catchlight in the eyes, conveying a sense of clarity. Captured on an 85mm f/1.8 lens with a shallow depth of field, exquisite focus on the eyes, and beautiful, soft bokeh. Observe crisp detail on the fabric texture of the blazer, individual strands of hair, and natural, realistic skin texture. The atmosphere exudes confidence, professionalism, and approachability. Clean and bright cinematic color grading with subtle warmth and balanced tones, ensuring a polished and contemporary feel.

Chapters:
0:00 What's New?
0:36 Sora 2
6:01 ChatGPT Pulse
6:48 ChatGPT Parental Controls
7:48 New Benchmark Measures AI Usefulness
10:17 ChatGPT Instant Checkout
10:54 Warp.dev
13:30 Anthropic Updates
19:07 Lovable Cloud
20:54 Pro Headshots with Nano Banana
21:17 New AI Features in Photoshop
23:19 Hollywood Backlash Over AI Actress
23:50 Rise of 'AI Workslop'

Connect with Me:
💼 AI Advantage on LinkedIn: https://bit.ly/AIAonLinkedIn
🧑‍💻 Igor Pogany on LinkedIn: https://bit.ly/IgorLinkedIn
🐦 Twitter/X: https://bit.ly/AIAonTwitter
📸 Instagram: https://bit.ly/AIAinsta

This video is sponsored by Warp. #aiadvantage #ai

Table of Contents (13 segments)

  1. 0:00 What's New? (146 words)
  2. 0:36 Sora 2 (1251 words)
  3. 6:01 ChatGPT Pulse (190 words)
  4. 6:48 ChatGPT Parental Controls (227 words)
  5. 7:48 New Benchmark Measures AI Usefulness (540 words)
  6. 10:17 ChatGPT Instant Checkout (148 words)
  7. 10:54 Warp.dev (593 words)
  8. 13:30 Anthropic Updates (1287 words)
  9. 19:07 Lovable Cloud (426 words)
  10. 20:54 Pro Headshots with Nano Banana (91 words)
  11. 21:17 New AI Features in Photoshop (499 words)
  12. 23:19 Hollywood Backlash Over AI Actress (109 words)
  13. 23:50 Rise of 'AI Workslop' (389 words)
0:00

What’s New?

Welcome to another episode of News You Can Use, in which we'll be looking at various stories, and let me tell you, OpenAI has been releasing things en masse this week that we haven't seen before. So that's kind of the main story of this week, between Sora 2 and the brand new social media platform that goes along with it (yes, that's a thing), or the ChatGPT Pulse release, where it proactively sends you notifications, or Google's AI image editing being integrated left and right. We have so many things to discuss and demo in this week's episode of News You Can Use, the show that rounds up all the AI releases, filters for the ones that matter, and then I get to present them back to you, including demos, comparisons, all the good stuff that we like to do on this channel. So, let's
0:36

Sora 2

begin by talking about Sora, cuz this is really the big story of the week. There are a few parts to this. I want to start by saying that this caused a massive wave on all social media channels. On Instagram, people are going crazy and showing off all sorts of memes and use cases as if this were the first good AI video model ever. Which first got me thinking, hey, this is not that big of a deal. Like, haven't you seen the Chinese models? Haven't you seen Veo 3? But then you dig deeper and quickly realize that the unique selling proposition is not the fact that this is on par with some of the best models out there. I would even say the audio is arguably better than Veo 3's. — Cannon bottle. That's cold. — The unique selling proposition is the fact that they shipped it in a standalone application, only on iOS for now, and only in the US. That application acts sort of as a TikTok clone that only includes AI-generated content, with one standout feature. And that feature they call Cameo. It's essentially where you record a voice sample and a video sample of yourself, and it creates an AI cameo, aka an avatar of yourself, that you or other users can then use to generate anything, with your permission, of course. I mean, this is sort of a privacy nightmare, to be honest, because you're literally cloning yourself and making that available on a social media platform. But yeah, this is happening, and not just that: you can create things with your own cameo, you can also use other users' cameos, and you can use multiple and have them interact. What they created here I think is really impressive and is a first in the AI space. We've been hearing and talking about an AI-powered social media platform from OpenAI for a while now, and this is it. It's an AI-native TikTok, which understandably raises a lot of concerns, but let's look at some of the examples here for you to get a bit of a feeling of what's going on.
And ultimately, I would actually really recommend watching the live stream they did here. It's 20 minutes, and it's probably the best live stream they have held yet, in my opinion. It answers all the questions that I had, and the demos are fantastic. It shows you the different functionalities: the ability to remix existing videos with your likeness, and the ability to add different objects or remix pretty much everything on the feed with text prompts and your cameos. So, you can insert yourself into any video. You can dance battle with your friends, whatever it might be. I just think this is a really good interface, and this is something that was already true for Sora 1. The one thing I mentioned at Sora's launch was that, hey, the application actually has the most user-friendly interface. They really got that right, but the model at that point was just not that special anymore. As you can see from some of these examples, they used Sam Altman's likeness with a custom prompt to generate these videos. And one thing that I want to highlight here, which was not immediately obvious to me, is that these videos are actually edited too. It doesn't just generate clips as you could up until now. It actually edits the thing. It cuts in, it cuts out, the sound remains consistent. There's music in the background that plays throughout, there are sound effects. It's just like your experience on Instagram Reels, YouTube Shorts, or TikTok, but all created by AI. So, news flash, I actually managed to get access to Sora. The codes have been quite widely available. And I want to start off by actually giving you a code and proposing something. Look, it's first come, first served. And if you get this code and use it, what you can do is go down here and say "invite friends" in the interface, and then you can leave a comment below or reply to another code and share one of your codes.
This way, theoretically, if everybody does this, we should be able to get every viewer of this YouTube video into Sora. If you're feeling generous, you can copy two codes and we can keep this going. Anyway, now that I'm in, I'm able to generate videos and use all of the different cameos of the other people. So, right off the bat, I did what I always do. And I should point out the site is very laggy at this point in time; I think it's just getting a lot of usage. And then if I look at my first generation, of course it was — "Hey, handsome, looking cozy in your new hat. You like it?" — a cat with a hat. And then I wanted to try the cameo feature. So I used Sam Altman's cameo and said, "Walking a cat with a hat." And I got this. — This is her little hat. She actually likes it. See, she doesn't mind it at all. — And now here's the interesting thing. Whenever you create a new one, you have all these different cameos. And you could also switch to Sora 2 Pro. The difference between Sora 2 and Sora 2 Pro is subtle, but it does make a difference. — No, no. Dodge, you pixelated pretzel. Roll, you medieval idiot. He's still swinging. Come on. Roll, you tin can, roll. Who designed this? He's winding up. I see him, Morty. I see it. — It's a bit sharper. The audio sounds the same. But ultimately, the most interesting thing in here, I think, is the fact that you can create your own cameos. And this is something I unfortunately have not been able to do as of yet, because you need a few things to access the cameos. First of all, you need an OpenAI account. And then you need to be using a VPN and an invite code to even use Sora. These are the three prerequisites. But even if you have that, in order to create your own cameo, you need the mobile app. And in order to get the mobile app, you need to set your iPhone to the US location and have a US Apple ID. And I did some research, and the best way to do that is to get an extra phone.
So I will be doing that over the course of the next week, so for future videos I can test all of these applications for you. And I guess next week, I'll be sharing my own cameo with you. As of now, enjoy some of the invite codes; they're becoming quite widespread. And then I just want to say that this feed is actually unbelievably entertaining, way more so than I initially expected, because the restrictions are very low. You have all of this well-known IP like Pokémon and SpongeBob, and all of it just works super well. People are remixing it, and my For You page is already becoming better than it was in the first minutes, and it's just super entertaining. So there you go. Go try it for yourself if you can, and more next week, but for now let's
6:01

ChatGPT Pulse

look at some of the other OpenAI releases, which are again firsts in their category. One of them being ChatGPT Pulse, a new feature that proactively sends you notifications. Now, actually turning this on, if you're on a Pro account (again, this is gated to the $200 plan), is not obvious. It took me multiple minutes to actually find it, and it's only available on the mobile app. You have to go into your settings, then Notifications, and there you can find the Pulse updates, which you need to switch on. And then also, I should note, under Personalization there's this new Pulse setting where you can use your memories with Pulse, which also requires "reference chat history" to be enabled. Only if you have all of this on will Pulse work. This took me a while to figure out, so I guess I'll have to wait for my first pulse, because each day I sort of turned one of these things on, and I also had ChatGPT notifications off. So I'll follow up next week with my first
6:48

ChatGPT Parental Controls

pulse. The next thing that OpenAI introduced, and again this is a first (so this is really the theme of this week), is parental controls. This is not something that's available inside of Anthropic's Claude or Google's Gemini or any other AI model. I mean, forget about the Chinese ones having parental controls; the only control those have is the government overlooking all the data. Little stab right there, but hey, I think that's fair enough. But yeah, this is coming to ChatGPT, and parents will be able to limit the content that their children can generate with ChatGPT, including essential controls like removing sensitive content or even disabling things like image generation or saved memories. Eventually, the best version of these apps will be the unlocked one, where adults can access the full range of thoughts of an adult human being. I like this base principle of everything that is legal within our society also being legal within AI tools. And obviously you need a separation between minors and adults. This feature is available as of today for all ChatGPT users. You just have to go under Parental Controls, then you can add family members, invite them via email, and then set up the controls in here. If you're a mom or a dad, it might not be the worst idea to do this. And
7:48

New Benchmark Measures AI Usefulness

then one more thing out of OpenAI, and we'll just quickly brush over this, but I thought it was extremely interesting, because they introduced a new benchmark that evaluates different AI models on how much value they can create in GDP-relevant tasks. In other words, on tasks that are actually performed in the real world and that create real value in society. And I always like reading these practical papers, but what really stood out to me is the fact that their own paper clearly shows that Claude Opus 4.1 performs way better than all the GPT models on real-world tasks. As a matter of fact, this is one of the big graphs in here: this red line is what would be rated as the performance of an industry expert, and you can clearly see Opus 4.1 being the closest to that line. And it doesn't even include the new Sonnet 4.5 release from this week that we'll talk about in a second here (I also created a separate video on it this week). But as you can see, it really sets Anthropic apart here. And in the appendix of this paper, you can see the detailed breakdown. I mean, this paper is only looking at 1,320 tasks across 44 different occupations, but here you have a breakdown by sector. So for real estate and rental and leasing tasks, Claude crushes it and is essentially almost at the human baseline. And I agree with this: with these sales- and transaction-related tasks, Claude is so damn good. It sounds more human, the analysis is more useful, it's more concise and to the point. And for retail-related tasks, you can see it even beats the human baseline. If it just comes to providing information, GPT-5 high actually gets the closest. And if you want to learn more about the details, you can pause the video right here and have a closer look at some of these. So, as you can see, for industrial engineers, it doesn't even get close. But for shipping, receiving, and inventory clerks, multiple models perform better than the human baseline.
Software developers is another one that ranks high. And again, this data set will not be perfect. Nevertheless, I think this is a good indicator of which lines of work AI is going to have the biggest impact on in the near future, versus jobs like film and video editors or audio and visual technicians only getting a little bit of help. Financial managers are another one where Claude gets really close, and that's because it's so damn good at working with Excel sheets at this point, I think. My last note on this: I think that makes up a lot of the data, because the paper also shows that a bulk of these tasks is actually work inside of Excel sheets, PowerPoint presentations, or PDFs, which Claude is exceptional at, and that's why it scores so high. If it's pure text or other tasks, GPT-5 high actually goes toe-to-toe with Claude; in pure text, GPT-5 is actually way better. So yeah, very interesting stuff. Hope you found this as interesting as I
10:17

ChatGPT Instant Checkout

did. And then, super quickly, another new thing out of OpenAI is an instant checkout feature. So we had ChatGPT shopping before, where you could buy things from within ChatGPT, but it would redirect you to the merchant's site and you'd buy it there. Now, starting with Etsy, there's a direct checkout feature within ChatGPT. It's US only and Etsy only for now. But for the very first time, you don't have to leave ChatGPT to make the transaction. And I guess if you play this out, all of your shopping would be done directly from within ChatGPT. And eventually AI would even make the decision on buying things, cuz it would know your needs better than you do. But that's far off in the future; this show is here to talk about what's possible today, most of the time. Okay. So
10:54

Warp.dev

recently I've been using this new AI tool called Warp Code. It's very similar to a classic IDE, which is the default code-editing environment that developers use to build applications, but it's built with AI and agentic capabilities in mind from the ground up, rather than slapping them on top of an existing platform. If you follow the channel closely, you might have seen a full tutorial showing you the workflow inside of Warp, and I recommend you check that video out if this interests you. But I wanted to quickly show off one more feature that didn't make it into that video, and that's using MCPs within Warp. Also, a huge thank you to Warp for sponsoring this video and making our productions possible. So, if you've been following the channel for a while, you'll be aware that I'm a huge fan of open protocols and especially MCPs. If you've been living in a cave and are still not aware of what that means, it's the Model Context Protocol, and it's basically the standard way of connecting tools to agents. And you can do exactly that inside of Warp Code. So, as you develop and as you build your application out, you can connect to an external data source with an MCP. Let me show you what that looks like in practice. All you need to do is head on over to your settings inside of Warp, and then under AI, if you scroll down, you're going to see this MCP section here: "Manage MCP servers". And in here, I can simply click "Add". For example, for the Context7 API key, I did the following: I entered this config and linked the API key from my logged-in Context7 account. Now, if I start a new project, I can access all the documentation in the world through Context7, because it's connected already. And there it is. It's using the MCP tool to pull in the documentation.
If you wonder how to set up MCP servers and you feel a little lost, I would actually recommend heading over to the Warp documentation here, copying this entire documentation into a brand new ChatGPT thread, and then doing the same thing for any MCP server that you might be interested in, like this Context7 MCP on GitHub. Same thing: you just add all of this context into the conversation, and ChatGPT will guide you through exactly what you need to set this up. And that would be an easy way to access all the up-to-date documentation in the world through the Context7 MCP. And there you go. Now Warp Code can pull info directly from that source, and I don't have to constantly manage the context manually. And that's just one of many amazing features inside of Warp. If this segment interested you, make sure to check out the full video to see what the workflow looks like. There are just many little optimizations that add up, and developers report that they save 5 hours per week just by using this app. And for a limited time only, they're giving away their premium Pro plan for $5 per month. Head on over to the link at the top of the video description and use the code ADVANTAGE to redeem that offer and get started with Warp today. All right, and now let's have a look at the next piece of AI news that you can use. Next up, I want to talk about all
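For reference, MCP servers like Context7 are typically registered with a small JSON entry that tells the client how to launch the server process. Here's a minimal sketch of what such an entry can look like; the package name, flag, and key placeholder are assumptions based on the Context7 README, so check the Warp and Context7 docs for the current format:

```json
{
  "Context7": {
    "command": "npx",
    "args": ["-y", "@upstash/context7-mcp", "--api-key", "YOUR_CONTEXT7_API_KEY"],
    "env": {}
  }
}
```

Once an entry like this is saved under "Manage MCP servers", the agent can call that server's tools (here, documentation lookup) on its own, without you pasting docs into the prompt by hand.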
13:30

Anthropic Updates

the Claude releases this week. If you follow the channel closely, I created a separate 30-minute-long video showing off what I think is the biggest feature in here, which is their browser extension. I'm not going to go deep into that, because that's what that video is for. I showed you four or five different workflows, step by step, and results that you can achieve with this. It's been a few days, and I still stand by this being the biggest innovation in AI for me and my work since deep research. I think it's that good. And the biggest thing that I would point out about it now, which I didn't mention in the video, is the fact that it can reliably execute the task that you give it. There's just less variance. I'm not saying it's perfect; it's just more consistent. But the new extension is only one of multiple releases, and I really didn't show you any of the others, cuz I thought this one was so significant that I wanted to spend all the time on it. If you didn't see that video and you enjoy the show, I highly recommend you check it out. It really shows the capability of this and where I think the AI space is going next, which is these browser agents that have the ability to fetch context for you, rather than you having to know exactly what to give them and then manually providing it. But there were other releases, too. Obviously, there was the Sonnet 4.5 model release, which we can just quickly talk about and say that it's pretty much state-of-the-art on most benchmarks, especially on coding, and, as I highlighted, on these computer-use tasks, which are real-world tasks that humans actually perform with computers. It is head and shoulders above everything else. So it's a really great model, but we hear that a lot these days. The vibes on it, as people would call them, are very positive.
A lot of people absolutely love this, and I know people who switched from Claude Code to Codex have now switched back to Claude Code, because Claude Code also got a bit of an update, and this I also want to show you. So if I just open up Claude Code in my terminal, you will see it has a new interface, and this is essentially Claude Code 2.0, cuz it has so many new features. I think most significantly, they now have checkpoints built in. Before, you needed to use GitHub to arrive at a previous version, but now it's built in. Also, the default model is Sonnet 4.5. They no longer have the Opus-for-planning mode that I recommended in my (what was it, 70- or 80-minute-long?) Claude Code beginner tutorial. I guess the only thing that changed since then is that you just keep this on default, because Sonnet 4.5 works so well. Then there's also this new /context command, which shows you the context usage visually, which I really love, because an important part of Claude Code is not hitting this context limit unexpectedly; often it compacts things away that you might not want compacted away. So this is a welcome addition. Additionally, there's also this new feature. It's not really a release, but it is something that has been really missing: claude.ai/settings/usage. This allows you to track your Claude account usage across all of the things that link to your account. That would be their web interface, Claude.ai; Claude Code, which is their CLI agent; and now even the new browser extension that remote-controls your browser. All three of these are paid through your account and therefore count against these usage limits. Before, you didn't really know where you were at; you kind of just got locked out at a certain point and had to wait for a few hours. Now you can finally track this here. Which leaves us with one more thing from this release that I really wanted to highlight here in this show.
And that is this fun little interface that most people probably don't have access to, because it's only accessible on the $200 Max plan. I don't think this really has a specific use case, and I wouldn't even recommend it as such, but I think it's a wonderful thing to demo in this show, because this is sort of like a virtual computer, interface-wise. You can really think of this as Anthropic's Artifacts in a brand new, more friendly interface with more sounds and more visuals. In case you're not familiar with Artifacts: whenever you create an app within Claude (which is a ChatGPT competitor), rather than just giving you the code, an artifact turns that code into an actually usable application right in your browser. So you can see it writes the code here, and what would open up over here is the artifact, where I can actually press the buttons and see how this works, rather than just getting the raw code. And rather than me describing what this is, I'll just show you. You can pick one of these presets or a custom prompt, and it's going to start building this interface step by step. And the big claim they make here is that none of this uses predefined code; in a very free-form manner, it's going to build your application step by step. And as it builds it, it also shows it. So this is really more like a stream of consciousness from the AI to your little Claude Imagine desktop. And I suppose you could use this really well for some brainstorming workflows, which I always love. Also, them claiming that this is more free-form and always uses different code kind of implies that Artifacts in Claude uses certain boilerplates and certain presets, which I wasn't fully aware of. But yeah, there you go: the choose-your-own-adventure game. I'm not going to read all of this, but I see you quit your job at BigCorp. You have some savings. You have an idea. And then there are three notifications: a former college roommate offers to invest,
YC wants you, and there's a potential co-founder. What's your first move? I say we just bootstrap and monetize here. And then you can see that it notices what I clicked and develops the rest of the app afterwards. So it updates the runway, etc. And there you go: we have five paying customers with $2.4K MRR, and it gives me new choices. And it's building all of this as I'm moving through it, while showing me the context: 29K out of 100K tokens used. And now we keep moving. So, very similar to Artifacts, but this is not shareable externally, and it's way more free-form. You can create any kind of interface for yourself and alter it as you go, whereas Artifacts sort of creates something and then you can extend that. But really, this has more of a consistent memory, which is also using some of the new releases, which include a memory function in the API. There's a whole API and SDK that you should look into if you're a developer. And yeah, there you go. That's everything new out of Anthropic this week. Some really interesting and innovative stuff. Again, the big highlight for me being the browser extension that actually works and can be relied upon. Okay, so
19:07

Lovable Cloud

for the next story, we have something interesting. It's Lovable, which you might already know as one of the best no-code builders to get started with, and they made a massive update. The update essentially simplifies the process of using it for everyone. It's called Lovable Cloud, and it basically allows you to do certain things like integrating a database without actually creating a database and setting up a separate account. So, up until now, the way most of these no-code builders worked is you build an app inside of them, and that links to an external database that stores things like user logins, etc. Now they merged this into one thing. So we're going to give this a shot, and the wonderful thing here is it actually works on a free plan, and this also includes AI integrations. So there's no need to link your OpenAI key; you can kind of just do things right in here. So I'm just going to do something simple, like "build a Pomodoro timer app with authentication". And as you can see, if I launch this, there's no button here to connect to Supabase or something, because all of that is built in. Now, seriously, this is quite a big step. We've partnered with most of the companies that do these sorts of things on this channel, as you might know, and this was a big tripwire for a lot of people coming into it: what is a SQL database? Why do I have to create this separate account? I just want to build an app. And there it is. After a bit of time, we have our application. And you can see it has authentication right away. And as I create an account and log in, right away I have this working Pomodoro timer in here. And I did not need to connect a Supabase account to store that user information. It just works. There are built-in analytics too now. And when I go to this Cloud tab, I can see the database it created. We have profiles, which are the users, and we have the different Pomodoro timer sessions and timer settings.
You can see the user that I just created right here in this database view. And it's all in here. And that also includes all the AI connections, so no need to fetch API keys anymore. It's all sort of just included in your subscription here, which is really cool. And it makes it so much more beginner-friendly. Next up,
20:54

Pro Headshots with Nano Banana

this will be quick, but it's just a technique for creating corporate-looking headshots with Nano Banana from any images you might have of yourself or others. And Google shared this themselves. It's this prompt right here, which I'll put in the description of this video. You just put this into Gemini, upload an image of yourself (a quick screenshot of me), paste it in here, and let's see what we get. That's kind of good. There you go. Prompt in the description, and have fun. Next up, we
21:17

New AI Features in Photoshop

have news in Photoshop, which I personally always love to see, cuz as you might know, it was the very first software I learned back in high school, and that was sort of the beginning of my software journey, which led me all the way to here 15 years later. What they did now is they integrated some of the most popular AI image-editing models into Photoshop. So they're not just sticking to their own models, but really taking some of the state-of-the-art models like Nano Banana or Flux Kontext and integrating them here. So, I logged in with my Adobe subscription. You can see right away that this is the feature we're talking about: you can use Nano Banana and Flux Kontext Pro in here, and even use them with features like Harmonize. So, let's just give this a quick shot. Let me just take this random thumbnail I had lying around. Sure. Scale it up a little. Duplicate it. Oh god, they changed some of the stuff around. And then what I'm going to do is replace this phone with something else. So, here I should be able to generate a fill. And then here, ah, here I can switch the model. So, I'm going to go to Nano Banana and I'm going to say "change the phone into a microphone". And as you might know, if you haven't been living under a rock over the past few months, Nano Banana is just really good at these editing operations. You know what? Let's give it another try, cuz this is okay, but not great. I guess it just didn't understand that. Yeah. Okay, this is what I was actually looking for. So, that's really good. Now, we can go ahead and switch the model and see what we get in comparison with the Adobe model. Let's give it the same edit. So, as you can see, this is not it. It's not just that the image quality of the mic is worse; it also was way too creative here. I sort of like the idea of this lady singing at me while being at sea, but it didn't follow the prompt as well here either.
So, we're going to do the same thing one more time, but with Flux Kontext Pro, and then you're going to have a nice little comparison of all three models in action here. Flux Kontext Pro. Yeah, this just didn't work. Nano Banana nailed this on the second try, and that's why people consider it the best. And now, with the power of this inside of Photoshop, it's a whole new application. Honestly, a lot of people say that these models sort of replace Photoshop, but I think knowing Photoshop and knowing these models is the real winning combination, cuz you can do everything you could have done in Photoshop, but also use the power of AI, and just combine them. And in this week's quick
23:19

Hollywood Backlash Over AI Actress

hits, there are really two stories I want to highlight. The first one is a follow-up to last week's story, where the first artist was getting signed for millions of dollars for nothing but AI music with really good lyrics. And this week's follow-up story is this lady called Tilly Norwood, who is the first AI actress. And it's sort of interesting to see how much backlash this story is getting, versus the singer not facing the same sort of public scrutiny. Apparently, she's in negotiations with multiple Hollywood talent firms. And this is just a thing that's happening now: AI celebrities going mainstream. The other quick hit
23:50

Rise of 'AI Workslop'

that I want to highlight here, and I think this is a really good one to end on, is the so-called rise of AI-generated workslop. I watched this Atriarch video on it, which puts it really well, and there's this really good quote on what that actually means. I absolutely love this, so listen closely. This would be the definition: "AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task." So it's something that looks good but isn't able to move a project forward. I see this all the time, and this is the biggest risk with AI, and I think it happens if you don't provide sufficient context. If you write up a report and you use AI to edit it, great. It's so helpful. But if you really think that you're going to come in with a blank canvas and, with a short prompt, create something that a human would work on for 3 days, all you're doing is pushing the work that you're supposed to be doing onto another person. And whether you're a manager, an employee, a solopreneur, or a CEO, this is something to become aware of these days. People are using these tools, but that doesn't mean they're doing good work. It allows the lazy to masquerade as productive members of the team, and it gives them the ability to push off work to others. And we all collectively should become less tolerant of this and recognize some of these AI generations for what they are, which is different tools to enhance your work, not to replace it. We're not quite there yet, so use everything you learn on this show wisely. All right, and that's pretty much everything we have for this week. I hope you found some of this interesting. I would be super curious to hear in the comments if you experience some of this AI workslop in your own workplace or company, or frankly, if you're guilty of this too. I know it's really easy to fall into, but some people just seem to be doing this all the time, which, yeah, is a problem.
Either way, my name is Igor and I hope you have a wonderful
