This week, I'll show you why Claude just became the most useful AI for most people when it comes to work thanks to two huge upgrades. Plus I'll share with you some prompts that make editing in Nano Banana much easier and more consistent, show you how to use the new features in NotebookLM and Gemini Canvas, and break down the big news stories of the week like OpenAI's paper on hallucinations. Enjoy!
Free AI Resources:
🔑 Free ChatGPT Prompt Templates: https://bit.ly/newsletter-aia
🌟 Tailored AI Prompts & Workflows: https://bit.ly/find-your-resource
Go Deeper with AI:
🎓 Join the AI Advantage Community: https://bit.ly/community-aia
🛒 Shop Work-Focused Presets: https://bit.ly/AIAshop
Links:
https://www.anthropic.com/news/create-files
https://x.com/GeminiApp/status/1965475292526551105
https://x.com/GoogleAI/status/1965528770292367677
https://x.com/NotebookLM/status/1965106170152013888
https://x.com/GoogleAI/status/1965528762486772204
https://x.com/claudeai/status/1965129505913356794
https://www.apple.com/pt/shop/buy-airpods/airpods-pro-3
https://x.com/alterego_io/status/1965113585299849535
https://x.com/ideogram_ai/status/1963648390530830387
https://x.com/midjourney/status/1963753534626902316
https://x.com/claudeai/status/1965429261617266997
https://x.com/tripoai/status/1965078006894133607
https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf
https://x.com/runwayml/status/1965402058896662762
Chapters:
00:00 What's New?
00:34 Claude Files Upgrade
05:03 Gemini Canvas Select and Ask
06:39 NotebookLM Updates
08:04 Nano Banana Use Cases
09:46 Standard vs. Advanced Voice Mode
11:01 Why Do LLMs Hallucinate?
11:42 AirPods AI
12:11 Claude Mobile Integrations
12:37 Alterego AI Telepathy
13:15 3D Model Generator
13:31 Midjourney & Ideogram Styles
13:58 Edit Reality with Runway
Connect with Me:
💼 AI Advantage on LinkedIn: https://bit.ly/AIAonLinkedIn
🧑‍💻 Igor Pogany on LinkedIn: https://bit.ly/IgorLinkedIn
🐦Twitter/X: https://bit.ly/AIAonTwitter
📸 Instagram: https://bit.ly/AIAinsta
#aiadvantage #ai
Ladies and gentlemen, so it's been another week in AI, and I think the lead story of this week's episode of News You Can Use is actually something that is useful to almost anybody watching this video. I mean, if you've ever worked with an Excel sheet or created a PowerPoint, you will want to hear about the new Claude updates. But there are also NotebookLM updates, Nano Banana use cases, and a bunch of quick stories that are super out there. We'll explore all that and more in this week's episode of AI News You Can Use, the show that rounds up all of this week's AI releases, filters for the ones that matter, and where I have the pleasure of presenting them back to you. So, with that being said, let's dive into the very first story of this week's show.
Okay, so for the main story this week, we have something really exciting, because as you know, large language models are extremely good at transforming one type of thing into another. But the one thing they haven't been able to consistently produce is some of the files that are most common in a professional work environment: Excel sheets, PowerPoint presentations, or Word documents. Now Claude, with this new update that is live for everybody now, went after this output specifically. So when I log into my Claude account, you will see that it introduced these new file types. Before we use this, just go into your settings, and then under Features, make sure you have "Upgraded file creation and analysis" enabled for this to work. Let's see what we can do here. So now, in theory, I should be able to create Excel sheets or PowerPoint presentations that go beyond anything that has been possible up until now with these tools. And let me tell you, from everything that I've heard from both my team and the internet, the execution on this stands head and shoulders above everything that we have seen. But this show wouldn't be what it is if we didn't actually put the claims of the internet to the test. So what I'm going to do in this case is open up a ChatGPT window and compare it with a Claude generation. I'm just going to grab some data from the past 28 days of the AI Advantage YouTube channel and upload the CSV into both Claude and ChatGPT. And I'm going to take a prompt that is very similar to what Anthropic did in their launch video. So I just tell it that I attached YouTube analytics, give it a little bit of context on my channel, and then ask for the top three to four opportunities with clear visuals, presented in an Excel file. Amazing. Just going to run the same thing in ChatGPT with the auto model router. A few moments later. — Whoa, whoa.
Okay, so first impression is that I've seen many of these files, but nothing comes even close to what I'm seeing right here in front of me. Okay, but let's give this a comprehensive view. So there are five tabs within this Excel file. And you know what? I'll actually download this to compare it, as our Lord intended, within Excel itself. Let's have a look inside Excel. So look at that. One, two, three, four, five. Damn. So this is the raw data that we gave it, right? Then it has a recommended content calendar for the next 30 days. "How to use Claude for beginners"... oh my god, it wants me to make all of these short tutorials. Okay, interesting. "Free AI tools every..." This is actually really good. And I want to highlight the formatting here. Look at how beautiful these are. I can tell you, if I were creating a sheet, it wouldn't look this good. Not saying it's perfect, just saying this looks really good. Okay, so it analyzed the different components. Honestly, this is just a fantastic way of looking at all the data. Way better than this right here. Wouldn't you agree? And then here's the executive summary. Wow. I guess I'm going to be asking for all of my results as an Excel file now. Okay, not bad. But now let's make the comparison, right? Because if we do the same thing within ChatGPT, it gave us an Excel file. And when we open that, we get this right here. Okay. So this is just the raw data right here. Top videos. Okay, it kind of just sorts it. Opportunities. Oh, there's one. Okay. An empty chart. So, as you can see, this is not even close. I figured it wouldn't be, but I didn't think the gap would be this large. Now, I'm sure OpenAI is going to catch up, but as of now: Claude. Wow, not bad. Okay, I want to try one more thing, because this is supposed to be really good with PowerPoints, too. So, I'm just going to say "turn it into a PowerPoint presentation." Enter. Wait, hold up. This even had a chart. I didn't see that. Yeah, right here. Not bad.
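If you're curious what a multi-tab report like the one Claude produced looks like under the hood, here's a minimal sketch of building one yourself with pandas. This is purely illustrative: the column names ("Title", "Views", "CTR") and the synthetic numbers are assumptions, not the real YouTube Analytics export schema, and it obviously won't match Claude's formatting.

```python
# Illustrative sketch: a multi-tab Excel report similar in shape to the
# one Claude generated. Column names and data are made-up placeholders.
import pandas as pd

# Synthetic stand-in for the exported YouTube Analytics CSV
data = pd.DataFrame({
    "Title": ["Claude Tutorial", "Nano Banana Guide", "NotebookLM Tips"],
    "Views": [120_000, 95_000, 40_000],
    "CTR": [0.061, 0.054, 0.032],
})

with pd.ExcelWriter("channel_report.xlsx") as writer:
    # Tab 1: the raw data, like the first sheet in Claude's file
    data.to_excel(writer, sheet_name="Raw Data", index=False)
    # Tab 2: top videos sorted by views
    data.sort_values("Views", ascending=False).to_excel(
        writer, sheet_name="Top Videos", index=False)
    # Tab 3: a simple "opportunities" view, here videos with below-average CTR
    data[data["CTR"] < data["CTR"].mean()].to_excel(
        writer, sheet_name="Opportunities", index=False)
```

The point of the comparison in the video is exactly that Claude now does this kind of structuring, plus charts and formatting, without you writing any of it.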
This is impressive. Okay, so this took about, what, five minutes? Whoa, hold up. This is actually good. Okay, so: YouTube channel analysis. My god, it's like a consulting pitch deck. And it's really clean. Are you serious? — And it's a completely different design than the other one. So — yeah. Yeah. With emojis, without emojis, strategic emoji usage. This is crazy good. I guess this is the only obvious mistake here, this placement. But come on, man. What can I say? Really impressive. And as you know, you could direct this, give it your brand's colors. This is serious competition for some of the specialized apps that do presentations, like Gamma, etc. Anyway, my conclusion here is that this is incredibly good. And if you ever tried to make a sheet or a presentation or a Word document before and concluded that LLMs are just not good at that, well, it's time to reconsider and try this for yourself, because this was my very first attempt, and the results that you get in a few minutes are very, very clean. Wow. If you watch the show, you know I don't say this very often, but I'll be using this feature all the time. Okay, let's move on. Okay
so for the next story, we're going to be looking at an upgrade that, when I saw it, I just figured has been long overdue in all vibe coding applications. It's an update to Gemini Canvas where you can create an application and then edit it by visually clicking into it, instead of doing the whole screenshot-and-describe routine. Let's just give it a shot. Okay, I'll just prompt it to create a simple Pomodoro timer app, and then I can show you how this works. Okay, so there it is, as you would expect. And now the innovation here is that they added this visual editing. So all you need to do is go down here and use this new feature called Select and Ask. So if I click this, I can just select this button: "make this bold, and use a Japanese-style gradient." I don't know, it's just what came to mind. Okay, so you know how we do it here. No editing. So it didn't do this button, it did the title. Let's give it one more try. A similar prompt. I'll just say that it should complement the color scheme. Let's see if it gets it right this time. Maybe it was because I just selected the middle and not the whole element. That's really giving it a lot of credit, though. I hope it gets it right on this second try. Okay, so that actually worked perfectly. This is sort of what I had in mind. Although some of the other functionality doesn't work yet. We could fix that, though. I just wanted to show you this feature because I think this is the direction these things are heading in. I mean, let's be real: vibe coding by taking screenshots and then describing which part of the screenshot you mean and how you want it edited is not optimal. And tools like this just make it simpler. If this worked flawlessly, it would make me question why not all vibe coding apps have this already. And I'm sure over time we'll get this across the other applications too. Nice update to the editing canvas. And as you saw, it might not always work flawlessly. Next
up, we have NotebookLM shipping even more updates. If you follow the show, then you might have seen that across the past few weeks we got updates, and we also posted a great tutorial on the channel introducing you to NotebookLM. Now they have a new Reports feature with flashcards and quizzes rolling out slowly. So, let's just give this a brief look, as we usually do on this channel. I'll just open up this Claude Code workbook with a bunch of YouTube videos. And as you can see, if I go to Reports, you'll get a bunch of new options here. You can do a custom report or one of these presets. And what I really like is they added an AI suggestions feature. So down here it suggests what type of report would make sense. In the case of this Claude Code workbook, maybe an onboarding guide or a process explainer. That actually makes a lot of sense. I'm just going to generate one of those and see what we get. Okay, so it took about 2 minutes and now we have our report right here. So this was the process explainer, right? And essentially what it did here is a beginner guide, a step-by-step guide. So it starts with step one: you need Node.js, install Claude Code, etc. Okay, that looks really solid. Then crafting the prompt. Yeah, I mean, this looks really solid and includes some of the tips. Self-hosting, Firebase for the back end here. So, this is fantastic. It basically produces a guide from all of these sources and gives some advanced techniques at the end. Great stuff. I really like these reports, especially these suggested formats that it comes up with on the fly. One of the most powerful apps in the space becoming even more capable. So for the next story
we're going to be looking at various use cases, as recommended by Google, for one of the most popular apps across the entire AI space recently: the image editing model Nano Banana, also known as Gemini 2.5 Flash Image. I don't even know. It's Nano Banana. Okay. — And let's be real, long-term followers of this channel might notice that the main example they did here, this cat with a hat, is something that yours truly right here uses every single time. Okay, I didn't come up with it, right? Originally, I think, this is a Borat joke. — Is this a cat in a hat? — But Google is doing cats with hats too at this point. This is an industry-standard way to show off image editing capabilities. Point is, they show a few use cases, and one of them is adding and removing elements, with this cat-with-a-hat example. Here's the specific prompt for that. The link to this thread will be in the description; you can just get it from there. Next up, they show one that we featured in this week's newsletter, which is inpainting, aka making selective edits to only a part of the image, and how to do that with this specific prompt right here. And then a few more. You can mix multiple images, like a dress and a picture of a woman, to get the woman in the dress. And then lastly, adding this logo to her t-shirt. So, if you're looking for inspiration or prompts for Nano Banana, there you go: four really powerful ones that you can take, customize, and use for yourself. And if you haven't given this a shot yet, or you were living in a cave over the past few weeks, then this is your reminder to go and play with Nano Banana, because it's unbelievable. It really raised the bar on what is expected from AI image generation models, especially when it comes to editing existing images. As you can see, it maintains the actual face of the person instead of morphing it like ChatGPT would before. Okay, let's see
what's next. In other news, OpenAI is trying to deprecate the standard voice mode in ChatGPT and replace it with the advanced voice mode that you might know; that's the more interactive voice with the blob that moves around. And even people within our community are pushing back on this and saying they like the standard voice mode. I wasn't even aware that this was a thing. And so many people share this opinion that OpenAI implemented a new feature: if you go to Customize ChatGPT and then to Advanced down here, you can disable advanced voice mode for yourself. So, if I do that and then use voice mode, you will see that this is not advanced mode. This one is just the old voice mode. There's no color. — Got it. Sounds like you're pointing out that the current voice mode experience, probably in the ChatGPT mobile app or similar, isn't using the new advanced voice mode. — And then apparently people prefer this to this experience. Hey, so how's this different? — You got it. I'll keep it nice and straightforward. Basically, I'm just going to focus on giving you the kind of clear, no-nonsense advice. — Okay. Can I interrupt you? — Absolutely. To me, it feels like the old voice mode is more static and the new one is a bit more flowy, but the quality of the old one seems a bit higher because it's not trying to have a fluid conversation. It kind of just talks, and then you can pause it and let it talk, if you prefer that. Now, there's an option to switch
around. Okay, let's see what's next. So, now we get into the quick-hit segment, where we talk about various stories that are worth your attention but maybe don't need multiple minutes each. Starting with a brand new paper on why language models hallucinate. You can certainly get into the full paper. What I did is run it through this summarizer prompt, and I like the second summary the most. It essentially explains that language models are trained to sound very confident, because their training, like most exams, rewards guessing over admitting uncertainty. And the authors of the paper also suggest that you could change how you score these "exams," so that during training the model also gets credit for sometimes saying "I don't know." As of now, just be aware of this and know that the models will confidently assert a lot of information, because that's the way they're trained.
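The paper's scoring argument can be sketched with toy numbers. To be clear, the `expected_score` helper and the numbers below are made up for illustration; they're not from the paper, which makes the argument formally.

```python
# Toy sketch of the hallucination paper's scoring argument.
# expected_score is a hypothetical helper; numbers are made up.
def expected_score(p_correct: float, guess: bool, idk_credit: float = 0.0) -> float:
    """Expected points on one question the model is unsure about."""
    if guess:
        # Classic 0/1 exam scoring: a right answer scores 1, a wrong one 0,
        # so the expectation is just the probability of being correct
        return p_correct
    # Abstaining ("I don't know") scores idk_credit (0 on classic exams)
    return idk_credit

p = 0.25  # model is only 25% confident in its answer

# Under 0/1 scoring, guessing beats abstaining even at low confidence,
# so training rewards confident guessing
print(expected_score(p, guess=True), ">", expected_score(p, guess=False))

# With partial credit for "I don't know", abstaining wins at low confidence
print(expected_score(p, guess=False, idk_credit=0.5), ">", expected_score(p, guess=True))
```

That's the whole intuition: as long as a wrong guess costs nothing more than an honest "I don't know," the optimal policy is to always guess.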
Next up, we have something that you can't use immediately, but very soon the new AirPods Pro 3 will be available. And they announced an AI feature, live translation, which I'm extremely interested in. I'll be getting these, I'll be testing that out, and I'll report back to you on this channel, maybe even in this show. Essentially, they're going to use the active noise cancellation to suppress what you hear and play a live translation that is processed on your iPhone. All of the current AI translation apps are kind of bad, so I can't wait to see what Apple does here. Also, in other news, Claude
rolled out this feature called permissions that allows you to connect it to various services: you can give it access to your location, your calendar, or reminders. This is something we already find in ChatGPT in other ways. For example, in ChatGPT, by connecting to a map provider, it seems to get your location; that's my understanding, at least. Whereas here, you can set it up like so and ask for local restaurant recommendations, and then it has your location because you gave it that
permission. The next story is a very interesting product category. If you haven't seen this, it's called Alterego. From the launch video, it looks like a device that reads your mind, but when you look beyond the surface, you realize that you actually have to vocalize the things it reads out: you're making sounds and moving your tongue, it's just so quiet that it's not audible to the outside world, and the device picks it up. But still, this could be an interesting AI interface in situations where you're not allowed to speak. You could kind of just — Sorry, I can't hear you. — I don't know. Is that even audible? — No, I literally can't hear you. — Apparently, this device picks it up. — Oh, I can hear you now. — And it should be coming soon. Interesting.
For the next story, it's just a follow-up to a model that we've seen before. It's called Tripo, and now they've released 3.0. It's from the folks behind Stable Diffusion. It's a text- or image-to-3D-model generator. So, if you work in that space or play around with Blender, make sure to check it out. It looks really good. That's it. Then we also
have some updates in the image generation arena. Midjourney introduced a new feature they call Style Explorer, where you can browse various styles on this page, and if you like one of them, there's a button that says "try styles" and you can reapply it to your own images. Similarly, but different: Ideogram now also introduced styles. This app is very popular for graphic-design-related tasks. So if you work in that space and you want to maintain consistent styles, or you manage a few styles for a brand, you can now do that really
easily. And then the last quick hit for this week is actually a use case that I really enjoyed. It's this video from Runway showing how to use a feature that we introduced on the show a few weeks ago, where you can edit videos to create these so-called oners: a single continuous shot with a bunch of visual effects in the middle. And here's the tutorial: just shoot your scene and chop it up into 5-second exports, and then for each clip you can apply new effects. So, for example, now that we're done with the content of this video, we could just splice this up and have this happen. I mean, editing team, I hope you came up with something fun for that. But other than that, I hope you found something useful, or at the very least inspiring. And with that being said, my name is Igor and I hope you have a wonderful