ChatGPT Now Remembers EVERYTHING About You & More AI Use Cases
26:52


The AI Advantage · 11.04.2025 · 30,371 views · 988 likes · updated 18.02.2026
Video description
Get the AI tools you need to streamline your workflows today with the AI Productivity Stack from HubSpot! It's completely free, grab your copy today 👉 https://clickhubspot.com/24sq

ChatGPT got a huge upgrade this week that lets it remember all of your conversations. Useful or creepy or both? Plus NotebookLM got a big upgrade, Midjourney v7 finally came out, and way more. You don't want to miss this one!

AI Advantage Community Public Challenge - https://community.myaiadvantage.com/c/public-challenge/
My personal Instagram for travel updates - https://www.instagram.com/igorpogany/

Links:
https://x.com/OpenAI/status/1910378768172212636
https://x.com/midjourney/status/1908012961840672947
https://x.com/genspark_ai/status/1907460298916921372
https://vapi.ai/
https://blog.google/technology/google-labs/notebooklm-discover-sources/
https://x.com/thestephenchau/status/1907452186109596018
https://x.com/GoogleDeepMind/status/1909270072444526809
https://x.com/higgsfield_ai/status/1906748655702265901

Chapters:
0:00 What’s New?
0:58 ChatGPT Memory Upgrade
5:35 HubSpot
7:01 Midjourney V.7
9:43 Genspark
11:29 Vapi AI
14:10 NotebookLM Update
15:40 Cove AI
18:33 AIA Public Challenge
19:31 Google Cloud Next
20:25 Project Astra
22:19 Higgsfield
25:05 Copilot Update

#ai #chatgptmemory

🔑 Get My Free ChatGPT Templates: https://myaiadvantage.com/newsletter
🌟 Receive Tailored AI Prompts + Workflows: https://v82nacfupwr.typeform.com/to/cINgYlm0
👑 Explore Curated AI Tool Rankings: https://community.myaiadvantage.com/c/ai-app-ranking/
💼 LinkedIn: https://www.linkedin.com/company/the-ai-advantage
🐦 Twitter: https://x.com/IgorPogany
📸 Instagram: https://www.instagram.com/ai.advantage/

Premium Options:
🎓 Join the AI Advantage Courses + Community: https://myaiadvantage.com/community
🛒 Discover Work Focused Presets in the Shop: https://shop.myaiadvantage.com/

Contents (13 segments)

  1. 0:00 What’s New? (252 words)
  2. 0:58 ChatGPT Memory Upgrade (1088 words)
  3. 5:35 HubSpot (337 words)
  4. 7:01 Midjourney V.7 (638 words)
  5. 9:43 Genspark (459 words)
  6. 11:29 Vapi AI (638 words)
  7. 14:10 NotebookLM Update (372 words)
  8. 15:40 Cove AI (749 words)
  9. 18:33 AIA Public Challenge (258 words)
  10. 19:31 Google Cloud Next (206 words)
  11. 20:25 Project Astra (458 words)
  12. 22:19 Higgsfield (689 words)
  13. 25:05 Copilot Update (410 words)
0:00

What’s New?

This week in AI releases has really been packed with a ton of variety. A lot of things you'll want to try out right away, and one mega-significant update to the main app most people are using, which is ChatGPT. Plus, there's a big new feature for NotebookLM, apps that use novel interfaces to work with AI, a video generator that handles human anatomy and camera movement super well, upgrades to Copilot, Midjourney V7, and more. This week's episode of AI News You Can Use is going to be a really good one. For anybody who's new here, we look at all the AI releases from this week that you can put to work today. Or, I guess, the ones that are supposed to work today; that's foreshadowing one of the stories of this week. I'm coming at you from Tokyo, Japan. As you can see, I'm on the road. What a gorgeous country. I know that's not the topic of this video, but it's my first time out here; I kind of extended a US work trip into a personal trip to Japan. And as somebody who appreciates dedication to detail and craftsmanship, this country does so many things right, and the people are super kind. Anyway, if you care to see some stories from the trip, I'm sharing those on my personal Instagram. Oh, one more thing: a big thank you to HubSpot for sponsoring this video, but more on that later on. But
0:58

ChatGPT Memory Upgrade

now, let's get into the first release of this week, which is ChatGPT memories. They completely changed how they work, and I really think everybody needs to be aware of this. Memory in ChatGPT can now reference all of your past chats to provide more personalized responses. Now, memories are not a new feature. ChatGPT was the first product to ship them, and you'll find them in most other apps by now. A quick recap: if you share something in a chat that feels personal and relevant to you, ChatGPT will pick up that information and save it inside these so-called memories. By default, this is on, and here in the settings under Personalization, you can also manage them. As you can see, there's one right here. Now, the big change is this new option down here, "Reference chat history." From here on out, it's not just referencing the memories it saved about you, but also the entire chat history you have here on the left. All of it. I briefly explored how far back this dates, and it had context from chats from about 10 months ago. So you can think of this as ChatGPT being aware of everything you've ever chatted about. If you follow this channel, you might know that I think the memory feature is a fantastic beginner feature. But intermediate to advanced users have to be careful. If you're crafting specific prompts to do a certain task reliably, these memories are extra context in that task. Okay? In many cases, that can be fantastic, especially if you're using simple prompts: the extra context spices things up, makes responses more personalized, and makes ChatGPT feel so much better. That's why they have it on by default, and for the vast majority of users, I think that's a good idea. But people like you, who are watching this channel, trying to stay up to date on all of this, and who have most likely tried more use cases and add more context to their prompts?
Those are the people who need to be wary of memories, because they can mix context into your prompting and your use cases that you don't want there. For example, if you look at my chats from the previous 30 days, some of them are about my YouTube channel, others are about speech, text, and voice assistants. Then there are jet lag recovery tips, Studio Ghibli style generation, and some research on t-shirt sizes. Kept simple, that slots into two categories: one, my work context, relating to education, YouTube, and speaking; and two, my personal context, relating to jet lag and t-shirts. Now, with memories on, it pulls from both, and they're different. Certain information about me is fairly standardized, like my name being Igor. I want the memories to have that, and that's fine. But other things, like me considering t-shirt sizes and giving feedback on that? I don't want it to pull memories from that, and sometimes it just does. I have an issue with that, because when I run a very specific use case like strategic planning for my business, I don't want context from the t-shirt sizing conversation infused into that strategic planning session. And with this new feature, it's going to pull even more context from everything, which amplifies this unwanted context pulling if you use ChatGPT a lot, across a lot of different use cases. That's why I'm making kind of a big point out of it here. Everybody who's an intermediate to advanced user really needs to be aware that this is in there and that it's on by default. And I really think that for work-related tasks, the best workflow is managing all of your context manually, using something like the custom instructions. It's now called "Customize ChatGPT" here, where you really get to dial in everything.
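To make that manual-context workflow concrete, here is a minimal sketch of what "owning your context" looks like when building a request payload by hand. This is my illustration, not anything shown in the video: the instruction text, helper name, and model string are all placeholders, and the resulting dict is merely shaped like an OpenAI chat-completions request.

```python
from typing import Optional

# Placeholder custom instructions -- the kind of standardized facts you'd
# deliberately keep (name, role), with nothing smuggled in from old chats.
CUSTOM_INSTRUCTIONS = (
    "My name is Igor. I run an AI education YouTube channel. "
    "Answer in my work context unless I say otherwise."
)

def build_request(user_prompt: str,
                  extra_context: Optional[list] = None) -> dict:
    """Assemble a chat request whose context is fully explicit.
    Only the context you chose for this task is included, so unrelated
    past conversations (t-shirt sizes, jet lag tips) can't leak in."""
    system = CUSTOM_INSTRUCTIONS
    if extra_context:
        system += "\n\nRelevant context for this task:\n" + "\n".join(extra_context)
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_request(
    "Draft a strategic plan for my business for Q3.",
    extra_context=["Channel goal: grow the education side of the business"],
)
```

The payload could then be sent with any chat API client; the point is simply that every line of context is one you curated, which is the opposite of the auto-memory behavior described above.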
Now, a good workflow would be getting started with memories by just leaving them on, and as ChatGPT collects different pieces of information about you, taking those entries from the saved memories and putting them into your "Customize ChatGPT" field, where you have full manual control. There might also be a lot of users for whom just leaving this on is a good idea. For anybody with, I don't know, less than 50 hours in ChatGPT, probably just leave it on and you'll be fine. But be aware that it's there now, and don't be surprised when it references conversations from four months ago. All of a sudden it does that, and that's a good thing; it's a step forward in the usage of these products for the vast majority of people. A final note: this feature is rolling out to Pro and Plus users today. As you can see, I have it already, and Team, Enterprise, and Edu users are getting it in a few weeks, with the EU being the exception, not getting this at all. But I wanted to make this segment to point out that this is not just a positive feature for everybody. If you're doing specific things, you've got to be aware of it, because it might just be smuggling context from past conversations into your new conversation, warping the outputs in ways you might not expect. So, if you run the prompt they suggest, "Describe me based on all our chats, make it catchy," it uses what I personally have set up in "Customize ChatGPT." Now, if I go to the settings, turn these custom instructions off, turn memories and reference chat history on, and start a new chat, it will not know the basics I set up manually. It will reference all my past chats to describe me, and that's a good description too. I would just say the first one is way more accurate to what I actually do in my work, because I set up the custom instructions for exactly that, since I use ChatGPT mainly for my work.
This is a real difference and you just need to be aware of this. All right, so there's one
5:35

HubSpot

question that I get asked constantly: Igor, which AI tool should I be using right now? And look, I get it. There are literally thousands of options out there, and picking the ones that will actually make you productive in your situation can feel impossible. For most people with busy lives and problems to solve, it actually is impossible. That's why I'm excited to announce that we're partnering with HubSpot on this video, because they created a fantastic resource that you can get for free, called the AI Productivity Stack. Basically, it's a curated guide to the 50 best tools in the space right now for boosting your own productivity. And it's absolutely free. Many of these tools are ones we actually rely on internally at the AI Advantage to put together videos like the one you're watching right now: tools that optimize decision-making, streamline your processes, or even automate parts of your workflow. The AI Productivity Stack breaks down everything you need to know clearly and concisely, which I, and most viewers of this channel, really appreciate. Every time I open a resource and see hundreds of pages full of text, it kind of defeats the purpose of just giving you the information, and this guide avoids that. So if you're ready to up your productivity with AI today, click the link at the top of the description to access your free copy of HubSpot's AI Productivity Stack. And to round this out, a big thank you to HubSpot. They showed us a bunch of their resources, and I have to say they do such a good job of curating them. Genuinely, when I saw that, I was excited to build it into this video, and I always love partnerships that actually add to the content we make. So go get your free guide, and now let's get back into the next piece of AI news that you can use.
7:01

Midjourney V.7

Okay, next up we have the release of Midjourney V7, which came out at the end of last week. It didn't make the cutoff for last week's video, so we'll feature it here. This was a very anticipated release. Everybody was looking forward to Midjourney making the next big move in the AI image generation space. They were probably the first tool to really go viral, where people were stunned by the quality of the outputs. And now they came out with their newest version, V7. I don't think there's any other way to put it than to say this was a disappointment to most people. The hype was very real, and maybe the expectations were a little overblown, because most people expected them to push the image generation space further than it is right now. But this happening around two weeks after ChatGPT released its image generation feature, which went mega viral, probably the most significant mainstream AI moment this year, meant that Midjourney adding a few features, which I'll talk about briefly, just didn't cut it for most people. And that makes a lot of sense to me, because at the core of Midjourney's value proposition has always been this idea of generating stunning visuals, the most pleasing and aesthetic visuals you can get from any image generator. It still does that. It's incredible. Did it get better at that? Hmm, I don't know. I personally don't really think so. They added great new features like draft mode, where you can turn on your mic or type and it instantly generates what you're describing in the form of lower-quality images. That works super well. But the quality of the base model didn't get much better, if it got better at all. It still has many problems; thumbs still don't work properly, as you can see in this image. The biggest thing, though, is that this core value prop of being aesthetically pleasing has kind of plateaued since V5. In my opinion, it was already excellent, already a 10 out of 10 in terms of aesthetics.
Now, how are you going to top that? You can't. You can add new features, but that's not the reason 98% of people use Midjourney. To get a more concrete feeling and form your own opinion, we ran a bunch of test images. If I review these cinematic stills, for example, they're still stunning, and it's still the most aesthetic image generation model out there. I think that stands, but it doesn't change the game for them, and most people will probably be fine with the ChatGPT output or one of the others. You can see some comparisons to some of the other best image generators right now. And if you just want to work with an image, the text interface of ChatGPT and its editing capabilities are simply superior. So yeah, there you go. It's still stunning, it's still amazing, but it's not really the upgrade people were hoping for. I just think that for most users, ChatGPT image generation does almost everything they can think of. Still, it's a solid update for all Midjourney users; draft mode in particular is a lot of fun to work with. It generates almost at the speed of your own thoughts as you talk to it, and I personally think that's great. I just think most people will prefer a tool that is almost as good, but free and more intuitive, over V7. So there you go, that's my take. You can make up your own mind in terms of the quality. And let's move on to the next one. Next up,
9:43

Genspark

there's a new agentic tool. And I know, I know, there are a million of these coming out, but from our brief testing, and I do have to point out it is very brief, we just stumbled upon this and tried a few use cases, this one, called Genspark Super Agent, actually works better than both Operator and Manus. It's another Chinese company. My favorite part, and the reason I'm featuring it, and also the reason we featured it in the newsletter as app of the week, is that there are free credits for you to try this right now. That will probably not last forever, but you can log in with a Google account and just give it a shot and see how it works for you, which is not the case with any of the competition. Manus is now rolling out to the wider public, but its paid plans start at $39 a month, and ChatGPT's Operator is still behind the $200 paywall of the Pro plan. From what we've seen so far, on the few use cases we tried, Genspark Super Agent actually works better than both of those. Now, I wish I had an extensive testing set to present to you. Maybe if we use it more and consider it worth it, we'll make a dedicated video. But this might just be the most capable agentic consumer product out there right now, and you can try it for free. That's really impressive. So, for example, if I wanted to try a deep research task like this, they have all these different presets here. You could just go to the site and log in with your Google account. If I give it something like "find the best camera store in Japan," you'll see it work through this step by step. If you run that simple request, you'll see it running different searches, doing a more exposed deep-research process with web searches, map searches, and tool usage, and then summarizing all of that with a thinking model. And this is one of the most basic things you can do here.
And it has all the different video generators built into it, and all the various image generators. They're super new, but they also have these presets where you can use the agents to call up phone numbers to do things for you. I think that one is sort of interesting, but it only works in the US and Japan. Oh, which makes me think I could actually use this to book a restaurant or something. If I do that, I'll report back next week. Okay, next up, we have an
11:29

Vapi AI

application that really pushes what users can do with voice AI agents to the next level. This site, called Vapi, which I guess stands for "voice API," allows you to combine multiple services, and if you've been following the show, you know we've seen so many new ones recently. What that practically means is you can use the brain of a model like Claude 3.7 Sonnet, with all its smartness, but the voices from GPT-4o Transcribe, which are really good and sound more authentic than a lot of the competition. So, let's open the dashboard you get when you log in. I know most people won't be using this, but I think it's interesting to point out that voice is moving forward super fast. Fun fact, from the research I did for a lecture I held recently on voice assistants: did you know that voice transcription accuracy has improved by 5x since 2012? So if you've ever talked to one of those artificial support agents at a bank, for example, and had a terrible experience, they asked you a question, you responded "yes," and it kept asking whether you actually wanted that and went off in a different direction, that was because those voice transcription services were just not that good and not that accurate. With the rise of generative AI, that changed, and this platform allows you to combine multiple of them. So let's have a look. You can see right here that this makes it feasible to integrate into your workflows. For the model provider, let's say we just pick GPT-4o. For the transcription service, we'll use something completely different: I can go with OpenAI's transcribe model, or use Deepgram, or, if you're a company in the EU, you might want to go with Azure to make sure the data stays on the European continent and complies with GDPR. There are a lot of options here, including ones like ElevenLabs Scribe, which we've talked about. And then the voice configuration can come from yet another service. As you can see, this makes it really modular.
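The mix-and-match idea can be sketched in a few lines. Note this is purely illustrative: the class, field names, and provider strings below are mine, not Vapi's actual SDK or API; the point is only that the brain (LLM), the ears (speech-to-text), and the mouth (text-to-speech) are three independently swappable layers.

```python
from dataclasses import dataclass

@dataclass
class VoiceAssistant:
    """Hypothetical model of a modular voice agent: each layer can come
    from a different vendor, which is the pattern the segment describes."""
    llm: str          # the "brain", e.g. a Claude or GPT model
    transcriber: str  # speech-to-text provider
    voice: str        # text-to-speech provider

    def describe(self) -> str:
        return f"LLM={self.llm}, STT={self.transcriber}, TTS={self.voice}"

# Provider strings are made-up identifiers for illustration only.
assistant = VoiceAssistant(
    llm="anthropic/claude-3.7-sonnet",
    transcriber="deepgram/nova",   # could swap to "azure/..." for EU data residency
    voice="openai/gpt-4o-tts",
)
```

Swapping the transcriber for an Azure-hosted one, say, changes nothing about the other two layers, which is exactly why this modularity matters for things like GDPR compliance.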
So maybe I want to go with the human-sounding voices here; those are a bit more emotional than some of the others. Then you can add all these different tools, which are basically the actions the agent can take: the agent could send texts, or add data to Google Sheets, whatever it might be. There are many more detailed settings, but once you've set that up, you can just hit "Talk to assistant," and we'll use this combination of services with the microphone on my computer. "Hello. Can you hear me?" "How may I help you today? This is Riley, your scheduling assistant. Yes, I can hear you clearly. Thank you for calling Wellness Partners." "All right, Riley. Um, could you translate this into Japanese? Hey, how are you doing?" "If you need help with scheduling, I'm here to assist with scheduling appointments at Wellness Partners. Feel free to let me know how I can assist you." Okay, so there you go. As you can see, Riley does not comply, because apparently he's set up here, in this big system prompt, to only help with scheduling. I wanted to point this one out because it's not just about better models coming out; it's really about combining multiple releases that we look at week by week, taking the best of each one to build voice assistants and voice agents that go above and beyond anything we've seen before. Yeah, you can probably expect more combinations of services like this, and more apps, in the
14:10

NotebookLM Update

future. Okay, next up we have an update to NotebookLM, which I always love covering because it's just such a powerful app. They did introduce paid plans recently for the full functionality, but there's a new feature I just wanted to show you. If you're familiar with NotebookLM, you bring in sources and you can work with them: you can generate podcasts, you can ask questions. It's really good at handling a ton of data; if you have dozens or hundreds of documents, NotebookLM is your friend. But that takes work, right? You have to find and select the sources. Now there's a brand new feature where you just press a button and it finds sources for you. In that sense, it's a bit similar to the deep research products, where you tell it what you want researched and it goes out onto the internet, finds a bunch of pages, browses them, brings them in as sources, and then processes that data to produce a report for you. Now, I say this regularly, and I can't say it enough: I still think OpenAI's Deep Research is the most powerful consumer AI product available today. If you haven't been running deep research reports recently, you really are missing out; it's the one release that keeps on giving so much value. The point being, NotebookLM is sort of becoming a custom deep research tool: you can let it find all the different sources, but then you can vet them, add your own extra sources, and process all of that together. Additionally, they added another feature that's sort of like Google's "I'm Feeling Lucky." If you're familiar with that, it's essentially a random Google search on any topic; here they call it "I'm Feeling Curious." If you're feeling curious and activate it, it picks a random topic and finds random sources for you. Probably not that useful. But the first one, which is called Discover Sources? Super useful. And if you're a NotebookLM user, well, now you know.
Onto the next one. Okay, so this app is
15:40

Cove AI

called Cove. And I've seen flavors of this before. By the way, I want to point out this is not sponsored; we just found the interface of this app, and what they did here, really interesting. I want to feature it not because I think it's the most useful thing in the world, maybe it is, I just haven't been using it much myself, but because I find the interface really useful. I think the future of a lot of AI products, at least in a few subcategories like crafting marketing materials or learning in tandem with AI, think of all these AI tutor apps, is going to look more like this interface, and that's why I wanted to show you. So it's called Cove, and you can just get started for free. The team has done a bunch of testing on this; I'm just going to try it here myself on a free plan with a Google account. As you can see in the demo, if you ask it to help plan a camping trip to Yosemite for two families with young kids who are excited about stargazing, it does all of these different things and pops up these different modules. I mean, this is what the road to AGI looks like, right? It just has this canvas, and it does different things on it. The most interesting part to me is that when it needs an application, it just builds one and uses it in the process. If you've been using something like ChatGPT's Canvas or Anthropic's Artifacts, which was the first consumer product with this kind of interface, you know that if you ask for a minigame, a website, or some simple little app, it often produces that app right in the Artifacts interface. And it doesn't just give you the code; you can use the app, play the game, right there in that interface. Well, this does the same thing, but as part of this big canvas, and then the AI can use the tools it built itself to get further things done.
So, here you can see it proposes that it should build a Yosemite stargazing guide, and then it starts writing the code. You'll see in a second that the app becomes part of your interface. I just think the right implementation of this can be super useful for something like learning a new skill. On one side, you can create a curriculum. In the middle, it can compile the information and visualize it with image generators and more. And next to that, it could create an app with things like flashcards and interactive quizzes based on exactly what you were just learning and practicing. Just a better experience for things like learning, and that's why I wanted to briefly show it off here. As you can see, the little app it built right here isn't perfect, but it lets you select a date for the stargazing. Then you can refresh the map; it shows you the exact spots that are perfect for this and dynamically pulls in the celestial events for the date you picked, with preview images and exact times for all the different planets. And yeah, I think eventually the assistants we use now, like ChatGPT and Claude, etc., will include versions of this and build their own apps once you ask for a specific use case. The interface for something like planning a trip will look completely different from the one for creating marketing campaigns, but I think each will consist of these modules. And I think it only makes sense to have an interface like this rather than a chatbot. The chatbot is just a starting interface that everybody can feel comfortable with; once you have a little experience, I think something like this is preferable. I wanted to include it because I would expect something like this even from OpenAI once we enter the GPT-5 era, where it's not going to be eight different models you need to pick from.
It just figures out what you need and gives you the best tools, and maybe even the best interface possible, to get done the thing you came to ChatGPT for. Okay, on to the
18:33

AIA Public Challenge

next one. For the next story, I want to briefly point out the monthly challenge we're running at the AI Advantage for the month of April. The concept is very simple: we're asking the entire AI Advantage community, including you, the viewer of this YouTube channel, to remember the moment AI clicked for you. If you think about it for a little bit, most people can identify such a moment, whether it's taking a photo of your fridge and asking for a recipe, or solving hours of work with one simple, intuitive prompt you just wrote, and from then on you just start using the product more and more. Now, we talked about this a lot in the community, and we'd love to hear from you too; that's why we created this challenge. The way we framed it: present that aha moment in the form of an image you created with GPT-4o. You can submit it here and win memberships to our community, and a cash prize too. We absolutely love doing these challenges, because not just the participants but all the viewers learn so much from the entire community. So check that out in the free area of our community. The link and all the details are below, and I can't wait to hear about your personal AI aha moment. I'll be sharing mine too, by the way. Okay, on to the
19:31

Google Cloud Next

next one. Okay, next up, we had Google Cloud Next happening this week. That's essentially Google's developer conference, and I'll keep it short here because most of these releases are not available today. Basically, the theme of the entire opening keynote was agents, agents, agents. They've consolidated a lot of the progress in the agentic space, and they're making it available in a secure way to anybody who's using Google Cloud. That's the most concise summary of the whole thing I can come up with for you here. One new model they introduced was actually a music generation model called Lyria, and I thought this was interesting because they're the first hyperscaler to release a music generation model. All the other companies doing it super well, like Udio, are small independent startups, and now Google is the first big player to enter this niche. Other than that, if you want a summary, we sent out a summary video in our weekly newsletter, and I'll also link it in the description below. If you want all the details, watching a 10-minute recap of the entire opening keynote is probably
20:25

Project Astra

best. All right, next up, we'll be talking about Gemini's visual assistant, Project Astra. This made quite a few waves when it was announced back in 2024, but that was just a preview video. You might remember them walking around the office asking questions about different objects, and everybody's reaction was like, "Wow, that's really impressive." Well, now they're finally starting to roll this out to users. It's not clear how the rollout is structured, and unfortunately I don't have access yet, but what I can do is summarize everybody's takes, because they've been pretty unified across the internet among people who do have access: most people are not in love with this feature. It seems the video feature is not really video; it just takes screenshots at regular intervals, runs them through a vision model, and then gives you an answer via the voice assistant. One article from Android Police even pointed out that it feels more like Google Lens with a voice assistant attached, and unfortunately, most people who have tried it so far agree with that take. It's not the insane game changer it appeared to be in the demo video. Also, we're at a point where we've had products like the ChatGPT desktop app with access to your camera and your desktop, and, correct me if I'm wrong, but from my own usage and what I've seen from many other users, it's not one of those sticky things you want to use day to day. Giving it the entire context from your screen and camera is often too much. For a lot of use cases, you want to be very selective about what you give it, and handing over the desktop and the camera is a firehose approach that is not ideal in most cases. I don't know; I'm sure they'll keep iterating on this stuff. But if you disagree, please leave a comment below. Are you using the ChatGPT app's screen-share features a lot? If you are, then great.
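Based on those reports, the behavior sounds less like streaming video and more like periodic frame sampling. A tiny sketch of that pattern, purely hypothetical on my part, not Google's actual implementation, with the capture and analyze functions stubbed out:

```python
import time

def periodic_vision_loop(capture, analyze, interval_s=2.0, frames=3):
    """Answer questions by sampling still frames at a fixed interval,
    rather than processing a continuous video stream. Illustrative only:
    this mirrors what reviewers describe, not any real Astra code."""
    answers = []
    for _ in range(frames):
        frame = capture()               # grab one screenshot / camera frame
        answers.append(analyze(frame))  # run it through a vision model
        time.sleep(interval_s)          # wait before the next sample
    return answers

# Stub demo: pretend capture and analyze are real camera/model calls.
answers = periodic_vision_loop(
    capture=lambda: "frame",
    analyze=lambda f: f"description of {f}",
    interval_s=0.0,  # no waiting for the demo
)
```

Anything that happens between two samples is simply invisible to a loop like this, which would explain why testers say it feels more like Google Lens than live video understanding.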
But Project Astra, at this point in time, is just the same thing, but apparently a weaker implementation, rolling out step by step right now. So there's the update on Gemini's video features. I also want to point out that we had these camera-sharing features inside Google AI Studio before; this is the polished application shipping to consumers. So, yeah, it seems not to be the super exciting thing most people thought it would be once it came out. Okay, on to the next one. Next up, we
22:19

Higgsfield

have a new video generator that has been making some waves. Now, I heard of this one a few months ago already, but I believe it was in early access. Now they've released a full-fledged app called Higgsfield AI. This one specializes in two things: first, generating videos of humans that are anatomically correct, and second, custom camera movements with a ton of presets. So, we went ahead and tested this and compared it to some of the other video generators out there. I haven't seen these results yet, but I've been told this one performs really well. So, I'm excited to review some of them, and then we'll round it out with a few more use cases from across the internet that people have found for this thing. But let's just start with some of the text-to-video examples here with Higgsfield. As per usual, we'll benchmark this versus some of the competing text-to-video models out there, and I'll be looking particularly at examples that include humans, because that's what they specialize in, and you probably won't be using this model to create landscapes and stuff like that anyway. Okay, first up we have this man with a dog holding up the camera. And I mean, first impression, I've seen a lot of these. That's pretty damn good. Particularly the way he puts down his hand is pretty realistic. I mean, if you're super nitpicky, I guess you can find a few flaws with this, but this is a very good result on something that is usually quite hard. How about this one? A lot of models have a problem with this ghost-in-a-mirror type of prompt. And look at that. The anatomy is just fine. You can see that this is a human with a cloth over him. Opening the door looks less great. But yeah, this lives up to the promise of getting anatomy right. What about this DJ lady on a rooftop? Not bad. Honestly, this might be some of the best human anatomy we've seen here yet. Okay, what about image to video, though?
Because there you give it a lot more context, and it doesn't have to invent parts of it. I really want to see this woman walking on a beach. We'll also put up a comparison on screen so you can see for yourself how this measures up versus some of the other best models out there. But this clip in particular always looks kind of messed up, because there's sand, there's water, there's a dress. Huh. But here, okay, arguably the hat and the face aren't perfect, but the body, the anatomy just works. The waves are quite bad as usual, but yeah, this thing is really living up to the promise of doing anatomy correctly, which, let me tell you, is a hard problem that nobody has really solved yet. Okay, lastly, I want to look at an example where we used one of these camera presets, in this case a 360 orbit. That's when you place a camera in front of somebody and the camera rotates 360° around them like this. Um, by the way, here you can kind of preview my little setup in a tiny room in Tokyo. Okay, let's get back to it. Let's have a look at how Higgsfield handled this orbiting motion on a close-up of a face. Hard problem right there. Oh, that's quite good. Definitely better than anything else we've seen here so far. So, this tool looks really impressive. We actually went ahead and looked at a few more use cases, and look at that. This is really impressive. Various stylistic options and creative camera motions, even some of the most complex movements, for which you previously needed precision-controlled camera robots that cost hundreds of thousands of dollars. Now, with this tech, you can recreate that. I don't say this often, but if you're a creative in the video industry, don't sleep on this tool. Okay, on to the next one. So, this
25:05

Copilot Update

is an update to Microsoft Copilot. You might be familiar with it; that is Microsoft's implementation that a lot of corporations have access to, and the main selling point is that it maintains data privacy and is therefore okay to use for many organizations. In the AI Advantage community, we have a lot of executives and people working at corporations where the only model they have is Copilot, and it's just not even close to ChatGPT in terms of functionality. But many people on their company devices don't have a choice and are using Copilot. Now a bunch of upgrades have been announced. Mainly, they're adding memory, which is sort of an automated custom instructions feature that learns about you from your chats. Fantastic beginner-to-intermediate feature. There are more web browsing features, which is fantastic; the search integration really helps reduce hallucinations and adds sources to the data rather than the chatbot just making something up. And there are also actions, where it connects to other applications. These are really capable features and a welcome addition, but I have noticed some miscommunication from various outlets: a lot of the reporting this week around the Microsoft Copilot upgrades has been incoherent. Some outlets report that it's out now and available to everybody. From our testing, that's just not the case. We tried everything we could to get access to this, and although some of it is supposed to be rolled out already, no matter what we did in terms of VPNs, logins, and different plans, we just couldn't get access to any of it. We really wanted to see these Copilot actions in action. No pun intended there, but I guess that one sort of just writes itself. So yeah, if you're a Copilot user, you have new features to look forward to, and hopefully you'll get them over the coming months.
It's not quite clear what that schedule is, but I did want to report on this because many people are using Copilot as their only option, and it's about to get a whole lot better with some of these essential features. And that's really all we got for today. I'll see you next week from Osaka. Again, my personal Instagram will be linked in the description below. I haven't been using it much, but on trips like this, it's kind of nice to share a few stories. Okay, my name is Igor, and I'll see you soon.
