Goodbye GPT-5 — GPT-4.1 Beats It Right Now

AI Master, 19.04.2025 (updated 18.02.2026)
Video description
#sponsored Learn more about Cimphony AI here! https://app.cimphony.ai/signup?via=artur 🚀 Become an AI Master – All-in-one AI Learning https://aimaster.me/pro 📹 Get a Custom Promo Video From AI Master https://collab.aimaster.me/ In this video, I break down everything you need to know about GPT-4.1 and what to expect from GPT-5. We’ll explore the differences between GPT-4o, 4.5, and 4.1, why OpenAI retired GPT-4.5, and how 4.1’s insane 1 million-token context window changes the game for developers, startups, and enterprises. You’ll also learn how GPT-5 will unify OpenAI’s models, improve memory, boost creativity, and handle complex multimodal tasks with ease. Chapters: 0:00 - Intro 0:30 - GPT-4.1 is out 2:17 - GPT-5 is better? 5:18 - One or Multiple? 6:29 - Prompting differences 7:48 - Context and commands 10:47 - Image Generation 12:15 - “How To” Now and in GPT-5 13:57 - Memory 15:47 - Added features 17:50 - Multimodality 19:00 - Wait for GPT-5 or use 4.1?

Table of contents (12 segments)

  1. 0:00 Intro (85 words)
  2. 0:30 GPT-4.1 is out (286 words)
  3. 2:17 GPT-5 is better? (534 words)
  4. 5:18 One or Multiple? (205 words)
  5. 6:29 Prompting differences (238 words)
  6. 7:48 Context and commands (549 words)
  7. 10:47 Image Generation (257 words)
  8. 12:15 “How To” Now and in GPT-5 (310 words)
  9. 13:57 Memory (342 words)
  10. 15:47 Added features (361 words)
  11. 17:50 Multimodality (182 words)
  12. 19:00 Wait for GPT-5 or use 4.1? (173 words)
0:00

Intro

GPT-4.1 just dropped, and it's officially the best OpenAI model out there right now. It's also the last major update before GPT-5 launches. The thing with the GPT-4 family is that it's already massive, and OpenAI usually takes a while to polish each new model. So who knows when GPT-5 will really be finished? Probably not until GPT-6 comes along. The real question is: should you hold off for GPT-5, or just dive into GPT-4 and become an AI expert right
0:30

GPT-4.1 is out

now? So what's the deal with GPT-4.1? You might be wondering: wait, what about 4.5? Right, the naming is confusing. Even though 4.5 came out first, 4.1 is actually more advanced in a bunch of ways, especially for developers and big companies. OpenAI released three versions: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. They are only available through the API, and they have a huge 1-million-token context window. That's enough space to swallow around 3,000 pages of text at once. For comparison, 4o only goes up to 128,000 tokens, which is already pretty big, but 1 million is on a whole other level. It also puts OpenAI on par with Google's Gemini models for handling really long text inputs. This is huge for anyone needing to analyze entire codebases, tackle massive legal docs, or search through huge knowledge bases. But it's not just about how many tokens GPT-4.1 can handle. It's also been tuned for real-world tasks, especially coding. According to OpenAI, 4.1 outperforms the older GPT-4 versions, including 4o and the 4.5 preview, on a whole list of tests. In fact, it's so good that OpenAI decided to retire 4.5 altogether in favor of 4.1. GPT-4.5 was only a few months old, but they're giving developers until July 2025 to switch before it disappears from the API and eventually from the user interface, too. That said, 4.1 still can't create images; only 4o can do that. It also doesn't have a built-in memory feature or anything fancy like Canvas. So you might still need to juggle 4.1, 4o, and occasionally 4.5 depending on what you're
2:17

GPT-5 is better?

doing. That constant switching sounds bad enough, and GPT-5 will put an end to that. The plan is for GPT-5 to merge the huge unsupervised knowledge base from the 4.x series with the chain-of-thought reasoning you see in the o-series models. Basically, 4.5 and 4.1 were the last versions that tried to solve questions in one shot. Going forward, GPT-5 is designed to know when to think longer by breaking complicated problems down into multiple steps behind the scenes. On top of that, OpenAI will fold their latest reasoning model directly into GPT-5. So they won't release a separate o3 model anymore; it will just be built into GPT-5's core. GPT-5 should also come packed with features that used to need separate plugins or even entirely different models. For example, it might automatically do web searches for fresh info, run code or calculations, interpret images or audio, and pull in other tools, all while it's answering your questions. GPT-5 won't bug you about picking GPT-4 versus 3.5 or switching plugins. It will just do all that automatically in the background. But we all know how OpenAI rolls. They release a new model, then keep adding updates and features, sometimes as separate offshoots, for a long time. Then, right when the model feels perfected, here comes the next big thing. So that leaves us wondering what's really the best approach for us users. Right now, we as users definitely need privacy, right? Like, sure, ChatGPT can generate legal documents, but there's no guarantee that it won't send my data somewhere, and it might mess up the formatting or just phrase sentences in a way that would expose me to liabilities. I've been playing around with Cimphony for a few weeks now, and I'm really impressed by how it takes the usual legal nightmares, endless paperwork, back-and-forth with lawyers, surprise costs, and turns them into just a few clicks. One of my favorite things about it is how fast it works.
What used to take me days or even weeks can now be done in minutes. And Cimphony isn't locked into one region or set of laws. If you are moving your business from California to New York, or even to another country, it's got your back. It pulls info from a massive library of regulations, so you're much less likely to miss any important legal points. And with 24/7 support, I don't have to sit around waiting for someone in a different time zone to show up and help me. Business formation, trademarks, and compliance docs: Cimphony does them all. I don't have to bounce between different providers. Need an NDA in a hurry? No problem. Looking to build a partnership agreement? Piece of cake. The AI makes sure everything is legally sound, which frees you up to focus on running your business instead of deciphering legal jargon. Cimphony feels tailor-made for small businesses and startups that don't have a big legal team on speed dial. I love that it's detailed yet still easy to follow. I will leave a link in the description, so be sure to check it
5:18

One or Multiple?

out. Like I mentioned earlier, GPT-5 is supposed to be this one model to rule them all, where everything's under one roof. But is it actually a good idea? Because right now, there's a whole family of GPT-4 models. First, you've got GPT-4o, which is the default model for pretty much everyone, and it's the most polished. It's also where OpenAI likes to test all the cool new features, like image generation. Then there's GPT-4.5, which is a bit more powerful, especially for things like coding, math, and creative tasks, but it doesn't really have any brand-new features. And of course, there's GPT-4o mini, which honestly you'll probably never use unless you're on the free tier. If that wasn't enough, there's also GPT-4o with scheduled tasks, the regular GPT-4 that's about to be retired in a few days, plus some additional reasoning models. Technically, the o1 model isn't part of GPT-4, but since its reasoning is going to be merged into GPT-5, I guess it sort of counts. Clearly, we've got a lot going on in the GPT-4 world right now. So the question is: does lumping everything into GPT-5 truly make things simpler, or just more complicated in a different
6:29

Prompting differences

way. Prompting is probably the most important part of these models. We already have a bunch of videos on how to prompt GPT-4 for image generation, coding, and so on. But GPT-5? It's set to take prompting to a whole new level, in the best possible way. Right now with GPT-4 models, whether that's the original 4o, the slightly more refined GPT-4.5, or the extra context-heavy GPT-4.1, it really helps to begin with a crystal-clear statement of what you want. The model responds best if you just say, "Hey, here's my goal," and maybe add the style or format you're after. For instance, if you want a quick, straightforward explanation, say so. If you need something creative or empathetic, mention that upfront. GPT-5 will still like those details, but they'll be more optional. It should be smart enough to figure out your style by looking at your previous chats and memory without you having to spell it out every time. Another approach that's probably on its way out is chain-of-thought prompting. That's when you instruct ChatGPT to break down a tough problem into simpler steps. For GPT-4o, GPT-4.5, and GPT-4.1, it really helps. But for the o-series, like o1 and o3, it's basically pointless to ask because they already do it automatically. And GPT-5, being even more advanced, will just decide for itself how many steps it
7:48

Context and commands

needs to finish your request. Then there is context. I always say you should give the model as much info as possible about your audience, your goals, your intentions, everything. Even with custom instructions, it's still good practice, because those instructions aren't perfect. They can glitch or mess things up if you forget to adjust them. So, for all GPT-4 models, context is super important. But with GPT-5, you might not have to do that anymore. ChatGPT already peeks at past conversations and remembers certain things, and GPT-5 is going to take that even further. So, for repetitive tasks, you won't have to repeat the same details every time. It would just check its memory and do the same thing you asked before. And what about commands like "search the web" or "format this like a list"? They are useful at the moment, but they might not matter much once GPT-5 rolls around, at least for web searches. Right now, web searching for GPT-4o, GPT-4.5, or GPT-4.1 is possible, but for o1, you have to turn on deep research every time you want fresh info. That can take forever, like 5 to 15 minutes. GPT-5 will probably just have web search built in and running by default, kind of like Google's Gemini. It'll always have an internet connection no matter what you're doing. And then there's the knowledge cutoff. ChatGPT is basically living in the past: it stopped learning new stuff in October 2023, and we're already in 2025. Meanwhile, Gemini 2.5 has a cutoff of January 2025, which is a big leap ahead. So GPT-5 will hopefully catch up or even go beyond that. Prompting tricks and little tips like this are what every ChatGPT user has to keep in mind. They exist for every niche or thing you do. Are you a developer? Here's a full PDF with the best prompting practices. The coolest thing is that prompts for different AIs work across the board, so you can take tips from our guide for Jasper and apply them to ChatGPT. It's all about teaching you how AI works.
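As a rough sketch of that advice, stating your goal, audience, and format up front, here is one way to build a reusable prompt template. The field labels are just one possible convention for illustration, not anything OpenAI prescribes:

```python
def build_prompt(goal: str, audience: str, output_format: str, context: str = "") -> str:
    """Assemble an explicit prompt: GPT-4-era models respond best when
    the goal, the audience, and the desired format are stated up front."""
    parts = [
        f"Goal: {goal}",
        f"Audience: {audience}",
        f"Format: {output_format}",
    ]
    if context:
        # Custom instructions can glitch or go stale, so repeating key
        # background in the prompt itself is still good practice.
        parts.append(f"Context: {context}")
    return "\n".join(parts)

prompt = build_prompt(
    goal="Explain what a context window is",
    audience="non-technical readers",
    output_format="three short bullet points",
)
print(prompt)
```

The same template works across models; for o-series models you would simply skip any "think step by step" additions, since they reason in steps on their own.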
For this exact matter, we've launched a new course on generative AI, the fullest 101 you'll ever need. We're constantly adding new lessons, keeping you in the loop with the most relevant info. And we've also got a bunch of AI tool collections with handpicked tools for any use case. And using our AI tool finder, you can find any tool in seconds. We are building Geek Academy for you. So, click the link below and become a part of our AI family. Finally, the quality of the responses. GPT-4o might be super versatile, especially if you enable web access, but it's easy to spot that AI style in its writing. GPT-4.5 is better at hiding its patterns, but they're still there if you know where to look. The o1 models are even more natural, though they still have some tells. So I'm really hoping GPT-5 will be next-level: creative, less predictable, more surprising. ChatGPT definitely needs a boost in creativity and uniqueness. With GPT-5, you probably won't need to edit much at all. Like, maybe just tweak a word or two and you're done. That's the dream
10:47

Image Generation

anyway. General writing has always been ChatGPT's strong suit. It's been good at that from the start. But image generation? We only got a big boost in that department a few weeks ago with the 4o model. And while it's pretty impressive now, it's still not perfect. That's exactly where GPT-5 is expected to step in. The core engine for creating images probably won't change much, so don't expect a massive leap in quality overnight, but we might get more control over the final look. Maybe something like a built-in menu for resolution and aspect ratio, kind of like Midjourney, instead of typing that info into the prompt every single time. Right now, the images 4o can produce are already fantastic. It feels super natural the way it blends text and visuals. For instance, if you need a photorealistic landscape, a stylized painting, or even a detailed infographic, all you have to do is describe what you want, and ChatGPT will whip it up. It's also great with references. You can upload an existing image or a sketch, and it'll adapt, redraw, or edit it in new styles pretty easily. It's surprisingly good at keeping multiple characters, objects, and little details consistent through tons of revisions. You can even highlight a specific area you want changed, say what you want there, and the AI updates it on the fly. We've actually used this to generate a thumbnail for our channel's latest video about Sam Altman. Just fed a few images to ChatGPT, and it stitched them together
12:15

“How To” Now and in GPT-5

perfectly. If you want to create images with ChatGPT right now, you can just say things like "generate," "draw," or "create an image of," plus whatever you have in mind. For instance, you could ask for a scene with 10 different people, each wearing unique outfits and standing in different poses, and ChatGPT will try to fit that all into one picture. You can also upload a daytime street photo and tell ChatGPT to make it night, or turn it into golden hour and add rain, or even remove that weird sign, and it'll handle it. If you need a specific shape, you can mention 16:9 or square. And if you want a more realistic look, throw in terms like "photorealistic" or even specify a camera and lens. This image generation feature already does things older AI models struggled with, but there are still limits. Sometimes it gets tripped up by really detailed prompts or a lot of text. The resolution may not be super high for professional work, and if you're using a language other than English, the text might look a bit off. GPT-5, on the other hand, should handle text in images better, letting you place bigger sections of text without glitchy spacing or random letters. It'll also be smarter about framing: if you tell it to expand an image instead of cropping, it probably won't cut off anything important. Overall, ChatGPT's image creation is already strong, and GPT-5 will likely make it run even smoother. But if you're hoping Sora will become a part of ChatGPT, don't hold your breath. Sora is built on a totally different foundation, so it's very unlikely it'll make its way into ChatGPT. And let's be honest, OpenAI has pretty much forgotten about Sora anyway. We haven't seen any major updates since it
13:57

Memory

launched. Right now, a ChatGPT model session only has short-term memory based on its context window. If you're talking for a while or have a huge document, the model might start forgetting stuff from earlier unless you remind it. Writing a novel in GPT-4o, for example, means you have to break your story into chunks or keep pasting older chapters so it doesn't forget the plot. Newer versions like GPT-4.1 have pushed these boundaries: 4.1 can handle up to 1 million tokens, which is a huge jump from before. It can even reference previous chats, which is cool. But across multiple sessions, long-term memory is still pretty flawed. If you start a new chat, ChatGPT might not recall what you said yesterday. You usually have to give it a summary, and even then it might lose track of what you were discussing. We've all had that moment where you ask for a small tweak and suddenly it forgets everything else except the new request. Hopefully GPT-5 fixes that. Still, the basic way memory works won't change. Even if GPT-5 remembers everything, it'll still be easy to get lost in the details. Digging up the right info among thousands of chats is like searching for a needle in a haystack. So my advice is to not rely too heavily on the new memory at first. It's still good to be explicit about what you need, especially if it's as simple as adding a few more words to your prompt. One last tip: turn off cross-chat memory if you have tons of work chats mixed in with your personal ones. That memory feature is awesome if all your chats are personal, because it recalls everything you've told it or asked for. But with work chats, it can get messy. I, for example, juggle a bunch of different work conversations, and some of them have nothing to do with each other. All those conflicting details can really confuse ChatGPT. So, yeah, better to switch that off unless you're into living
15:47

Added features

dangerously. If you find OpenAI's lineup of models confusing, well, you're not alone. It almost feels like it's meant to be that way. Each model handles different tasks and has its own special features. I've already covered a few highlights in this video, but I haven't even mentioned some of the big-deal tools that are still in their early stages. One of them is called Operator. It doesn't just answer your questions; it actually goes onto websites, clicks around, fills out forms, and grabs real-time data. At the moment, it's only available for Pro subscribers in the US, and it's still in beta, so it's hit or miss. Sometimes it's great and can compare prices for things like AirPods on Amazon, Best Buy, and Walmart, or fill out your flight details. Other times, it gets stuck or leaves the page half loaded. Then you've got scheduled tasks. Basically, you can set ChatGPT to send you a daily weather report, remind you to take a break from work, or even suggest weekly meal plans. It's handy for little routines, but it still feels more like a fancy to-do list with an alarm than a true personal assistant. When GPT-5 arrives, both Operator and scheduled tasks should feel more natural, with fewer hassles tied to that session-based memory limit. Instead of giving Operator step-by-step directions, you might say, "Use my saved travel preferences and book my usual 10:00 a.m. flight to Chicago on Friday," and GPT-5 would remember things like your preferred airline, seat selection, and budget. For scheduled tasks, GPT-5 will likely let you chain bigger actions together, like "every morning at 8, send me a meal plan, add the ingredients to my grocery list, and place an order if the total is under 50 bucks." And sure, Canvas is fun, but it's still pretty basic. GPT-5 will probably include Canvas from day one in its full form, since the same underlying model runs it. Comparing old Canvas to new Canvas doesn't really matter; all we're likely to see is a few more controls to tweak our text.
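That "chain bigger actions together" idea can be pictured as a simple pipeline of conditional steps. This is purely a hypothetical data structure to illustrate the concept, not a real OpenAI scheduled-tasks API:

```python
from dataclasses import dataclass, field

@dataclass
class ScheduledTask:
    """Hypothetical chained scheduled task, in the spirit of 'every
    morning at 8, send a meal plan, add the ingredients to my grocery
    list, and place an order if the total is under 50 bucks'."""
    schedule: str                        # e.g. "daily 08:00"
    steps: list = field(default_factory=list)

    def then(self, action, condition=None):
        # Each step runs after the previous one; an optional condition
        # gates whether the step fires at all.
        self.steps.append({"action": action, "condition": condition})
        return self  # allow fluent chaining

morning = (
    ScheduledTask(schedule="daily 08:00")
    .then("send me a meal plan")
    .then("add the ingredients to my grocery list")
    .then("place an order", condition="total < 50 USD")
)
print(len(morning.steps))  # 3
```

Today you would have to set up each of those as a separate scheduled task and handle the "under 50 bucks" check yourself; the hope is that GPT-5 handles the whole chain from one sentence.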
Watch our ChatGPT guide to new features, where we teach how to use
17:50

Multimodality

Canvas. Look, guys, I work with files, and for me multimodality isn't just a word, and I'm not happy with multimodality in ChatGPT. On paper, the list looks impressive: documents, PDFs, images, audio. But in reality, there are tons of limitations. For example, there is no support for videos. No matter if it's a YouTube link or a video file, ChatGPT can do nothing with it. Or take tables with data, like Excel files. You can upload them to 4o, 4.1, and 4.5, and they will do data analysis. But the o1 model, the one that would actually provide real, actionable insights, doesn't support anything but text-based images or PDFs. Even cheaper models can handle datasets. Competitors like Claude or Google's Gemini support way more files, even videos and proper audio. GPT-5 must change that. It'll unite all models under one roof, so file support should also become unified. I want to be able to upload anything my heart desires and get real results. Data analysis, photo editing, writing, image analysis, video understanding: all of that is coming with GPT-5.
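The uneven file support described above can be summed up as a small lookup table. The sets below are only a rough reading of the limitations mentioned in this video, not an official compatibility matrix; real support differs by plan and changes over time:

```python
# Rough per-model file-support map, as described in this video
# (illustrative only; actual support varies and changes frequently).
SUPPORTED_FILES = {
    "gpt-4o":  {"pdf", "image", "docx", "xlsx", "csv", "audio"},
    "gpt-4.1": {"pdf", "image", "docx", "xlsx", "csv"},
    "gpt-4.5": {"pdf", "image", "docx", "xlsx", "csv"},
    "o1":      {"pdf", "image"},   # text-based inputs only
}

def can_analyze(model: str, file_type: str) -> bool:
    """Return True if this sketch assumes the model accepts the file type.
    Note that no model here takes 'video', matching the complaint above."""
    return file_type in SUPPORTED_FILES.get(model, set())

print(can_analyze("gpt-4o", "xlsx"))   # spreadsheets work on 4o
print(can_analyze("o1", "xlsx"))       # but not on o1
print(can_analyze("gpt-4o", "video"))  # and video works nowhere
```

This kind of per-model juggling is exactly what a unified GPT-5 is supposed to make unnecessary.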
19:00

Wait for GPT-5 or use 4.1?

When is GPT-5 coming out? Sam Altman indicated GPT-5 would roll out a few months from April 2025. This means that if you need cutting-edge AI capabilities right now, GPT-4.1, or GPT-4.5 for Pro users, is your best option, but you do have to juggle models and be extra careful with your prompts. The improvements we're expecting will be life-changing, but you just can't always wait for the next big thing. You still have to master prompting if you want the best results, regardless of the model. Learning GPT-4.1, 4.5, or 4o will only make you better prepared for GPT-5. The fundamentals, like giving clear instructions, specifying how you want the output formatted, or breaking complex tasks into steps, aren't going away anytime soon. So the sooner you sharpen those skills, the more smoothly you will adapt when GPT-5 finally arrives. So join Geek Academy and become a part of our family. Thanks for watching, and see you in the next one.
