look at some practical ways to use this new model, which is even available on the free plan now, in ways that you might not have considered yet. Okay, and in here I want to hit three main points. First of all, the interface, because it's a bit different than they actually presented. Look, on the Pro plan I have three different models to select from. Then I want to talk about some examples of its main use case, which is development. I have a really fantastic practical example here comparing it to Claude's results on a very big repo that I'm working on. And then lastly, I want to give you my take on which models I'll be using for my main day-to-day use cases.

Okay, so starting out with the model selector. In their presentation, they essentially said that there's going to be no more model selection, but this is not entirely true. If you're on the free plan, which is most people, then you only get GPT-5, plus this additional option to think longer. So the claim that you won't be able to select its thinking depth is also not entirely true. Even on the free plan, you can always make it think longer, think a bit harder, which by the way caused a lot of confusion with some people, because in some places it's referred to as reasoning, and here as thinking. The technical term would be chain of thought. Then again, thinking and reasoning in humans is different from what the models are actually doing here. It's really just a term used to express that the model circles back to its ideas and reviews them over multiple rounds. Either way, there are more options as you upgrade through the plans. On the Plus plan, you get access to GPT-5 Thinking, which just takes more time to think through the answer. And on the Pro plan, which is the $200 plan, you get GPT-5 Pro, which they didn't even talk about. I've actually been using this ever since release, and this model is insane.
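As a side note for developers: the "think longer" toggle in the app roughly corresponds to a reasoning-effort setting that OpenAI exposes through its API for reasoning models. Here's a minimal sketch of how a request payload with that knob might look. The parameter name `reasoning_effort`, the effort values, and the model id `gpt-5` are assumptions based on how the o-series models expose this, so check the current API reference before relying on them; this only builds the payload and makes no network call.

```python
# Sketch: building a chat request that asks for deeper reasoning.
# NOTE: "reasoning_effort", its allowed values, and the model id
# "gpt-5" are assumptions; verify against the current API docs.

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble a request payload; effort maps to the 'think longer' idea."""
    allowed = {"minimal", "low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "gpt-5",                # hypothetical model id
        "reasoning_effort": effort,      # deeper effort = longer "thinking"
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Harden the module management in this repo.", effort="high")
print(payload["reasoning_effort"])  # → high
```

The point of keeping this a plain dict builder is that you can unit-test the effort validation without ever hitting the API.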
And I think a lot of the opinions you hear online are not based on this one. Just keep that in mind: if you're on Reddit, on Twitter, heck, even on YouTube, most people are not on the $200 plan. They're going to be judging this model with "think longer" probably turned off. And then, yeah, a lot of the opinions now are, "Hey, it's not as impressive as people think." Well, I would say two things. First, they're probably not using one of these options. And second, most of those opinions I've seen over the past day come from people who don't do development at all. That is the biggest selling point and the biggest upgrade for power users of this model, and that's why the two examples I want to look at now will be development-related. But I really wanted to make this point, because you've got to take the opinions of people on the internet with a grain of salt. I mean, if I just look at the video I uploaded yesterday, the opinions range all the way from "Wow, great examples, thank you for showing how it actually performs" to "the first 10 minutes are useless." But, you know, the latter probably comes from a person who watched the entire 90-minute presentation, so they have the basics and don't need a summary, which is literally timestamped as a summary. That's the problem with the internet. But here we look at actual examples, so let's do that.

Let's move into the second part, which is evaluating its ability to help developers. In short: holy guacamole, it's so good. First of all, I managed to resolve an issue that no other model before this could solve. For a bit of context, I'm working on this application, which is essentially like an operating system for my company.
This is just a little preview, but it has these different modules, and then I can combine multiple automations, custom tools, and prompts, everything that I have, into an operating system for my people to actually use without them being developers. And this project has become quite lengthy over the past weeks. Matter of fact, it's 27,000 lines of code, most of it generated with Claude Code, but I'm at the phase where I'm actually working with proper developers to harden it and make it solid, so I can use it for myself and then also help other companies with actually implementing AI.

Now, here's the thing. I took that entire repo, downloaded the zip file, and I wanted to harden the module management, something I've really been struggling with inside of Claude Code over the past days. I tried Gemini, I tried Claude Opus 4.1, but none of the results were satisfactory. When I ran this prompt through GPT-5 Pro and gave it the entire application, it thought for 15 minutes and came up with a fantastic plan. If you're a developer and interested in this, you can pause and have a look at it. Point being, this is an excellent and very realistic plan considering where I'm actually at with the application.

To compare, I then went ahead and ran the same prompt through Claude. One problem there was that it couldn't even take the entire context of the repo, because it's so big by now, so I had to leave out certain files. It did end up working, and I used Opus 4.1 to create the same type of plan. Now, it created a good plan, but look, I think this is where it really gets interesting, because what I did next was ask GPT-5 Pro to compare the plan it made to the second plan I took from Claude: "Can you compare it to the alternative plan that I have? Which one is better, and what are the differences?" I purposefully left a bit of ambiguity around which one is better. Then I pasted Claude's entire plan, which is right here.
Again, you can pause and look at this if you want some of the details. And here's the difference between the two plans: GPT-5 met me where I'm at right now with the app. It gave me a plan that, with developers, would be executable with a small team in a few weeks or months. Whereas Claude gave me a plan that I would need 6 to 12 months for, and it completely rethought the entire application. It figured that, hey, I will want third-party modules, different module packs depending on the user, cross-module workflows, an entire marketplace. It basically thought of it as a startup and how it could take it all the way to production, which is not a bad thing in general, but it's a terrible thing for me, because I'm just trying to make this application work a little better rather than planning a project that I would execute with 10 developers. It's kind of sweet that GPT-5 Pro also gave me a plan that mixes those two approaches. But ultimately, what I was really looking for, without even specifying it, was this near-term plan to just harden the app, make it better, make the things that are there right now work properly, so I can move on to the next phase rather than thinking of the big picture, which might be a one-to-two-year vision for this thing. And I say that because if I took the plan from Claude and gave it to Claude Code to actually execute on, it would never be able to implement that gigantic vision. Whereas right now, I'm working through this plan step by step with Claude Code to actually implement it. Look, I have it on my second screen; I'm working through this plan step by step. I'll leave a comment below on how that went and whether it actually fully worked. Again, I've been struggling to resolve these issues with other models, but I think this is a really good illustration of how these models operate and what I've noticed in GPT-5 so far.
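If you want to reproduce that comparison workflow yourself, the trick is simply to present both plans under neutral labels so the model isn't told which one it authored. A minimal sketch of assembling such a prompt; the function name and the "Plan A" / "Plan B" labels are my own, not anything from either product:

```python
# Sketch: building a neutral side-by-side comparison prompt for two plans.
# Labeling them "Plan A" / "Plan B" (instead of "yours" vs. "Claude's")
# keeps the authorship ambiguous, as described above.

def comparison_prompt(plan_a: str, plan_b: str) -> str:
    return (
        "Can you compare the following two plans? "
        "Which one is better, and what are the differences?\n\n"
        f"Plan A:\n{plan_a}\n\n"
        f"Plan B:\n{plan_b}\n"
    )

prompt = comparison_prompt(
    "Harden existing modules; ship in weeks with a small team.",
    "Third-party modules, marketplace, 6-12 month roadmap.",
)
print("Plan A:" in prompt and "Plan B:" in prompt)  # → True
```

You could also swap which plan gets which label across two runs to check whether the verdict flips with the ordering.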
It's really good at inferring the instructions that you did not give it, and it's really good at adhering to the parts that you did tell it. It respects every part of your request and executes on it at a level that I haven't seen before when it comes to code. So for development, I'm still using Claude Code, but I will absolutely be using GPT-5, especially this Pro mode, to plan new phases, to review the code that Claude Code creates, to create realistic plans. I mean, ideally I would have Claude Code with GPT-5, which would be a different product, and no, the OpenAI CLI doesn't work as well. Point being, it's insane at development.

And to really drive that point home, one more quick example that Sam Altman actually tweeted: building a little beatbot. And I also learned something while testing this in both GPT and Claude. First of all, if you go to the Pro mode, you cannot create these interactive apps right in here. When I ran this prompt in Pro mode, it did everything, even created the song for me that I can download, but it does not have the ability to create a little interface. So that's just something to consider. Now let's have a practical look and compare it to the same prompt inside of Claude.

Okay, so I suppose I could shift things here, huh? And add a little note. Okay, that's kind of sick. Nice. I like the snare drum there. So this works super well, beautiful little interface. Now, let's do the same thing inside of Claude here. They've been the best at these little interfaces, no doubt, and even this publish button is something that OpenAI doesn't have yet. But let's get to the app now. Okay, let's add a triple snare here. Okay, so I like the way this is self-contained and doesn't go off screen, so I'll give that to the Artifact. But then I also kind of prefer the simplistic interface here. No, this one is more colorful. Both work equally well. Ah, there are even presets here: a techno preset, a trap preset.
To be fair, I didn't ask for that. Usually Claude is ambitious like that and just creates these master plans that it tries to execute on, so I think that reflects the first point I made really well. But yeah, both of them did really well. If I had to pick one based on this prompt, with the ability to add new instruments here and the music sounding better, I've got to give it to GPT-5. Interesting.

So, let's round out this segment for now. Obviously, I'll be following up in the coming days and weeks with more videos and more comparisons, but as of right now, when it comes to these five use case categories, what would I be using? For writing, I really like GPT-5. I think it sounds great. I haven't fully made up my mind whether I like Opus better, but I just use GPT all the time and it's right there. So for writing, it's going to be GPT-5. For business use cases, o3 was actually my go-to. Now, with GPT-5 being similar to o3, plus the ability to use GPT-5 Pro in here, from what I've tested so far, I don't see this changing for business, marketing, and sales use cases. GPT-5 still; it's like a slightly better o3, and I love that. So I'm going to be using that. For development, it's going to be a mix. I'm going to stick with Claude Code, because nothing matches its agentic ability, multi-agent orchestration, and tool use. I'm sure there will be more competing products in the future, but as of right now, it's Claude Code. But as I showed and discussed here, I'm starting to wish I had GPT-5 inside of Claude Code, and I think that's why a lot of people are raving about GPT-5 within Cursor and saying it's next level. Look, that's the top comment on yesterday's video, for example, with many more reflecting that sentiment all across the internet. So really, it's going to be a mix of GPT-5 and Opus 4.1 in Claude Code.
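One aside on those little beatbots before moving on: whatever interface the models generate, the core of an app like that is usually just a boolean step grid per instrument that a scheduler loops over. Here's a minimal, framework-free sketch of that idea; all the names are mine, not from either model's output, and a real app would trigger audio samples instead of returning instrument names.

```python
# Sketch: the step-sequencer grid at the heart of a simple beat machine.
# Each instrument maps to 16 boolean steps; hits_at() reports which
# instruments fire on a given step (a real app would play samples here).

STEPS = 16

def make_pattern() -> dict[str, list[bool]]:
    """Create an empty 16-step grid for three instruments."""
    return {name: [False] * STEPS for name in ("kick", "snare", "hihat")}

def toggle(pattern: dict[str, list[bool]], instrument: str, step: int) -> None:
    """Flip one cell, like clicking a pad in the UI."""
    pattern[instrument][step] = not pattern[instrument][step]

def hits_at(pattern: dict[str, list[bool]], step: int) -> list[str]:
    """Instruments that trigger on this step; wraps so playback loops."""
    return [name for name, row in pattern.items() if row[step % STEPS]]

p = make_pattern()
for s in (0, 4, 8, 12):          # four-on-the-floor kick
    toggle(p, "kick", s)
toggle(p, "snare", 4)
print(hits_at(p, 4))  # → ['kick', 'snare']
```

Presets like "techno" or "trap" then reduce to shipping a few pre-filled grids, which is presumably what Claude's version was doing.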
For research, from what I've seen so far, I think Gemini's Deep Research is still the best product on the market right now. And for coaching and psychological use cases, this thing is super empathetic, as I pointed out in yesterday's video. This is the one where I don't feel fully confident making a recommendation just yet, but I like its tone. I think most people are going to enjoy it, just like they enjoyed GPT-4o. And from what I've seen so far, with its ability to really meet you where you're at, have an understanding of your situation, and infer the parts of the prompt that you might not have specified, I'm heavily leaning toward GPT-5 over the other competitors for coaching or psychological use cases. So that's my GPT-5 verdict for now. Now, let's look at some other pieces of AI news that you can use, because there's a lot this week.