So I have a couple of concluding thoughts here. First of all, it is very fast most of the time, and while being fast, it is also creative. When I was making the landing page, I got it to come up with 10 different designs, some using Tailwind and some not. This is the first design it came up with, which is pretty basic, using Tailwind. This was the second design, and it's significantly better. This is the third design; I think this one was not using Tailwind, though it looks more like an agency website than a SaaS website. This was the fourth design, also using Tailwind, and the fifth design here is using Tailwind as well. This is another Tailwind design, and I liked this one the most because it has these widgets over here. I then told it to expand on the widgets and make something like this. But honestly, this one was also pretty good. This is another design without Tailwind, and again it's more like an agency website, so I didn't go with it. In this design it ran into a small type error, which I'll quickly fix now. And yeah, this design looks more like a hacking website almost, or a terminal-style website, so I'm not going for this either. This is another design not using Tailwind. For these last four designs, I reminded it: hey, you're capable of being extremely creative, use that to your advantage. I think I might have gone for this one if I were building some other kind of application, but honestly, I was pretty surprised. So yeah, it does seem very promising when it comes to design. You could have it come up with 10 to 15 different landing pages, maybe based on different images, all inspired by your copy or your brand, then cycle through them, pick the one you like the most, and iterate on that.
One thing I did find is that it's really good at iterating on designs it made itself. With Claude and GPT models, I find they reach a stagnation point when iterating on a design they came up with and stop really making any more changes. I noticed this when I was trying to get it to build an onboarding for HyperWhisper, since there's no real onboarding in the application right now. I got Claude Code to come up with an onboarding for me, and this is what it produced: super basic, doesn't look nice at all. Then I tried to get Gemini to improve it. This is the first improvement it made for one theme I proposed; honestly, it's better in some respects, but it's still not good. So I asked it to come up with a different design, and it came up with this, which honestly isn't great either. So I think I might get it to scrap the onboarding Claude Code came up with entirely and just say: come up with a brand new onboarding with these steps. I find that when it's iterating on a design that another model has written, it does a worse job than when you tell it to come up with a fresh, brand new design by itself. Some of this behavior may also be because I'm not prompting it right, since people are still learning how to prompt the model effectively. While coding on HyperWhisper, I noticed a few more things. Firstly, it's not really good at writing SwiftUI-related code; I had to go back to Claude Code several times to make corrections because whatever it wrote just did not compile. But it's surprisingly good at fixing bugs that something like Codex CLI could not identify when given an image of the problem.
I think in some cases the overreachingness of the model can be good. For example, it identified some redundant code while working on a slightly different task, then just cleaned it up and simplified it. Honestly, I think this can be pretty useful in my own workflow, because I've noticed that Claude Code and Codex CLI like to leave a lot of redundant code around: they write something new but don't remove the existing code that's no longer required, despite me telling them to do so. Maybe because Gemini 3 Pro has a bigger context window, and is therefore better able to piece different things together, it can identify redundant code more easily. Or maybe it just fixes things in a slightly different way, and can spot redundant code that other models created but that it did not create itself. I'll have to see over time whether it can identify and remove its own redundant code, unlike the Claude and GPT models. Anyway, continuing on: in some cases that overreachingness, and the side quests it goes on, do lead me to start a new session, because it feels like the session has been compromised by whatever side quest it decided to do, and it's just easier to start fresh. It does write really clean, well-structured code, which is good. It's also really great at getting a project started for the first time and coming up with a pretty complete implementation, and I think you notice this in some of the designs it makes, like these voxel landscapes and so forth; they are pretty complete on the first attempt. I find this better than the Claude and GPT models, because those like to add stubs and placeholders instead of actually finishing off the entire implementation when given a pretty big task.
Maybe that's because they're trying to squeeze it all into their context window, and Gemini doesn't really worry about that; its context window is so big that it just does a complete implementation in the first place. Or maybe it's related to its ability to hold a big-picture understanding of the entire codebase. Either way, even when starting a brand new project like the Agent Stack project I'm making right now, I never had to tell it that a feature was incomplete or that it had put a stub or placeholder somewhere. Another thing I noticed about Gemini 3 Pro is that it kept getting lost in the types of the project, and I had to switch over to Composer 1, for example, to fix all the type issues it was causing or just couldn't get past. I basically suspect it has some kind of contradictory behavior here. Gemini 3 is very creative, and that same creativity is really good when it comes to design, because it does a lot of things design-wise that you don't have to ask it to do, like the smaller details. But it also means it can be overreaching in some ways, ending up doing tasks you did not intend for it to do, or making random fixes, which can be good or annoying. Maybe you can prompt it better to prevent that, and I'm sure a lot of us will be learning to prompt this model better over the coming weeks; it usually takes a couple of weeks after release for everyone to figure out how to prompt a model well. I also suspect that its big-picture understanding of the codebase and its big context window make it worse at really small details, like fixing a certain type in the codebase. Even when I told it to avoid using `any` types in TypeScript, it still continued to use them.
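As a side note, one thing that helps regardless of the model is making `any` fail mechanically rather than relying on the prompt alone. Here's a minimal sketch of the pattern I'd push it toward instead (the `parsePort` function is hypothetical, just to illustrate `unknown` plus narrowing as the alternative to `any`):

```typescript
// Hypothetical example: instead of letting generated code lean on `any`,
// accept `unknown` and narrow explicitly so the compiler keeps checking.
function parsePort(raw: unknown): number {
  // `unknown` forces a type check before use, unlike `any`.
  if (typeof raw === "string" && /^\d+$/.test(raw)) {
    return Number(raw);
  }
  if (typeof raw === "number" && Number.isInteger(raw)) {
    return raw;
  }
  throw new Error(`invalid port: ${String(raw)}`);
}

console.log(parsePort("8080")); // → 8080
```

You can also enable the `@typescript-eslint/no-explicit-any` lint rule, so any `any` the model sneaks in fails lint instead of needing another prompting round.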
So ultimately, I think many large language models these days have some kind of contradictory behavior, a double-edged sword. For example, GPT-5 and 5.1 and Codex are pretty bad at design, but really good at paying attention to smaller details and fixing types. I think that behavior may be somewhat mutually exclusive: it may be hard, at least for now, to have a model that can resolve really fine-grained, type-related problems while also being really creative and paying attention to the bigger picture of the entire project. So I think I will keep using Gemini 3 Pro, especially for brand new projects, because out of all the models, it does the best job of actually getting a project off the ground. Then for other things, like fixing up types, I'll switch back to Composer or GPT-5.1, and I'm still figuring out where Claude Code fits into my workflow, because I now think Gemini 3 is finally usable for coding. Gemini 2.5 Pro, which was released about nine months ago, wasn't really being used by anyone for coding in the last couple of months, because other models were just significantly better. But since Gemini 3 Pro is really good at actually making a fully-fledged project and getting you through the entire process, like I did with Agent Stack, I think many more people will be using it for fully-fledged projects. I also saw earlier today that someone from the Google DeepMind team shared some of their best practices for using Gemini 3 and prompting it well (it will be linked down below). One of the interesting points is multimodal coherence: text, images, audio, and video should all be treated as equal inputs.
Instructions should reference specific modalities clearly to ensure the model synthesizes across them rather than analyzing them in isolation. Practically, I think this means that if you share another modality, you have to reference clearly what you want from it; otherwise the model may just implement it as-is. I've seen this behavior myself. HyperWhisper basically has a support page, which looks like this, and I wanted a similar support page implemented for Agent Stack. So I took a screenshot of this page and gave it to Gemini 3, and it implemented the exact same thing, with the exact same colors and icons and everything. So yeah, I thought that was pretty interesting. You may want to be careful when giving Gemini 3 images, and when you do, reference exactly what you want from the image; otherwise it may just reproduce it as-is. I thought my prompting was pretty clear, but I think I got into the habit of writing prompts the way Claude and GPT understand them, and I have to change my prompting style a little for Gemini 3. Anyway, that is basically my vibe check of the model. If you want to buy my application, HyperWhisper, there's a Black Friday sale going on right now, and there will be a coupon code down below. And of course, if you want access to the private beta of Agent Stack, the website will be linked down below once I have deployed it, and then you can just contact me by pressing the like button over here.
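To make that concrete, here's an illustrative sketch of how I'd now phrase an image-based request; the wording and page details are made up for the example, not my actual prompt, but the idea is to name what to take from the screenshot and what to override:

```typescript
// Illustrative prompt: spell out what to REUSE from the image and what to
// REPLACE, instead of letting the model copy the screenshot verbatim.
const supportPagePrompt = [
  "Attached: a screenshot of an existing support page.",
  "Reuse only its layout and section structure.",
  "Do not copy its colors or icons; use our brand palette and icon set.",
].join("\n");

console.log(supportPagePrompt);
```

Whether this fully fixes the verbatim-copy behavior I can't say yet, but it matches the multimodal-coherence advice: each modality gets an explicit instruction about its role.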
I will continue to work on this particular project with Gemini 3 Pro to understand how good the model is and learn some best practices for using it, and also to integrate some harder features, like LiveKit and voice agents, so you have AI agents on your website that can help you sell your products better and provide support over audio instead. I will be sharing more of my progress with this application in future videos. And of course, if you want to learn more about vibe coding and how to make applications like the one I'm making, I share a lot about that in my AI Startup School, including how to monetize your applications, across all these classes over here. A bunch of people have already joined and have seen pretty good success with their own applications. There will be a link down below if you are interested in joining.