Let's start with a company that I haven't talked about in quite a while, and that's Midjourney, a company that actually kind of made my channel in the beginning. I started this channel by doing a whole bunch of Midjourney prompting tutorials. Well, they've just released their eighth version of Midjourney. Apparently, Midjourney V8 is much better at following detailed directions and has a much better ability to understand your aesthetics through personalization and style references. Images are more coherent and detailed, and apparently text rendering works better than ever. That's something that Midjourney has historically been the absolute worst at. They've upgraded their web interface, and a lot of the old parameters you used to be able to use, like chaos and weird and raw, still work. They also have a new HD mode which renders images in 2K resolution.

Based on their featured image here, it seems like they're going more for creative images and fictional worlds as opposed to realism. Now, I don't want to nitpick too much, but I find it interesting that when I zoom in on this image, this lady here has only three fingers, when it felt like AI models fixed fingers a while ago.

We're going to do some of our own tests here in a second, but so far the consensus online is that this new model has not been amazing. Alex Petrescu posted this comparison that he generated. If you want, you can pause and read the full prompt. It's a pretty detailed prompt, but the idea is that the man's hand is supposed to be on fire. Here's what Nano Banana generated, here's what Cadream 5.0 Light generated, and here's what Midjourney V8 created. It looks like not only his hand but his entire shoulder and neck might also be on fire. To be fair to Midjourney, I copied and pasted that exact same prompt in here, and these were the examples I got.
Now, his fingers are really jacked up, but I mean, they're also on fire. I didn't get the whole shoulder-on-fire thing. Here's another example that I got, here's another, and here's another. So I feel like this person may have cherry-picked the worst version to share against these other two, but I can't be certain of that.

There are some other examples that they shared, although for these they didn't share the prompt, so I couldn't go and do my own testing. But here's one of a Ford Bronco generated in Midjourney versus one generated with Nano Banana. Here's one of someone staring into the camera generated with Midjourney versus one generated with Higgsfield Soul Cinema. This one I thought was hilarious: I think it was supposed to be the woman sitting in the back of the car with the hood open, but it put her separate from the actual car. This is the Midjourney version versus what Nano Banana managed to generate. As always, I'll link this up in the description. He did show off some other examples from his own testing, kind of showing that Midjourney performed the worst. But when you generate with Midjourney, it generates four images, and we're only seeing one here. So maybe they were cherry-picking the worst ones.

Jav Lopez also shared some examples of testing on Midjourney. Feel free to pause and see the prompts. These were the outcomes of these cartoony images that he generated. I'm pointing this one out specifically because when you zoom in on the hand here, yeah, we thought this was supposed to be a solved problem. Here's a comparison Jav did of V7 versus, I'm guessing, the exact same prompt in V8, and yeah, that arm's definitely broken. But I'm not just going to take these guys' word for it. I'm going to test some of my own prompts here. So, for this first test, I'm going to test instruction following.
I'm going to give it a very detailed prompt and see how well it follows the instructions, and I'm going to go ahead and submit this. One thing I will say I've noticed about this model: it is extremely fast, way faster than version 7. I mean, it got most of the stuff right. If we open up and look at it, it's not horrible. I don't feel like these keys and this mousepad are to scale, but it's not bad. This one looks a little bit better here. This one, it just totally botched. This looks horrible, like Midjourney V1 or V2 level. This is bad. And then this one probably followed the instructions the best, with the 30° angle open and everything, but it's hard to be impressed given what we've seen more recently from image generators.

Let's test text generation. Feel free to pause the screen, but the text we should see is "AI won't replace you, but someone using AI will." Okay, I can't help but think this one's actually trying to look like me, but probably not, just a coincidence. All right, let's take a peek at these. The text rendering is what I was wanting to pay attention to here, and it's not great. Here's the one that kind of looks like me a little bit. I don't know why it made it all out of focus when I'm specifically asking for that text with my prompt. Same with this one; it's clearly putting it out of focus. "AI won't replace you, but someone using AI will." That one's probably the closest, but the details here are just mangled. Where is this arm going? Why are there two arms going to the same microphone?

All right, I'm going to test something really weird and random here: a transparent glass elephant walking through a desert made of books instead of sand. Inside the elephant are tiny astronauts repairing glowing constellations. Sky filled with floating jellyfish shaped like planets. Surreal but physically believable lighting and shadows.
Extreme detailed imaginative coherent composition, --weird 800. I mean, this is like weird infinite, but let's see what it does. Quite honestly, I feel like this is where Midjourney excels: you give it the weirdest, most random things you can imagine and it kind of figures out how to do them in an interesting way, but when you want it to do something more specific, it's not good. In my opinion, they were at one time the state-of-the-art image model. They were pretty much the best you could get, super impressive with every image, and now I feel like they've gotten worse. Maybe that's just because we've gotten desensitized to AI images, and things like Nano Banana are just so far ahead of this now. Yeah, I guess this is just the state of Midjourney these days. But that's not the only image generation model we got this week. We got one out of a company that I was not actually expecting to see