Want to get more customers, make more profit & save 100s of hours with AI? https://go.juliangoldie.com/ai-profit-boardroom
Free AI Community here 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553
🚀 Get a FREE SEO strategy Session + Discount Now: https://go.juliangoldie.com/strategy-session
🤯 Want more money, traffic and sales from SEO? Join the SEO Elite Circle👇
https://go.juliangoldie.com/register
🤖 Need AI Automation Services? Book an AI Discovery Session Here: https://juliangoldieaiautomation.com/
Click below for FREE access to ✅ 50 FREE AI SEO TOOLS 🔥 200+ AI SEO Prompts! 📈 FREE AI SEO COMMUNITY with 2,000 SEOs! 🚀 Free AI SEO Course 🏆 Plus TODAY's Video NOTES...
https://go.juliangoldie.com/chat-gpt-prompts
- Want a Custom GPT built? Order here: https://kwnyzkju.manus.space/
- Join our FREE AI SEO Accelerator here: https://www.facebook.com/groups/aiseomastermind
- Need consulting? Book a call with us here: https://link.juliangoldie.com/widget/bookings/seo-gameplanesov12
AI Showdown: DeepSeek R1 0528 vs Claude 4 vs Gemini 2.5 Pro!
In this intense AI coding battle, we pit China's new DeepSeek R1 0528 against Claude 4 and Gemini 2.5 Pro. See which AI model excels at building real applications, producing professional-grade code, and delivering superior UI/UX design. Discover the exact prompts used to push each AI to its limit, and find out which one emerges victorious from the most rigorous AI coding tests. Join the AI Success Lab community for access to all the resources and updates on AI advancements.
00:00 Introduction to the AI Coding Battle
00:48 Setting Up the AI Models
01:50 Running the First Test
03:30 Analyzing the First Test Results
05:33 Running the Second Test
06:57 Analyzing the Second Test Results
08:11 Running the Third Test
09:51 Analyzing the Third Test Results
15:09 Final Thoughts and Conclusion
15:30 Additional Resources and Community
DeepSeek R1 0528 versus Claude 4 versus Gemini 2.5 Pro. I just ran China's new version of DeepSeek R1 against Claude 4 and Gemini 2.5 Pro, and the results shocked me. This is a brutal AI coding battle, and what happened next is very interesting. I'm about to show you which AI destroys the competition when building real applications, and the results are not what you'd expect. You'll learn the exact prompts I use to push each AI to its limit, see which one produces the most professional-grade code that actually works, and discover which model wins the most intense AI coding gauntlet I've ever created. DeepSeek R1 0528 versus Claude 4 versus Gemini 2.5 Pro. Who wins? Today we're going to test them side by side and see which one creates the best output. So, if you want to start using the new DeepSeek R1 update, go to OpenRouter, then open the chat. Make sure you have the free variant selected if you want to use the free version. I'm actually going to use the paid version, just in case the free one lets me down today. You can also see that we can enable web search and other options if we want, but we'll go with the defaults for now. In the meantime, we're also going to run Claude 4 with exactly the same prompt side by side. And not just that, we'll also test Gemini 2.5 Pro, put each of them through a coding battle, and see which one gives us the best results. By the way, if you want all of my resources on the DeepSeek R1 update, how to access it, what it means, etc., plus all of my new SOPs and updates on AI, feel free to get that inside the AI Success Lab. It's a community of 8,000 people, and you can join it via the link in the comments and
description. All right. So, we're going to get straight into this, start running some prompts, and see which model performs best. From here, we're going to make sure we have Claude Opus selected, so we're using Opus 4 as you can see. Then we're going to use DeepSeek R1, and we've got Gemini 2.5 Pro ready to go over here. Now, I'm also going to select Canvas so we can preview directly inside the chat. For DeepSeek, if we run some code, we'll have to use liveweave.com to preview it. Let's get this bad boy started. Let's get this AI party started. All right. So, I'm going to run this prompt side by side on each model and see what we get back. If you want all the prompts from today, they're inside the AI Success Lab; just go to the classroom and you'll find them there. Then we can compare the outputs side by side and see which one performs best. Now, speaking 100% honestly, I have a feeling Gemini 2.5 Pro or Claude 4 are going to be overpowered for this. But at the same time, when I tested DeepSeek R1 0528 today, I was so impressed, it was so powerful, that I thought we could actually compare them side by side and R1 could give them a good run for their money. Bear in mind as well that DeepSeek R1, if you're using the free version on OpenRouter, is, number one, open source, and number two, accessible for free, so it's very interesting to compare these models. You can use Gemini 2.5 Pro, but there are limits, and Claude obviously has limits as well. So it'll be interesting to see which one performs best and how they compare side by side. Now we can see them all coding in here. It'll be interesting to see which one finishes first. So it looks like Claude Opus finished first, as you can see. Let's pull this up full screen and see what we got back. What we asked for was a drum visual, right? A dopamine drum visual, a circular drum machine interface, tap to beat, and so on. Let's see what we got. That's just incredible, isn't it? And then we've got a little progress bar at the bottom, I think. Let's play that. All right. So, now we've also got the response back from Gemini 2.5 Pro. One thing to note here: DeepSeek R1 is already a lot slower than the other two. But here's the bigger issue: Gemini 2.5 Pro actually failed. It didn't create anything, and there's an error. We can click on "fix error," but just bear in mind that Gemini 2.5 Pro failed that test. So it's going to be interesting to see how DeepSeek R1 performs and what it comes back with. In the meantime, I can ask Opus to level this up a little. I'm also going to select extended thinking inside the options here, just to make it even more overpowered. So we say to Opus: improve the UI, make it more interesting, plus fun and enjoyable, plus crank up the dopamine levels. We'll wait for that to code out. Yeah, DeepSeek R1 is super slow on these tests. What I'm actually going to have to do, I think, is run multiple tests at the same time with DeepSeek, because you can see how slow it is and how much time it's taking to get back to us. So, in the meantime, what
I've actually done is run another test in the background, and this one is for a flashing keyword game. So, let's see what we got back from DeepSeek R1 and how it performed. It is quite a lot of code, to be fair. So, we're going to take the code from the second test, copy that, and preview it. This is pretty cool, though. This is another one that DeepSeek R1 just one-shotted, as you can see. So, if we spin that to win, pretty cool. We can zoom out a bit here as well. There we go. All right. Now, let's run that same test inside Claude 4 Opus and Gemini. It's just a go, isn't it? Right now, look at that. So, you've got different drum sounds there. It worked perfectly both times we tested it, and that was pretty easy and simple to do. So, we're going to close that, then start a new chat over here and move on to the second test, which DeepSeek R1 has already created an output for, but we'll test it on Opus as well. And then Gemini: let's see what we got back. Nice. Total trash. So, Gemini actually failed the first test; it gave us nothing. And as for DeepSeek, DeepSeek R1 0528 actually gave us an error and doesn't seem to have finished the code, as you can see. Now, I'm 99% sure I got charged for that, even though there was an error. If I have a look at my credits... yeah, look at that. Got charged for using nothing. Okay, thank you. All right. So, DeepSeek R1 failed the first test. Gemini 2.5 Pro failed the first test. Let's see how Gemini and Claude perform on the second test. Claude obviously won the first round. A viewer on Twitter asks, "Is it actually useless at one-shots?" So far, on the second test, DeepSeek R1 0528 worked really well, but on the first test, yeah, I would agree: the one-shot totally timed out on us and didn't work at all. DeepSeek R1 0528 is just a little bit too slow. That's what I would say about it. So what we can do with another one
is we're going to say: build an SEO racing game where we fly through different examples. So if we go inside here and close that, we're going to run this on DeepSeek R1 0528: build a racing game where you fly a rocket-powered blog post through AI spam sites, collect schema fuel, and so on. And we'll wait for that to code out. All right. Now, if we have a look at Claude Opus and its output, this looks pretty cool. So here we go. Let's spin it to win it. So powerful. It was so powerful I couldn't even handle it. Yeah, it's pretty. To be fair, DeepSeek's was okay, but if you compare them side by side... let's try these out. So this is DeepSeek R1 0528, and this is Claude Opus. Claude Opus wins. Gemini 2.5 Pro, where are we up to, mate? All right, we got the code here. Let's see on LiveWeave what we got. Plug that in. It's not bad, but I don't like the UI. Do not like the UI. You can see the words and everything overlapping right there. Let's run this in Canvas; I say now run it in Canvas. But yeah, I would say overall in the tests, we compare them side by side.
So again, Claude Opus won by a long way. Like the flashing headlines, the sound effects, the UI is super nice, the design is good. Great front end and back end right there. DeepSeek R1 0528 competed; it was definitely in the race. It didn't create something as nice as Claude 4 Opus, but for a free option, not bad at all. And then Gemini 2.5 Pro, as much as I love it, came in last right there. Exactly the same prompt, but the UI is just not as nice. We can wait for the second version on Canvas; the first was a one-shot. So I'm going to rank it: Claude, DeepSeek, and then Gemini 2.5 Pro. Bear in mind, Gemini failed the first test. So did DeepSeek. And Claude Opus 4 has come out on top both times. Right. So if we go on here, we'll have a cheeky go on this. Yeah. So you can see how the text is just overlapping; that doesn't look very nice. We click on spin. There's no sound effects either. Yeah, it's average. I think Opus wins by a long way right there. So, let's see what we get back on the next test. And over here, we're going to go inside Claude Opus again: new chat, build a racing game. Do the same inside Gemini 2.5 Pro. Just make sure, if you're doing these tests, that you're running on Canvas. I always forget to click it; it should be on by default, I think. But yeah, there we go. And then DeepSeek R1 is the slowest model by a long way. Just realize that it is super slow. I honestly think Claude 4 Opus will probably create its output faster than DeepSeek R1, even though we gave DeepSeek R1 the input like five minutes ago. Now, some people ask why I use LiveWeave. I always use LiveWeave for previewing raw HTML. So if you have raw HTML, I use LiveWeave unless there's a Canvas option. For example, Gemini and Claude both have Canvas options, so you can easily preview the code you create directly in there. But OpenRouter doesn't have a Canvas option, as far as I'm aware.
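As a side note, if you'd rather not paste each model's output into LiveWeave by hand, you can preview raw HTML locally by writing it to a file and opening it in your default browser. This is just a minimal sketch of that workflow (not something from the video), using only the Python standard library:

```python
import pathlib
import tempfile
import webbrowser

def preview_html(html: str, open_browser: bool = True) -> str:
    """Write raw HTML to a temp file and (optionally) open it in the default browser."""
    path = pathlib.Path(tempfile.mkdtemp()) / "preview.html"
    path.write_text(html, encoding="utf-8")
    if open_browser:
        # file:// URL opens straight in the default browser, no server needed
        webbrowser.open(path.as_uri())
    return str(path)

if __name__ == "__main__":
    # Paste the model's one-shot HTML output here instead of into LiveWeave.
    demo = "<!doctype html><title>Drum machine</title><h1>It works</h1>"
    print(preview_html(demo, open_browser=False))
```

Anything that is a single self-contained HTML file (like the one-shot games in these tests) previews fine this way; pages that fetch external assets may still be easier to check in LiveWeave or a Canvas.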
And so you have to run the code through LiveWeave to test it first. So, we got the output back from R2, sorry, R1 0528, and we have the output back from Gemini 2.5 Pro. Let's give Gemini 2.5 Pro a chance first. I still don't like the UI. I don't know what's going on here, whether Opus is just making it look bad by comparison, but the UI is super boring on Gemini 2.5 Pro. Not bad, though; it actually worked. So, that's okay. Now, let's have a look at Claude's output. So, this is Claude's SEO race game. We'll plug this in and see what we got. Isn't that way more fun than Gemini's? Look at that bad boy. You're having the time of your life playing that. It's so trippy. It looks cool, though. Yeah, I like that. That's cool. All right. Now, let's test DeepSeek R1 as well. Grab that code, plug it into LiveWeave, and there we go. All right. Here we go. Well, I don't seem to be able to go. Oh, there we go. All right. Top block. It's a weird one. Like, the UI is super weird, but it's fun. If I had to compare them side by side, for sure, undeniably, Claude Opus 4 absolutely smashed it. It wasn't even close. Claude 4 Opus won the race. Then, if I had to compare the other two, I'm going to go with DeepSeek R1, just because I think its UI is a bit more interesting. It feels like a Mega Drive game. Let's refresh that. It's a bit more interesting, fun, etc. Yeah, I'm going to go with DeepSeek coming in second right there. So, just to recap: test number one, the drum game, only Claude 4 Opus could do it. Test number two, the wheel: Gemini kind of failed a little, creating something really buggy; DeepSeek R1 came in second and created something okay, but Claude 4 Opus again smashed it. And then on the last round, Gemini created a really boring game, DeepSeek R1 created a game that was okay, but again, Claude Opus just smashed it. And I don't think anyone can compete with it, to be honest
with you. I think Claude Opus is miles ahead of everyone. So those are the final outputs from what we've seen. You've seen the tests, you've seen the games, you've seen what you can create with AI now. And for one-shots, Claude 4 Opus is absolutely the GOAT, even compared to the latest update from DeepSeek R1 and everything else. So thanks so much for watching. If you want all of my free resources on DeepSeek R1, how to get access to it, and a full recipe on how to build anything with it, feel free to grab that via the link in the comments and description. If you want a free AI automation strategy session, the link to book one is also in the comments and description. Basically, on this call, we look at your business, find where you can save the most time, and show you how we can automate it for you. Additionally, if you want coaching, support, and community advice from me, feel free to get that inside the AI Profit Boardroom, link in the comments and description. Inside this community, you can ask any questions you have. Once a week, we make a video for you based on your biggest problems. We also do live coaching calls in there, and you get my best automations, workflows, and templates. On top of that, inside, we show you how to make money with AI. It's all about showing you how to make money and save time with AI so you can get the best results possible, because it's one thing knowing what the updates are and how to get access to them; it's another thing understanding how they can save you time and how you can start implementing them today. That's what the AI Profit Boardroom is all about. So, feel free to grab that link in the comments and description. Appreciate you watching.
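One last note for anyone who would rather script these side-by-side tests than click through three chat UIs: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so one request shape works for all three models. The sketch below is a rough, hedged example: the model slugs (`deepseek/deepseek-r1-0528`, `anthropic/claude-opus-4`, `google/gemini-2.5-pro`) are my best guesses at the current IDs, so check OpenRouter's model list before relying on them.

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

# Model slugs are assumptions -- verify against OpenRouter's model list.
# Appending ":free" to the DeepSeek slug selects the free variant, if available.
MODELS = [
    "deepseek/deepseek-r1-0528",
    "anthropic/claude-opus-4",
    "google/gemini-2.5-pro",
]

PROMPT = ("Build a circular drum machine interface in a single HTML file. "
          "Tap pads to play beats. Make the UI fun and colourful.")

def build_request(model: str, prompt: str) -> dict:
    """Build the OpenAI-compatible chat payload OpenRouter expects."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def run(model: str, prompt: str, api_key: str) -> str:
    """Send one prompt to one model and return the reply text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_request(model, prompt)).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY")
    for model in MODELS:
        if key:
            print(model, "->", run(model, PROMPT, key)[:80])
        else:
            # Dry run without an API key: just show the payload we'd send.
            print("dry run:", json.dumps(build_request(model, PROMPT))[:80])
```

Running the same prompt through each model this way also lets you time the responses, which would make the "DeepSeek is the slowest by a long way" observation measurable rather than eyeballed.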