Warp 2.0 & Claude Code: Which Actually Delivers?

Ray Amjad · 25.06.2025 · 8,376 views · 101 likes · updated 18.02.2026
Video description
Join AI Startup School & learn to vibe code and get paying customers for your apps ⤵️ https://www.skool.com/ai-startup-school

📲 Stay up to date on AI with my app Tensor AI
- on iOS: https://apps.apple.com/us/app/ai-news-tensor-ai/id6746403746
- on Android: https://play.google.com/store/apps/details?id=app.tensorai.tensorai

CONNECT WITH ME
📸 Instagram: https://www.instagram.com/theramjad/
👨‍💻 LinkedIn: https://www.linkedin.com/in/rayamjad/
🌍 My website/blog: https://www.rayamjad.com/

Links:
- Warp: http://warp.dev/
- Warp 2.0 Announcement: https://x.com/warpdotdev/status/1937525185843752969
- Claude Code: https://www.anthropic.com/claude-code

Timestamps:
00:00 - Intro
00:24 - Using Warp 2.0
04:25 - Claude Code
06:10 - My Conclusion

Table of contents (4 segments)

  1. 0:00 Intro — 112 words
  2. 0:24 Using Warp 2.0 — 902 words
  3. 4:25 Claude Code — 416 words
  4. 6:10 My Conclusion — 139 words
0:00

Intro

So, Warp 2.0 released earlier today, and it's billed as the fastest way to build with multiple AI agents, from writing code to shipping. We're going to go through some of the updates and also compare it to Claude Code, because there are a lot of hype posts on Twitter where someone says, oh, this totally destroys that AI tool, or whatever. These hype posts are always appearing. So I'm going to test it with some real-world examples. I'm not going to go through all the features, because you can just read the Twitter thread to go through all the features. So, basically, going back to
0:24

Using Warp 2.0

Warp, it says dedicated AI agent management UI, and I'm going to try it out with a real-world coding example and compare it to Claude Code, because I find that with a lot of these benchmarks, even though many models perform similarly on the benchmarks, some models just perform much better than others in real-world examples. So I'm going to use an add-on that I made and submitted to the add-on store. Basically, in this add-on I want to add a model drop-down selection menu. Right now, when I open up Anki and go to my add-on, then in the top right, if I go to Tools, go to AI Language Explainer, go to Settings, then in the Text Generation area I want a list of the OpenAI models, and have the user be able to select any model they want there. So, starting out, I'm going to go to the Library folder, go to addons21, and then go to the add-on folder, which contains this code. Now that I'm in this folder, the prompt I gave it is: can you make it so that there's a drop-down on the Text Generation tab that allows the user to select the model they want from OpenAI? Search online and find out which models are available and add them to a drop-down. Don't include the reasoning models. I didn't explicitly say that it should be saved to settings and saved to meta.json, because I think it should infer that from the actual context, but we'll see how well this performs compared to Claude Code. I deliberately gave it a vague prompt for that reason. So let's press Enter. Now it's going to think, and I guess it's going to look through the different files. You can see that as this agent is running, it appears in the top right, and it asks: do you want to run curl? It says it's going and getting the models available. So it seems search isn't available, or it can't search properly right now.
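The prompt's core ask can be sketched in a few lines. This is a hypothetical illustration, not the agent's actual output: the function name and the model ids are examples, and the filter simply drops OpenAI's "o"-series reasoning model names as the prompt requires.

```python
# Hypothetical sketch of the drop-down's model filter: keep OpenAI chat
# models, exclude reasoning models. The ids below are illustrative
# examples, not a live listing from OpenAI's API.
def dropdown_models(all_models):
    """Return model ids to show in the drop-down, excluding reasoning models."""
    reasoning_prefixes = ("o1", "o3", "o4")
    return [m for m in all_models if not m.startswith(reasoning_prefixes)]

available = ["gpt-4o", "gpt-4o-mini", "gpt-4.1", "gpt-3.5-turbo", "o1", "o3-mini"]
choices = dropdown_models(available)  # reasoning models filtered out
```

In a real Anki add-on these choices would populate a Qt combo box on the settings dialog; the filtering logic itself is the part the agents had to get right.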
While we're waiting for this to complete, some of the features include the ability to attach files, images, and URLs, and if you prefer to talk, you can press a microphone button and give it a longer prompt. I think that covers some of the use case where I see people using Superwhisper instead and then talking to the IDE, and I think more IDEs, like Cursor, will also implement a talking option soon enough, because so many people are using Superwhisper for it. It's also pretty interesting that you can now directly edit code diffs without switching to another application. Now it says it's going to add the drop-down menu, and it has a bunch of models available in the drop-down, but it silently, completely removed GPT-4.1. So let's press Apply Changes, and it summarizes the changes it made. Then we chat with it and say: hey, can you remove the GPT-4 model and the GPT-4 Turbo model, add GPT-4.1, search the exact model names online just to make sure, and also add text underneath the model drop-down that says GPT-4.1 is the recommended model. Apparently we have to press Stop Voice Input, and then I guess it will transcribe; it doesn't transcribe as you're actually speaking. So it probably sends the audio off to a server somewhere instead of doing it locally or streaming it. For some reason, it claims that GPT-4.1 is not a standard model name. That's weird, because it's literally available in the docs. So I'm not sure what it did with those URLs when it was getting the data. Basically, it added back the model that we had previously in our code instead of finding it online. We can press Apply Changes, and that should be it. Overall, it seems to be finished, so we can close Anki, reopen it, and see if the settings are there.
If we reopen it and go to Tools, Settings, Text Generation, we can see the models actually exist here. There's a nice drop-down, and it changed it in the config as well. If I press Save, close the app again, and then reopen it, hopefully it should work: it should load 3.5 again. Opening it up again, it loaded 3.5. So it does save properly, and I think the only problem it had so far is the way it's getting data from the internet. That can be limiting if things are not in its training data. So now what I can do is actually uninstall this add-on and then reinstall it. So now I will write
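The save-and-restart check above relies on the chosen model being persisted in the add-on's config. Here is a minimal sketch of that persistence, assuming an Anki-style meta.json whose "config" dict survives restarts; a real add-on would go through Anki's addonManager config API rather than touching the file directly, and the function names here are hypothetical.

```python
import json
import os

def save_selected_model(meta_path, model):
    """Write the chosen model into meta.json's "config" dict."""
    meta = {}
    if os.path.exists(meta_path):
        with open(meta_path) as f:
            meta = json.load(f)
    meta.setdefault("config", {})["model"] = model
    with open(meta_path, "w") as f:
        json.dump(meta, f, indent=2)

def load_selected_model(meta_path, default="gpt-3.5-turbo"):
    """Read the saved model back, falling back to a default on first run."""
    if not os.path.exists(meta_path):
        return default
    with open(meta_path) as f:
        return json.load(f).get("config", {}).get("model", default)
```

This round-trip is exactly what the close-and-reopen test in the video exercises: the drop-down's selection is written on Save and read back on the next launch.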
4:25

Claude Code

in the exact same prompt that I wrote earlier in the video, press Enter, and see how it performs. One of the nice, satisfying things about Claude Code is the fact that it shows you how many tokens it's using. It makes a nice to-do list, and I think Claude Code's UI is just better than Warp's agent when it comes to coding. You can see it actually performs a web search properly: it finds ten different results, and I think that's because they partnered with Google, or use Google's API, to actually perform web searches. The unfortunate thing is that when you have this particular agent running, it does not appear in the Agents tab; only Warp's own agents appear there. But if you press New Agent, it will basically just open a new tab. So these are the changes it recommended. It put in GPT-4.1 mini and nano, and that's because it was actually able to search the internet and do it properly, unlike Warp's agent. We can just press Yes to accept these changes. It seems that Claude Code made basically all the changes that were required, and now it's checking itself against those changes and testing the implementation. It ran the Python code to check for compile errors, and now it's fetching data from OpenAI's API endpoint, or rather the platform docs endpoint, to check that the model is available, but it returned a 403 error, unfortunately, and I'm not sure if that's because of a network setting or something like that. Still, it's nice that it actually tries to fetch the data from the right URL. It seems it's finished, and it did it more robustly than Warp, which included adding GPT-4.1 as a default model and actually being aware that it exists. I think that's quite nice. So we can go back to Anki, reopen it again, and check that it actually works this time.
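The availability check Claude Code attempted boils down to hitting OpenAI's model-listing endpoint. A sketch of that request, assuming the standard `GET /v1/models` route with a bearer token; the request is only constructed here, not sent, since sending requires a real API key.

```python
import urllib.request

OPENAI_MODELS_URL = "https://api.openai.com/v1/models"

def build_models_request(api_key):
    """Prepare an authenticated request for OpenAI's model listing.

    Sending it with urllib.request.urlopen returns JSON with a "data"
    list of model objects. A 403, as seen in the video, generally means
    the request was blocked or the credentials were rejected.
    """
    return urllib.request.Request(
        OPENAI_MODELS_URL,
        headers={"Authorization": f"Bearer {api_key}"},
    )
```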
So we can go to Tools, Settings, and then Text Generation. And yeah, it has all the correct models. And if I select one of the models, press Save, close the application, then reopen the application, it should have the same model saved into
6:10

My Conclusion

settings. So, going back over here, going to Settings again, and it did save the same model. So, ultimately, I think Claude Code did a better job, but it's lacking some of the features that Warp has, such as having multiple agents running in parallel, being notified of which agents are complete or need action taken, voice input, and a few other things. It's interesting that all these different companies are competing on different form factors. You have Cursor being the IDE form factor. You have Warp and Claude Code being the terminal form factor. You have OpenAI Codex being more of a GitHub pull-request form factor. And I'll be pretty interested in seeing, in the long term, which form factor actually takes over and becomes the most popular.
