I'm going to be editing MindDeck over here, allowing users to add their own custom MCP servers and use existing MCP server templates. Basically, in MindDeck, you can have many different chats in parallel, to many different LLM providers. You can use any provider that's available on OpenRouter, which is literally hundreds of providers, in parallel. There are nested subchats as well. And everything happens locally, so the API keys that you enter are stored locally on your device. There is no backend, and you can see this yourself by going to Inspect and then to Network. If you say something like "Hi" to your chat, you can see your device makes requests directly to the OpenAI endpoint. There is no backend funneling all your requests, which means it's privacy-first as well. And if you want to, you can import your chats from ChatGPT. I personally use it every single day, across literally thousands of different chats, and I'm always adding to it. There's a coupon code linked down below for the one-time purchase that it is.

Anyways, I'll be using HyperWhisper to describe the changes that I want. Hey, so basically, I want you to add a new section of the website, kind of like the models section, which allows users to add their own MCP servers. There should be some pre-built, pre-added MCP servers, mainly the Tavily one, whose documentation I'm about to give you. But you should allow the user to add their own MCP servers, with their own headers and URLs and so forth. Those MCP servers are injected into the OpenAI Responses API that is being used; I'm going to give you the documentation for how that should be combined together as well. You should also add a model for this, to store the MCP servers that the user has added in Dexie.

Annoyingly enough, when you start a new Codex session, it switches back to GPT-5 Medium, so you have to switch back to GPT-5 High, and I'll do that.
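The core of the request above is a mapping from user-entered server records to tool entries in a Responses API request. A minimal TypeScript sketch of that mapping follows; the record field names are assumptions (not MindDeck's actual Dexie schema), but the `type: "mcp"` tool shape with `server_label`, `server_url`, `headers`, and `require_approval` is the form the Responses API documents for remote MCP servers.

```typescript
// Hypothetical shape for an MCP server record as stored in Dexie.
// Field names are assumptions, not MindDeck's real schema.
interface McpServerRecord {
  id?: number;
  label: string;                    // shown in the UI, reused as server_label
  url: string;                      // the MCP server's HTTP endpoint
  headers: Record<string, string>;  // e.g. auth headers the user entered
  enabled: boolean;
}

// Remote MCP tool entry as accepted by the Responses API `tools` array.
interface McpTool {
  type: "mcp";
  server_label: string;
  server_url: string;
  headers?: Record<string, string>;
  require_approval: "always" | "never";
}

// Map the user's enabled servers into `tools` for a Responses API
// request body. Disabled servers are filtered out; empty header maps
// are dropped rather than sent as `{}`.
function toResponsesTools(records: McpServerRecord[]): McpTool[] {
  return records
    .filter((r) => r.enabled)
    .map((r) => ({
      type: "mcp" as const,
      server_label: r.label,
      server_url: r.url,
      headers: Object.keys(r.headers).length ? r.headers : undefined,
      require_approval: "never" as const,
    }));
}
```

On the storage side, the Dexie table might be declared as something like `db.version(n).stores({ mcpServers: "++id, label" })`, with the request builder calling `toResponsesTools(await db.mcpServers.toArray())` before each chat turn; again, the table and version names are illustrative.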
And I'll also get it to search online, because I'm quite interested in how good its searching capabilities are. At least we have a recent paper from Salesforce AI Research, called MCP-Universe: GPT-5 is really good at web searching compared to Claude 4.0 Sonnet. Claude 4 Opus is not on here, but I imagine it wouldn't be much better than Claude 4.0 Sonnet. If OpenAI Codex had a planning mode, I would use it right now, but since it doesn't, I won't use a planning mode for either agent.

One thing I do want to mention while watching these models work is that Codex has a really strange way of searching the internet. First it uses Google over here, then later it switches to DuckDuckGo for some reason, and then later it tries to do it via Python, using requests. And yeah, it's really interesting watching it go.

Okay, so it seems that both of them are done. Codex CLI did use more tokens as well: 130,000 tokens, whereas Claude Code used 72,000 tokens. Since it used more tokens, it probably got more context from online or something, so I'm hoping its solution will be better, but we'll see right now. So I'll run both of them.

Here is the one from Codex CLI first, and it has an MCP tab over here, with a list of MCP servers. It has Tavily, twice. It has an "add MCP server" button over here. So let's actually check if the MCP server works properly. I'm going to have to add my API key. And yeah, it's quite nice. It added an icon URL field over here as well, and it lets me delete some of these default servers too. But now the sidebar has disappeared, so that needs to be added back. Now that's loaded, we can search online and check. So we can say, "Who won the first FIFA World Cup? Search online." And then, using the Responses API, it should hopefully search online. And I should really add a thinking indicator. So did it actually get the information online? We can check the response.
And yeah, you can see the MCP tool call was made over here. I wish it had updated the UI as well, but I should have told it in my instructions to update the UI. You can see the MCP tool call was made using the API key that I provided for Tavily, and then it searched with Tavily and got the information like that. So overall, I am really impressed with the solution that it gave. Although I wish that, just like in the models section, it had the sidebar over here in the MCP tab. And it only kind of copied the design over, because it seems to have come up with slightly different padding, color scheme, and design. But yeah, as for the underlying logic, it works really well.

And now this is the one that Claude Code gave us, so we can go to MCP servers. It did copy the design over; the design is very similar to the one in models over here. It did add two Tavily servers as well, and I'm not sure why it added two. There's a nice on/off toggle. I wish it allowed me to delete them, because there are two of them, but I guess I can delete them somewhere else maybe. And Claude Code does seem to have a better distinction between the built-in servers and the custom servers, compared to the Codex CLI solution at least. And it has a tool approval mode, which is really good, because Codex CLI did not implement a tool approval mode. So I wonder if it added some front-end logic for tool approval as well. But anyway, make sure the server is active, update server, and let's try a new chat and enter our OpenAI key. Now we can open it up in Inspect Element, go to Network over here, press new chat, enter the same message about the FIFA World Cup, and see if the Responses API actually searches online. So it has been provided with the MCP tool over here. And if I search the event stream for MCP, then it actually is using the MCP tool.
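The check I'm doing here, searching the raw event stream for MCP activity, can be sketched in TypeScript. The Responses API streams as Server-Sent Events, so a tiny SSE parser plus a substring check is enough; the exact event names OpenAI emits for MCP calls aren't shown on screen, so the event/data strings in the test are illustrative.

```typescript
// Minimal SSE parser: splits a raw text/event-stream body into
// (event, data) pairs. Events are separated by blank lines; field
// lines start with "event:" or "data:".
interface SseEvent {
  event: string;
  data: string;
}

function parseSse(raw: string): SseEvent[] {
  return raw
    .split("\n\n")
    .map((block) => {
      let event = "message"; // SSE default event name
      const data: string[] = [];
      for (const line of block.split("\n")) {
        if (line.startsWith("event:")) event = line.slice(6).trim();
        else if (line.startsWith("data:")) data.push(line.slice(5).trim());
      }
      return { event, data: data.join("\n") };
    })
    .filter((e) => e.data.length > 0); // drop empty trailing blocks
}

// Did any streamed event mention an MCP tool call? This is the
// programmatic version of typing "mcp" into the DevTools search box.
function sawMcpCall(raw: string): boolean {
  return parseSse(raw).some(
    (e) => e.event.includes("mcp") || e.data.includes('"mcp_call"')
  );
}
```

This is just a convenience for eyeballing the network tab; in a real client you would handle the typed streaming events from the SDK instead of grepping raw text.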
So you can see it's being returned in the event stream over here. And yeah, overall, both of them implemented a working solution, which is really good. I do prefer the design of the one that Claude Code gave when it comes to the MCP servers page, because it copied over the existing design, and it considered things such as tool approval, which Codex CLI did not, even though Codex CLI used more tokens. But something that I do like about Codex CLI is that it gave a nice "questions for you" section over here, saying, here are some next steps that I can quickly take. This is something I've noticed with GPT-5 in general: whenever it ends a message, it usually has some kind of follow-up question or something I can do later on. Okay, now we'll be moving on to HyperWhisper.
There is a coupon code down below for lifetime access to this if you do want it. But basically, as you see it right now, the design is kind of basic, in the sense that there is a fast Fourier transform being applied to your voice as you're speaking, and there's a kind of rainbow that's swishing back and forth. And I want a design that looks slightly more like this over here: there are waves, these waves go up and down, and there's a neon glow around the waves or something like that. Basically, I'm going to pass it the screenshot, and then I'm going to see how well each of the models does at updating the UI for this.

Okay, it turns out you can't just paste an image into Codex CLI. You have to attach this image command over here and then give it the file path, so I'm going to do that right now. Now I'll just type out the prompt and say: hey, so basically I want you to update the audio visualizer for this application. Currently, I'm using a fast Fourier transform and showing the peaks and the waves as the user is speaking. I want to update the design to use the kind of design in this image instead. It should be neon waves with a glow surrounding them, and they should be able to go both up and down, not just up. And, yeah, see what you come up with.

So now I'll copy over this prompt to here as well and press enter. This was slightly annoying, because you can't just drag and drop an image; it has to be there with the initial prompt. It seems that someone did open a pull request on GitHub adding the ability to drag and drop images, which hopefully goes through. But yeah, I think overall Codex CLI does not have as good functionality built in by default; it's very limited in the features it has right now. Okay, so both of them are done. Claude Code took 54,000 tokens, whereas Codex CLI took 50,000 tokens.
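The "go both up and down, not just up" part of the prompt above is really just a small transform on the FFT output. The actual app is SwiftUI, but the language-agnostic math can be sketched as follows, with a hypothetical helper name and a tiny neighbor-smoothing pass assumed to keep the wave from looking spiky:

```typescript
// Sketch of the bar-height math only (the real visualizer is SwiftUI;
// this is the language-agnostic part). Given normalized FFT magnitudes
// in [0, 1], produce mirrored amplitudes so each bar extends the same
// distance above and below the centerline, with simple neighbor
// smoothing so adjacent bars form a continuous-looking wave.
function mirroredWave(
  magnitudes: number[],
  maxHalfHeight: number
): { up: number; down: number }[] {
  // Smooth each bin with its neighbors (a tiny moving average);
  // edge bins reuse their own value where a neighbor is missing.
  const smoothed = magnitudes.map((m, i) => {
    const prev = magnitudes[i - 1] ?? m;
    const next = magnitudes[i + 1] ?? m;
    return (prev + m + next) / 3;
  });
  // Mirror: the same half-height is drawn above and below center,
  // which is what makes the wave go "both up and down, not just up".
  return smoothed.map((m) => {
    const h = Math.min(1, Math.max(0, m)) * maxHalfHeight;
    return { up: h, down: h };
  });
}
```

The neon-glow part is then purely a rendering concern (e.g. a blurred duplicate of the wave path drawn behind it), not something this height math needs to know about.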
So they ran about the same. But the tokens would be priced differently if you are using the API instead of a monthly subscription, so that's worth taking into consideration as well. Anyway, we can use Xcode to run both of them. We'll start off with the Codex one.

All right, so this is what the Codex one looks like. So if I press this and then allow access to my microphone. It looks pretty interesting. It does have the neon effect, and it does kind of move and react to what I'm saying. You can see it dies down as I stop speaking. But I don't like these small bumps that are going across it; it's strange. I think the effect is very cool, though. I'm actually quite interested, or excited, to see what Claude Code came up with. And then press Control-Shift-I. And yeah, Claude Code did not do as good of a job, certainly when it comes to SwiftUI. I think I'm going to hand it to GPT-5 for this one. So the tweet that I mentioned earlier, where this person was saying that GPT-5 High for SwiftUI was really great, was actually correct. I'm going to iterate on that design with GPT-5 High, make it look better, and then update the application.

And of course, if you want to use the application, then you can download it using the coupon code and the link in the description down below. Like with the other application, your requests go directly to OpenAI, so any post-processing that happens on the audio, or the speech-to-text of the audio itself, is done by OpenAI. But you can also download any of these models offline. If you go to the model list over here, you can see these are all different Whisper model sizes, and you can download bigger and smaller ones, and multilingual ones, depending on your own use cases. And then you can create modes out of them, like the Japanese mode I have over here. And of course, you can switch between the modes and