Let me show you how I built this really cool multi-user, AI-enabled dinner coordination chat system using Google APIs for the places and the map, Anthropic's LLM for the AI, and the Convex real-time database for all of the database work. Convex is the sponsor of this video, and thank you so much to them. And I gotta tell you, this project would've been a real pain in the butt without Convex. So let me show you how it all works and give you the code, and then you can try it out and build your own multi-user chat-enabled application. Let's get right into it.

Alright, so I don't think a line-by-line walkthrough makes a lot of sense in the world of LLMs and agents and agentic coding. What I do think makes sense is walking you through the application and talking about the high-level architectural points, so that when you ask your agent to do stuff on your behalf, you understand what it's doing. So let's first talk about what the application is and what it does. Then we'll talk about how to get it set up, and it's super easy to set up. And then we'll talk about how it all works underneath the hood.

Let's go check out the application. Here we've got two different users simultaneously, Jacksone and Joe, and they're both in the same chat, sharing the same thread, and each of them has a UI that's synchronized using the Convex real-time database. On the left-hand side you've got the chat, then you've got the restaurant map showing where the restaurants we want to go to are in town. Here we're showing Portland, which is where I live. And then you've got a shortlist for voting, which makes it a genuinely kind of cool app.

All right, so let's talk about how to get it set up. The first thing you wanna do is click on the link in the description right down below to get the source code from GitHub. You're gonna wanna bring that into your IDE, or I guess into Claude Code if you're gonna go that way.
First thing you wanna do is go and set up your environment variables. So after you've done your install, you're gonna wanna do a convex init and a convex dev to initialize those Convex deployment variables. You're also gonna need a Google API key with access to the Places API as well as the Maps JavaScript API, and you're gonna need an Anthropic API key. We're gonna use Haiku to keep the price lower.

This is a TanStack Start application; it's what drives the UI. But where you actually need those keys is up on Convex, because it's Convex that's actually going to be running all of the agent stuff. So let's go take a look at the Convex configuration.

Now, Convex is open source, and you can run it locally, but I'm using the Convex cloud hosting. As we can see, we've got some data in here. We have the thread that both of the people are talking on, we've got the shortlist of restaurants, including that Taste of Greek that was added, and we've got the votes, two votes for Taste of Greek. So I guess they're going to Taste of Greek tonight. Actually, I've never been. I should go. Sounds really good.

Then we've got our functions. These are the functions that can be invoked from either the client or the server and that run in the Convex context. So for example, getOrCreateThread would obviously get or create a thread. listAllMessages is what you subscribe to, to get the list of messages in the chat. And you'll see some of these functions are locked, like these places requests. Those can only be run by functions on the server, which is really cool, because that means the client can't run those functions. So if somebody hacks onto your client, they don't have access to all the different functions; you get to decide what they have access to.
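The setup steps above can be sketched roughly like this. This is a hedged sketch, not the repo's exact instructions: the environment variable names (ANTHROPIC_API_KEY, GOOGLE_API_KEY) are assumptions and may differ in the actual project, and the repo URL is omitted on purpose.

```shell
# Sketch of the setup flow, assuming npm and the Convex CLI.
npm install

# Initialize and start the Convex dev deployment; for a Vite-based app
# this writes the deployment variables into .env.local for you.
npx convex dev

# The LLM and Places calls happen on the Convex deployment, so the keys
# go there, not in the browser. Variable names here are assumptions:
npx convex env set ANTHROPIC_API_KEY <your-anthropic-key>
npx convex env set GOOGLE_API_KEY <your-google-key>
```

Setting the keys with `convex env set` (rather than in your local `.env`) is the key point: the browser never sees them, because only the Convex deployment talks to Anthropic and Google.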
And then finally, a really important piece of setup here is that you wanna set the environment variables for that Anthropic key and that Google API key on this Convex instance, since it's the Convex instance that's actually going to be making the calls to the LLM and to the Places API on Google.

All right, so let's walk through a scenario. Let's say over here I refresh the conversation and I say "hello". What's actually happening there? To help us visualize these flows a little better, I've created some draw.io diagrams. This one shows how we send a message, like what we just did. In this case, the browser is going to call the sendMessage function up on Convex. That's gonna be managed by the agent, which is gonna go and add an item to the message list. And then that's gonna get picked up by useUIMessages, which is the real-time connection.

That's the really nice thing here: the browser doesn't need to know that it has to go and get updated messages. It's automatically subscribed to those messages via useUIMessages. That means that when you have multiple people on multiple browsers, like we have here with Jacksone and Joe, it's automatic. All of those browsers are subscribed using useUIMessages, and they all update when those UI messages are updated. That's just part of the nature of the Convex real-time updating system.

So over in the code, we have useChatInput, and we bring in two things. We bring in useAction from convex/react so I can use the server
Segment 2 (05:00 - 10:00)
action, and then I give useAction the api.chat.sendMessage, and that gives me back a sendMessage action, which I then invoke with my message. That's it.

So now let's go take a look over on the Convex code. I look at chat, and we can see the corresponding sendMessage function. This is actually running up on the Convex side. The first thing it does is figure out whether that message has @ai in it, because it needs to know whether it's gonna run AI. In this case it doesn't, so all it does is call saveMessage, and that adds it to the list of messages.

And then if we go over to useChatMessages, we can see useUIMessages. It subscribes to api.chat.listAllMessages. You give it a thread ID, and there's only one thread in this case. And then you say how many items you want and whether you want it streaming. And of course, yes, we do want it streaming; we want the chunks to come in and we want to get that AI result in real time. And it's just that easy.

So let's talk about how AI fits into this. What happens when I hit @ai? So: how about Thai restaurants? We did a sendMessage, sent that up to the server, and then the AI returned with its own messages. How does all that work? What's happening here is that same sendMessage went up to Convex, the agent code then looked at that @ai, and then fired off a request to the LLM.

So let's take a look at it in code. I go back into the convex directory, into the chat file, and now we actually do mention AI. At this point we're gonna fire off a request to dinnerAgent with the prompt, the sender name, and the content that we just got. So what is dinnerAgent? Well, dinnerAgent is defined as a new agent that accesses the language model provided by Anthropic. And Anthropic in this case actually comes from the Vercel AI SDK. I know, hopefully we can get Convex up on TanStack AI soon. Currently you need to use the AI SDK, but other than that, it looks like a standard LLM interaction.
You've got your system prompt. You've got your tools defined with createTool. And then all the way down at the bottom here, we define our dinnerAgent with a name and some tools: searchRestaurants, getRestaurantDetails, add and remove from the shortlist, showOnMap, which is a client-side tool, and highlightShortlistItem, which is also a client-side tool.

So let's talk about tool calls next, because tool calls are the only way to really do anything cool with an LLM. Without tool calls, an LLM just knows what it knows. It's tool calls that allow it to go and actually effect change or make queries in the real world, like, for example, searching for Greek restaurants.

So how does this actually work? It's a kind of choreographed little dance that you have to do with the LLM, and in this case the AI SDK from Vercel is going to handle that tool-calling dance for us. The dance starts when the browser sends the message with the @ai. That goes to the agent. We then do the streamText, which adds that message to the messages array that it then sends to the LLM. It also sends the list of tools along with the list of messages. The LLM then recognizes, oh, I think I need to make a tool call in order to get the data. But it's not actually going to do the tool call itself. It adds a new specially formatted message to the messages array that says: hey, you should make this searchRestaurants tool call, put the result back in the message transcript, and send it back to me. At which point I'll look through it and format a markdown message that actually explains where I think would be a good place to go and eat.

So it starts over here in the dinnerAgent, where we have the searchRestaurants tool. You define it with createTool, and you give it a really good description.
That's critical. You need to tell the LLM when and why it should call this tool. Then you give it any arguments it needs to specify, in this case the query, and again, you should be really descriptive about what the LLM should send in for that argument. And then you've got your handler. The handler, honestly, in this case is really easy. All we're gonna do is use the Convex runAction that's on the context to call this searchNearbyInternal, and searchNearbyInternal is just gonna use the Google API to go and get the places and return that data, which is then gonna be JSON-packaged and sent back to the LLM in the message stream. On the diagram, that's kind of right about here. And then the LLM is gonna take that JSON payload that has the results in it and format the markdown, which is what we see here.

The other interesting interaction here is when you add an item to the shortlist. So let's say again I want the AI to add Thai Vintage to the shortlist, and now we can see that we've added it to the map and also to the shortlist. So how did that happen? Well, if we go back to the code, over in the dinnerAgent there is a tool for addToShortlist. That tool has a great description as well as arguments.
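Boiled down, a tool definition bundles three things: a description that tells the LLM when to call it, an argument spec, and a handler. Here's a dependency-free sketch of that shape. Note this is not the real createTool API from the Convex agent component (which takes a zod schema and a Convex context), and searchNearbyInternal below is a stub standing in for the Google Places call.

```typescript
// Shape of a tool definition: description, argument spec, handler.
type ToolDef<A, R> = {
  description: string; // tells the LLM when and why to call this tool
  argSpec: Record<keyof A & string, string>; // what each argument means
  handler: (args: A) => Promise<R>;
};

// Stub standing in for the Places-backed internal action.
async function searchNearbyInternal(query: string) {
  return [{ name: `${query} Kitchen`, address: "123 SE Division St" }];
}

const searchRestaurants: ToolDef<{ query: string }, string> = {
  description:
    "Search for restaurants near the user. Call this whenever the user " +
    "asks for food options by cuisine, restaurant name, or neighborhood.",
  argSpec: {
    query: "A cuisine, restaurant name, or area, e.g. 'Thai' or 'Greek'",
  },
  handler: async ({ query }) => {
    // In the real app this goes through ctx.runAction to a server-only
    // internal action; here we call the stub directly.
    const places = await searchNearbyInternal(query);
    // JSON-package the results for the LLM's message transcript.
    return JSON.stringify(places);
  },
};
```

The two description strings are doing the heavy lifting: they are the only thing the LLM sees when deciding whether and how to call the tool, which is why the video stresses being really descriptive in both.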
Segment 3 (10:00 - 11:00)
And it's gonna run a mutation, the addInternal mutation on shortlist, to go and add that restaurant to the shortlist. Now, the cool thing here is that an internal mutation can only be run on the server, which is where this code is running. So that's a great way to secure the API surface of your application.

Let's go take a look at addInternal. So addInternal simply does a db.insert, just like you would in any normal database. This just happens to be a real-time database. And then over here in our components, under shortlist, you'll see the shortlist panel. That's this panel right down here, and it's got a useQuery that simply subscribes to that shortlist, plus mutations for voting and for removing. That Convex useQuery on the shortlist list function is actually a subscription, so whenever the shortlist changes, it updates automatically. So that's that real-time database system in action.

All right, well, I hope you give this a try. Download the code for yourself, throw it at your Claude Code, and get it to build whatever kind of multi-user, AI-enabled chat you want. Let me know in the comments if you do that. In the meantime, of course, thank you so much to Convex for sponsoring this video. If you like this video, then hit that like button. If you really like the video, hit the subscribe button and click on that bell so you can be notified the next time a new Blue Collar Coder comes out.