So you wanna learn how to use TanStack's new AI library. Well, I'm a core contributor on that team, and I'm here to show you how to do it. Let's get right into it.

All right, so this is the TanStack AI landing page, and it tells us a lot about what TanStack AI is. Right here you get this graphic that shows you the value proposition of TanStack AI. The idea is that we're enabling AI applications. These are applications that leverage AI as part of a customer feature set, for example an AI-enabled chat, like the one you see over here in the chat panel. Now, in order to do that, you need something in your browser. That would be the TanStack AI client. We've got a vanilla version of that, so you can use it in plain JavaScript, and there's React, Solid, Vue, and Svelte, with more being added all the time. I think there's a Preact one in the works right now.

Then on the server side there's the TanStack AI library, which can connect to all kinds of different backends. When it comes to your AI, you can do something locally with Ollama, or you can use OpenAI, Anthropic, Gemini, Grok, OpenRouter, and there are even some community ones that have been added as well. So there are lots of different ways to connect to different service providers, but it normalizes that interface, so you can use the same piece of code to talk to different providers just by swapping out the provider. And speaking of swapping things out, you don't even have to use TypeScript. You can use PHP or Python, and they all speak the same chunk protocol between the server and the client, so you can actually use something like Python's FastAPI on the backend and React on the front end, if you want that combination instead of, say, TypeScript on both ends.

So enough talking. Let's try it out. In order to do that, I'm gonna go over to my terminal and use Create Start App at the latest version.
I'm gonna call this AI test app, and the important thing I'm gonna add is TanStack AI. I don't want any additional examples. Now, while that cooks, it's important to understand that TanStack AI is not specifically bound to TanStack Start. You can use it with Next.js, literally any JavaScript framework, not a problem. But in this case it's just nicer to use it with TanStack Start, and of course I actually maintain this particular example, so I know what it does.

All right, let's go bring that up in my Cursor. Now, in order to really get started, I need to set a key, unless I'm using something like Ollama. So I'm gonna go and set my Anthropic key. Now I've done that. It's also important to know that this particular example I'm gonna show you can use OpenAI, Gemini, I think Grok, and Ollama, so you can use almost any key in there. All right, let's bring it up and take a look.

Here's the starting page for our application. Looks really good. If I go over here to the menu, I can select chat, and from there I can say hello, and there we go, we're getting a good response from the AI, just like that. So cool. Let's go ahead and look first at the server side to see what's happening over there, and then we'll take a look at the client.

All right, so here's our project, all laid out. Under src, in the routes demo folder, there's the API AI chat route. We can take a look at this route file, and the really important part is that it is a server route. Inside of createFileRoute, we've got server handlers, and we're handling a POST request. The first thing we do is get the messages from that request, and then we figure out which provider to use. To do that, we just look at our environment: do we have an Anthropic API key? If so, we're going to use Anthropic. Then we use this adapterConfig lookup table to take that provider and turn it into an actual adapter.
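The provider-selection idea just described can be sketched roughly like this. To be clear, this is a conceptual sketch, not the example's actual code: the environment-variable names and the string stand-ins for the real adapters (anthropicText and friends) are assumptions for illustration.

```typescript
// Conceptual sketch: pick a provider by checking which API key is set,
// then map it to an adapter via a lookup table. The string values here
// stand in for the real text adapters (anthropicText, openaiText, ...).
type Provider = "anthropic" | "openai" | "gemini" | "ollama";

const adapterConfig: Record<Provider, string> = {
  anthropic: "anthropicText",
  openai: "openaiText",
  gemini: "geminiText",
  ollama: "ollamaText", // local fallback, no API key required
};

// Check which API key is present in the environment; fall back to Ollama.
function pickProvider(env: Record<string, string | undefined>): Provider {
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.GEMINI_API_KEY) return "gemini";
  return "ollama";
}

const adapter = adapterConfig[pickProvider({ ANTHROPIC_API_KEY: "sk-demo" })];
```

The nice part of the lookup-table shape is that swapping providers is a one-line change, which is exactly the portability the normalized adapter interface is going for.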
So what is an adapter? Well, let's go take a look at the imports. Every one of these providers gives us a text export. In the case of Anthropic it's anthropicText; for OpenAI it's openaiText, and so on and so forth. All these vendors have different capabilities depending on their models, and in this case we want text completion, because that's how chat works. So we go get that text adapter and pass it to the chat function that we get from TanStack AI.

Now, in your case you're probably just gonna want a single provider, so let's simplify it a little bit. Let's say we wanna stick with Anthropic. Instead of all of this, I'm just gonna take the anthropicText import and drop it in as the adapter. And now you can see what is basically your normal API. You've got your chat function (that's agentic chat, which we'll get to in just a second). It takes an adapter, that anthropicText adapter. It also takes the system prompt we have up at the top, which gives the AI its instructions. It takes the list of messages, which is the current conversation, as well as an abortController, so if somebody aborts the request halfway through, it'll automatically stop the connection to the LLM, which is really nice.

That gives us back a stream of tokens. Currently those are our own internal tokens, but we're actually going to be moving to the AG-UI token format. That's really cool, because then our server will be agnostic and put out a known, standardized format for the chunks coming back from an LLM, so you can use any client you want, or vice versa: you can use our client with any server that handles AG-UI. We're getting close to that.

Now that we've got that stream of chunks, we can convert it into any format we want. In this case I'm gonna use SSE, or server-sent events, so I'm gonna use the toServerSentEventsResponse function and give it that stream of tokens as well as the abortController. But of course you can move those chunks around any way you see fit.

Before we get into tools and the agent stuff, let's talk about what this looks like on the UI side of the house. We'll go to AI chat. The really important thing here is this useGuitarRecommendationChat. That's a wrapper for the useChat hook that we get from @tanstack/ai-react. Now, ai-react is really just a thin wrapper around ai-client, and it's ai-client that's doing all the work of turning the chunks coming back from that server-sent event response stream into messages and tool calls and all the stuff we want to deal with as application developers, since we don't wanna deal with raw chunks. The useChat hook takes options, and those are chat client options. In this case, you give it a connection corresponding to toServerSentEventsResponse.
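To make the server-sent-events side concrete, here's roughly what that conversion amounts to on the wire. This is a hand-rolled sketch of the idea, not TanStack AI's actual toServerSentEventsResponse implementation, and the chunk shape is illustrative:

```typescript
// Each chunk becomes one SSE event: a `data: <json>` line followed by
// a blank line. The chunk shape here is illustrative, not TanStack AI's
// internal format.
type Chunk = { type: "text-delta"; delta: string } | { type: "done" };

function chunkToSseEvent(chunk: Chunk): string {
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

// Stream the events, stopping early if the abort signal fires — the
// same behavior the abortController gives you in the real helper.
async function* toSse(
  chunks: AsyncIterable<Chunk>,
  signal?: AbortSignal,
): AsyncGenerator<string> {
  for await (const chunk of chunks) {
    if (signal?.aborted) return; // client went away; stop streaming
    yield chunkToSseEvent(chunk);
  }
}
```

On the server you'd wrap a generator like this in a Response with a `text/event-stream` content type; the real helper does that plumbing for you.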
There is fetchServerSentEvents: that's a fetch wrapper that takes that server-sent event stream and gives it back to the chat client, which then turns those chunks into messages and tool calls. You also give it a list of tools. It's really important to know that you want to send it both server tools and client tools if you wanna make sure those server tools actually get typed correctly in your chat messages. You see this exported ChatMessages type? Those are actually typed messages, so when it comes to tool calls, you'll be able to get the types coming out of those tool calls, and when you format those messages, you'll actually have the types.

So let's go take a look at that. In AI chat we can see that we have a Messages component, and those messages have the ChatMessages type, and in there is the recommendGuitar tool call. Looking down here where we format those messages, with each message we take a look at the parts inside of it. A message might have some text as well as a couple of tool calls that go along with it, maybe then some more text and some more tool calls. A message is effectively a single message to the LLM or response from the LLM, so it can have lots of parts. In this case, again, we have a tool call where that tool call is recommendGuitar, and we can check to see if there's output.

Actually, let's go see how this works. If we go back over here, we can see that this is a guitar store AI. So I'm gonna say, please recommend an acoustic guitar. And look at that: it's actually gone through the database of all of our guitars and recommended this Traveling Man guitar, which just happens to be an acoustic guitar. Good for you, Haiku, you did a good job on that one. So how'd that all work? Well, it works because of tool calls. Let's go back to our server side for a second.
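Before we jump back to the server, here's a rough sketch of the kind of folding ai-client does for you: turning an ordered stream of chunks into a message made of parts. The types below are simplified stand-ins for illustration, not the library's real ones.

```typescript
// Simplified stand-ins for the real message/part types.
type Part =
  | { kind: "text"; text: string }
  | { kind: "tool-call"; name: string; input: unknown };

type Message = { role: "assistant"; parts: Part[] };

type Chunk =
  | { type: "text-delta"; delta: string }
  | { type: "tool-call"; name: string; input: unknown };

// Fold a stream of chunks into one assistant message: consecutive text
// deltas merge into a single text part; each tool call gets its own part.
function reduceChunks(chunks: Chunk[]): Message {
  const parts: Part[] = [];
  for (const chunk of chunks) {
    if (chunk.type === "text-delta") {
      const last = parts[parts.length - 1];
      if (last && last.kind === "text") last.text += chunk.delta;
      else parts.push({ kind: "text", text: chunk.delta });
    } else {
      parts.push({ kind: "tool-call", name: chunk.name, input: chunk.input });
    }
  }
  return { role: "assistant", parts };
}
```

This is why a single message can hold text, then a tool call, then more text: it's just an ordered list of parts, and the UI maps over those parts when rendering.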
And we see that inside of chat we're giving it two tools. One is getGuitars; it's a server tool. getGuitars is an instantiation of the getGuitars tool definition with the server function, so we're saying this tool executes on the server, and it just returns a list of guitars. It can be an async function; in this case it's a synchronous function, but you get the point. And that definition is actually agnostic to whether it runs on the server or on the client. We can see right up here that the definition of getGuitars has a name, getGuitars, and a description. You should be really informative in your descriptions here. Then there's the input schema: that's the structure of the parameters you want the LLM to call your tool with. In this case there are none, but over on the recommendGuitar tool there's an id, which is either a string or a number, and you give that an informative description as well. It's really important to put those descriptions in there: the LLM really wants to know what this tool is, why it should call it, and what it should call it with. There's also the output schema. For LLMs that support it, we tell them the particular schema they're going to get back from the tool call. So that's how a server tool works.

Let's talk about a client tool. Taking a look back at those hooks, we can see the recommendGuitar client tool, where we again instantiate the recommendGuitar tool definition, but this time using the client function. It's going to take an id, and then we just return that number as an id. But you could do more in here; for example, you could add an add-to-cart tool. So let's put in that return, clean it up a little bit, and then put an alert in here so we can see when it gets called. Let's go back over to our app and ask it for an electric guitar this time. Now we get the alert, so it's actually calling that client tool, and then we get the display when we format the messages for React. So that's tool calls, both on the server and the client.

And it's important to understand that LLMs plus tool calls basically equals agents. You have an LLM; it can answer questions about what it already knows, but if you wanna actually have it take action in the real world, like recommending guitars or going and looking at your product inventory, you need to give it tools for that. That's how you give it agency and turn it into an agent.

So what happens when that agent spins outta control? Well, to keep that from happening, we do allow for stop conditions. If you look over here at the agent loop strategy, we set the maximum iterations. That's the number of times it can round-trip: asking the LLM for something, the LLM asking it to run some tool calls, then getting the results and sending them back to the LLM. It can only do that a maximum of five times here. If you've got a more complex workflow, you might want to set that number higher.

All right, well, I hope you enjoyed this quick look at TanStack AI. Check it out for yourself; I think you'll like it. In the meantime, of course, if you like this video, hit the like button. If you really like the video, hit the subscribe button and click on the bell, and you'll be notified the next time a new Blue Collar Coder comes out.
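P.S. If you want the agent idea from this walkthrough in one self-contained sketch, here it is: a tool definition in the shape described above (name, informative description, input and output schemas, an execute function) plus a loop with a maxIterations stop condition. Every name here is illustrative; this is the concept, not TanStack AI's actual API.

```typescript
// A tool definition in the spirit of getGuitars. The description is what
// the LLM reads to decide when to call the tool, so be informative.
type Guitar = { id: number; name: string; type: "acoustic" | "electric" };

const guitars: Guitar[] = [
  { id: 1, name: "Traveling Man", type: "acoustic" },
  { id: 2, name: "City Shredder", type: "electric" },
];

const getGuitarsTool = {
  name: "getGuitars",
  description: "Returns every guitar in the store inventory.",
  inputSchema: {}, // this tool takes no parameters
  outputSchema: { guitars: "Guitar[]" }, // informal stand-in for a real schema
  execute: (): Guitar[] => guitars, // sync here, but could be async
};

// One round trip: either the "LLM" answers, or it asks for tool calls.
type Step = { answer: string } | { toolCalls: string[] };

// The agent loop: round-trip at most maxIterations times between the LLM
// and tool execution, so a runaway agent can't loop forever.
function runAgentLoop(
  llm: (toolResults: unknown[]) => Step,
  maxIterations: number,
): string {
  let toolResults: unknown[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const step = llm(toolResults);
    if ("answer" in step) return step.answer; // the LLM is done
    // Execute the requested tools and feed the results back in.
    toolResults = step.toolCalls.map((name) =>
      name === getGuitarsTool.name ? getGuitarsTool.execute() : null,
    );
  }
  return "stopped: maxIterations reached";
}
```

A fake LLM that asks for the tool once and then answers finishes in two iterations; one that keeps asking for tools gets cut off at the limit, which is exactly the safety net the stop condition provides.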