You may have thought that TanStack AI was only good for making chatbots. Well, as it turns out, you can use TanStack AI for generating all kinds of things, including images, video, text, speech, translation, structured output, and more. What you're looking at right here is our testing panel. This is what the TanStack AI team uses to test out all of our libraries. I'm going to go over here and show you how to generate an image. To do that, I'm going to use OpenRouter, and I'm going to ask it to generate an image of a cute baby sea otter wearing a beret and glasses, sitting at a small cafe table, sipping a cappuccino. Let's see how it does. All right, I've got to say that's pretty good. OpenRouter did a great job with that one.

Now, you might be wondering, why am I using OpenRouter? Well, for one, TanStack AI now supports OpenRouter. So, yay. And OpenRouter is supporting TanStack, which is fantastic. So thank you so much to OpenRouter for your financial support of TanStack and the TanStack AI team. We really appreciate it.

Now let me show you in the code just how easy it is to generate images with TanStack AI. This is the AI monorepo. You can, of course, clone it and check it out for yourself. The testing panel is where you want to go: load up your .env with all of your keys and give it a try. If there's one set of example code you want to point your agent at, it's this code right here, because the testing panel demonstrates essentially all of the capabilities of TanStack AI in practical terms. That's why we're looking at api.image.ts here. To generate an image, we pull an image generation function from one of our adapter libraries; in this case, openRouterImage from the OpenRouter adapter.
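To make the shape of this concrete, here is a minimal sketch of the request described above. The names openRouterImage and the general flow come from the walkthrough, but the factory signature, option field names, and size string are assumptions; check the TanStack AI docs for the real API. To keep the snippet self-contained and runnable, the adapter factory is a stand-in defined inline rather than a real import:

```typescript
// Hypothetical stand-in types. The real ones come from TanStack AI's
// adapter packages; the names follow the walkthrough (openRouterImage).
type ImageAdapter = { name: string }
type ImageOptions = { adapter: ImageAdapter }

// Stand-in for the adapter factory exported by the OpenRouter adapter package.
const openRouterImage = (): ImageAdapter => ({ name: "openrouter" })

// Request shape described in the walkthrough: adapter options, a prompt,
// the number of images, and the size of each image.
const request = {
  options: { adapter: openRouterImage() } satisfies ImageOptions,
  prompt:
    "A cute baby sea otter wearing a beret and glasses, " +
    "sitting at a small cafe table, sipping a cappuccino",
  numberOfImages: 1,
  size: "1024x1024",
}

console.log(request.options.adapter.name)
```

In the real code, this request object would be handed to the generateImage function that TanStack AI gives back, which returns the set of generated images.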
Because this example can use multiple different adapters to create images, we have an adapter config map that maps an adapter key (in this case, the string "openrouter") to the image options for that particular adapter. All we're doing is adding an adapter to the image options and specifying, in this case, openRouterImage, openaiImage, or geminiImage. Then, to actually do the generation, we call the generateImage function that we get back from TanStack AI. We give it the options we just defined, as well as the prompt (in this case, that otter prompt), the number of images, and the size of each image. That's all it takes.

The response is a set of images, so let's take a look at what we get back. When we get the results of the fetch, we call setImages with those images. That's an array of generated images. Let's go to the definition: each one has a URL, base64-encoded JSON, and the revised prompt. And how do we use that on the page? We just iterate through those images and add image tags whose source comes from getImageSrc, which chooses between the URL, if it exists, and a base64-encoded data URL, if that's how the AI sent back the result.

Of course, both TanStack AI and OpenRouter can do more than just image generation. You can work with all kinds of modalities in TanStack AI. As I mentioned before, you can do image, video, text-to-speech, speech-to-text, and chat. We have it all for you in our libraries, and OpenRouter has it all in their models as well.
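The getImageSrc helper mentioned above can be sketched as follows. The field names on the generated-image type (url, b64Json, revisedPrompt) are assumptions based on the walkthrough's description of "a URL, base64-encoded JSON, and the revised prompt"; the real interface lives in the TanStack AI types:

```typescript
// Assumed shape of one generated image, per the walkthrough's description.
interface GeneratedImage {
  url?: string
  b64Json?: string
  revisedPrompt?: string
}

// Pick a usable <img> src: prefer a hosted URL if the provider returned one;
// otherwise wrap the raw base64 payload in a data URL.
function getImageSrc(image: GeneratedImage): string | undefined {
  if (image.url) return image.url
  if (image.b64Json) return `data:image/png;base64,${image.b64Json}`
  return undefined
}

console.log(getImageSrc({ url: "https://example.com/otter.png" }))
// → "https://example.com/otter.png"
console.log(getImageSrc({ b64Json: "iVBORw0KGgo=" }))
// → "data:image/png;base64,iVBORw0KGgo="
```

On the page, the component then simply maps over the array and renders an image tag per result, along the lines of `images.map((image) => <img src={getImageSrc(image)} />)`.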