Watch the recording of our webinar introducing the StreamNative Remote MCP Server, now in Public Preview as part of StreamNative Cloud’s Agentic AI capabilities.
The Remote MCP Server provides a secure, managed Model Context Protocol (MCP) endpoint for StreamNative clusters, simplifying the integration of AI agents, copilots, and automation frameworks with real-time data streaming platforms. It eliminates the need for local MCP deployments, making real-time infrastructure AI-addressable by default.
What you'll learn:
✅ The new managed MCP endpoint architecture.
✅ Live Demo: Diagnosing system latency with AI.
✅ Security boundaries & built-in authentication.
Speakers:
David Kjerrumgaard - Principal Sales Engineer, StreamNative
Kundan Vyas - Staff Product Manager, StreamNative
Slides:
https://drive.google.com/file/d/1SxaJTPzJZwQHgfk7RvqstgYhv5mbfeoI/view?usp=sharing
Table of contents (11 segments)
Segment 1 (00:00 - 05:00)
Thought I'd turned it off. — Wait, I see. Yeah, we are live right now, David. People can see your beautiful face. — That's awesome. I don't know why it's notifying me, though. I'm going to have to sign out or something. Set myself to away, pause notifications. — I'll let some more folks join and then maybe I can bring up that poll. — Okay, let's see, some folks are joining, which is great. — Bring up the slides first and show the intro slides so people know they're in the right spot. — Oh, I'm not sharing my desktop. Just because you go into slideshow mode doesn't mean you're sharing. Okay. Thank you, David. — No problem. — That's why it's always good to have a partner on the webcast. — Yeah. — Awesome. Okay, I think we'll just give it another few seconds. So good morning, everyone. Good morning, good afternoon, wherever you're joining from. We'll give a few more seconds for folks to join. Actually, while people are joining, if you don't mind, we're curious about a couple of things: what you're hearing about AI in your day-to-day life, and what sort of use cases you're coming across. So I think we'll share a poll with you; it would be great if you could participate in it. Let me bring up the first poll. Give me one second. Yes, we'll do the introductions in a moment as well. One moment. Yes, it's being recorded as well. Okay, I'm going to launch the first poll. Hope folks can see it. I can see the responses. I know we are using AI a lot for documentation as well; it makes our lives easy. Troubleshooting, I see people selecting that option as well, so that's amazing. Cool. I'll keep the poll open for just another few seconds, and then we can end it and get started. Okay, this is great. Thank you so much, folks. I know it's 9:03, so we should get started. Let me just end this poll. Let me see if there are any last-minute responses. Okay, there you go. Let me just end the poll. Thank you. Yes.
So, just to comment on this: code generation is a very common observation. I don't know about code generation; that's where I'd get more input from engineering. But documentation, for sure, and then of course building out tools. All right, I'm going to end this poll. Thank you. I'm going to come back to the slides. Okay, let's get started. Good morning again, good afternoon, everyone. Welcome to today's session. I'm super excited about today's topic. We want to talk about the MCP server and StreamNative's agentic AI vision: what kinds of things we're building, and we want to give you a glimpse into what's coming this year. I'm excited to partner with David today. David, do you want to introduce yourself? — Thanks. My name is David Kjerrumgaard. I'm a developer advocate and principal sales engineer here at StreamNative, a committer on the Apache Pulsar project, and author of Pulsar in Action. I'm going to run the demo today to show the MCP server solving some real problems, so it's a pleasure to talk to everyone here. — Perfect, sounds great. Thank you, David. I'm Kundan; I'm part of the product team here at StreamNative. So let's start. We have a few things to cover today. We want to set the context by talking a little bit about the challenges the industry is facing and how standardization is happening around how people use agents and enable access for those agents. We want to talk about our MCP server rollout. David's going to show a cool demo, and then we want to talk about the roadmap of what we're building, and also a little bit about the listing of our MCP server on the Databricks Marketplace. So let's get started. So, just to talk a little bit
Segment 2 (05:00 - 10:00)
about setting the context. If you think back about five or six years, we pretty much couldn't do any of the things we can today just by entering a prompt: quickly generating a document, writing code, or producing documentation, as I see you all using AI tools to be more productive. But as this journey unfolded over the years, enterprises faced a lot of problems. Data was fragmented across different parts of the organization, and there was no standardized way for agents to talk to these backend sources, no single protocol enabling access to all of that. Also, governance was top of mind for everyone: who's really doing what, and who has access to what, particularly when we're talking about agents, not humans. With those challenges, it took longer for enterprises to embrace and adopt AI. We've seen these challenges ourselves, and in the world of data streaming the same thing applies: there was no standard way for agents to learn about your infrastructure, whether it's brokers, your backend, or any of your microservices. So it was definitely a period of struggle for most enterprises. But fast-forward to the current state, and we have MCP as, essentially, an industry standard. There are many ways people have enabled access and defined standard ways for these tools to interact with each other, but MCP is a very popular one, and we can clearly see the adoption.
So the good thing about MCP, for the folks who haven't used it much before, is that it standardizes how all these interactions happen between your backend resources and your agents. There's a standardized way the communication happens between the client, your agents, and all the resources you've enabled for access. In the context of StreamNative Cloud, those are the infrastructure resources: clusters, tenants, namespaces, or topics. It covers learning about what those resources are and interacting with them, and also defining role-based access: MCP defines a standard in which you can clearly control who has access and what kind of permissions you're granting, read-only or read-write, and so on. So we're fortunate to have MCP as a standard right now; we're all able to adopt it, and a lot of what we're going to talk about is based on how we support MCP, and of course we'll talk about our broader AI story as well. Now, these three terms are always important when you're talking about architecture and some of these new concepts, and David and I will be using them heavily. For the folks who are not familiar with MCP servers, there are the concepts of resources, prompts, and tools. Resources, bringing it to the StreamNative Cloud world, are, in the context of a data platform, your clusters, your topics, your tenants, your namespaces: the actual backend that you've predominantly referred to via the UI, the API, the command line, or Terraform. Now, bringing that to this world of MCP-compatible clients, it's the same resources. Then prompts are the way you use human-style language
to interact with those resources. Instead of being very specific about exactly how the REST API calls a resource, here it's more lenient; that's the benefit of AI, it makes interacting with these resources more forgiving. Tools are the particular APIs or backend invocations that your prompts end up calling when you're actually trying to access resources. So you'll hear these terms a lot today. Now, let's talk about StreamNative's MCP server. We announced this a few days back, and it's available within StreamNative Cloud today as a public preview. You can go and enable it: we have a new section for enabling and viewing all the preview features, and in a self-service fashion you can enable this capability in our cloud. This gives you an out-of-the-box capability to interact with StreamNative Cloud resources at a cluster level. So right now, essentially, when you go to StreamNative Cloud and enable this in preview,
Segment 3 (10:00 - 15:00)
you'll have a section for MCP which will give you an endpoint, and I think David is going to show you a lot of this. That endpoint is your MCP endpoint, and that's where you configure everything: what sort of resources and tools you're enabling MCP-compatible clients to access, and whether it's going to be read-only mode or read-write mode. Those are a few things you can configure, and then you can use it with any MCP-compatible client. What's great about this is that unlike the local MCP server we launched before, which is open source and which you download, set up, and run on your desktop, this one works out of the box: you don't have to set up anything. You just create a cluster and enable it, per cluster right now, and then you'll be able to learn about and interact with all the resources in the cluster. Over time, we're going to explore making this work at an organization level within StreamNative Cloud, so that your prompts can span different clusters, almost like a query across all the clusters. But right now, this is where we are: an out-of-the-box capability in StreamNative Cloud, available in public preview. Please go try it out, and we'll share more details about the trial and how to get your hands on it at the end. With that, we're heading to the most exciting part of the webcast today. I'm going to hand it over to David to walk us through this cool capability. I'm going to stop sharing, David. — All right, sounds good. Let me go ahead and share my desktop here. Make sure everybody can see my slides. Got confirmation, that's great. — All right. Yeah, thanks everyone for joining.
I want to start by asking: how many of you have been paged by the business on a Friday evening? Maybe you're out enjoying your weekend. You open up your dashboards and see a function, or the Pulsar application you're consuming with, sitting at zero. Nothing's running. There are multiple restarts. Clearly it's broken. But when you go to find out why, the exception that explains it is buried three pages deep in a wall of logs. There's one exception; that's it. The dashboard tells you something is wrong, but it doesn't tell you why. So that's the scenario I want to walk through today, a hypothetical scenario where I'll play two parts. The first one: you're a backend developer. Before you leave for a long weekend, you do a quick end-of-day check. Everything looks healthy, so you head out around 5:00. On your way home, at 5:45, a mobile app team, a completely different team that's a producer on that topic, pushes a change that they think is a routine schema update. They think it's a forward-compatible change, but it's actually a breaking change. By 5:46, orders have silently stopped flowing. By 6:00, it's been escalated: hey, what's going on? It's the dinner rush. A Slack message notification comes in on your phone from the team, asking you what's going wrong. You're like, "Hey, my code hasn't changed in three weeks. I don't know what's going on." So to diagnose this, we're going to give Claude two powerful tools. The first is the StreamNative MCP server that Kundan talked about. It exposes the full Pulsar admin API directly to Claude Code, so Claude can quickly query topic stats, inspect schemas, check function status, and manage namespace policies through natural language, without us having to write a single line of code.
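As a rough sketch of what "through natural language" resolves to on the wire: MCP is JSON-RPC 2.0 under the hood, so every tool invocation the agent makes is ultimately a small request like the one below. The endpoint URL, API key, and `get_topic_stats` tool name here are hypothetical placeholders, not the server's actual catalog.

```python
import json

# Hypothetical placeholders: the real endpoint URL, API key, and tool names
# come from the StreamNative Cloud console and the server's own tool listing.
MCP_ENDPOINT = "https://example.streamnative.cloud/mcp"
API_KEY = "sn-api-key-placeholder"

def build_tool_call(request_id: int, tool: str, arguments: dict) -> dict:
    """Build one MCP 'tools/call' request; MCP is JSON-RPC 2.0 on the wire."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# A remote (HTTP) MCP server is addressed with ordinary bearer-token headers.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Ask a hypothetical topic-stats tool about the demo topic.
request = build_tool_call(
    1, "get_topic_stats",
    {"topic": "persistent://public/default/food-orders"},
)
print(json.dumps(request, indent=2))
```

The agent composes requests like this for you; the point of the managed endpoint is that you never write them by hand.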
And we don't have to memorize the syntax for each admin command and how to change things; it does it all for you. The second is the snctl tool, the StreamNative command-line interface, which we use for one or two operations that fall outside of the MCP service, specifically uploading compiled packages to the registry. That's something MCP does not do yet. Between these two tools, Claude can go from "I have no idea what's wrong" to root cause confirmed, and I'll even show you it solving the problem, in just a few prompts. Let's get to it without further ado. So here's the pipeline you've built, the system that's been running for months. It's a food ordering backend, a pretty simple data flow. On the left you have a food order source connector that generates food orders every 3 seconds. This simulates real food orders coming in from different sales channels: online, mobile apps, self-service kiosks inside the restaurant itself, DoorDash, Uber Eats, etc., piping in all this information. It's a source where data comes in and drops into a food-orders topic. Messages are Avro-serialized using a specific schema,
Segment 4 (15:00 - 20:00)
which I'll show you in a minute. Shown down here, this V1 food order is a very flat structure. There's an order ID, a customer name, an item, a quantity and price, and a corresponding timestamp for when it got entered into the system. And then on the right is an order processor function. This is what you, as the backend developer, are responsible for. It reads those messages off the food-orders topic and does some calculations: totals the cost, applies some tax, maybe validates the credit card, whatever sort of processing you need to do with these orders as they come in. And it writes a confirmation string to the order-confirmations topic to say, hey, this is a valid order, go ahead and process it, or maybe it gets rejected. So it acts as a gate, a data-validation step. Again, very simple. The green bar at the bottom shows you what you normally see: roughly one message every 3 seconds coming in and out. There's no backlog, there are no exceptions, everything is healthy and running. Also notice in the top corner that the schema compatibility is set to FULL and auto-update is off. You set these yourself a few weeks ago to protect against exactly the kind of thing that's about to happen. Auto-update off means consumer subscriptions are pinned to the schema version they registered with, and that choice is about to become very, very relevant very quickly. So let's go ahead; I'm going to switch screens. Say it's the end of the day. You're the developer. It's 5 p.m. and you're checking out for a long weekend. The pipeline's been running for 3 weeks, but before you head out, you want to do a quick sanity check. So you go look at everything. This is the source coming in.
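The flat V1 record described above can be sketched in Avro-style form. The field names below are illustrative guesses from the description (order ID, customer name, item, quantity, price, timestamp); the demo's real Avro schema may use different names.

```python
# Illustrative Avro-style schema for the V1 food order; field names are
# assumptions, not the demo's actual schema definition.
FOOD_ORDER_V1 = {
    "type": "record",
    "name": "FoodOrder",
    "fields": [
        {"name": "orderId",        "type": "string"},
        {"name": "customerName",   "type": "string"},
        {"name": "item",           "type": "string"},
        {"name": "quantity",       "type": "int"},
        {"name": "price",          "type": "double"},
        {"name": "orderTimestamp", "type": "long"},
    ],
}

field_names = [f["name"] for f in FOOD_ORDER_V1["fields"]]
print(field_names)
```

The important property is the flatness: one item and one price per record, which is exactly what the V2 change later abandons.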
Messages are coming in just great. You go check your function; this is your food order function here. Everything looks clean. You've got one instance running out of one, the topic is being consumed, and the messages processed equal the messages received. There's no backlog. Every message that's coming in is being processed cleanly. There are no exceptions, no restarts, nothing. Everything looks very good. Quickly, you peek in the logs just to keep an eye on things. Everything coming in looks fine. You see something like the following: food orders received, orders getting processed. So everything is good, just info messages. Everything's working great. Nothing but successfully processed entries: every order that comes in gets processed, and a confirmation is written to the topic. You see order confirmations going out, messages coming in and out, no problem. There are no lines showing errors or warnings or any issues in the logs whatsoever. So it's great. You're ready to head out for the night. Everything is healthy. It's a sanity check, and you're good; your job is done. Now, switching roles for a minute. Let me see if I can get back control of this screen. One second. There we go. Now switching teams. There's another team, a mobile app team, that missed their deadline. They're rushing to get a change in real quick. It's around 5:45 in the afternoon now, and they're anxious to get out the door as well. It's a Friday; they want to get home, they've got lives to get back to. They've been working on a new feature, adding combo meal support, where a single order can include multiple items.
They've gone through the code review process and the build is green. It's time for them to do what they think is a routine deployment. So, just to quickly show you: they had this food order source. This is what they originally had here. You can see it's producing food order items, generating some random values for the items and prices, and it's just generating food orders over and over again and publishing them out to the topic. If you look at that schema type, as we talked about before, there's nothing new here beyond what I showed you: order ID, customer name, quantity, price, and order timestamp, and they're generating these events. Our code, the order processor function, is accepting these food orders and doing some confirmations on them: printing out the details, summing up the total cost, recording some metrics. Again, you could do more complex things here: credit card validation, customer loyalty point add-ons, whatever you wanted. But just to show you, this depends on the food order type. Now, what the mobile team is going to do is say, hey, we have to change to this new schema type. So they're going to introduce a breaking change. It would have been backward compatible in normal circumstances, and they're operating under that assumption, but in our case we have FULL
Segment 5 (20:00 - 25:00)
compatibility, which is why it breaks. Nonetheless, they've changed some field names. They've removed item and replaced it with items as a list. They've removed the price field and put it inside items as well. They've changed customer name to customer. So they've modified this quite a bit. They've also added a special-instructions field, which is a reasonable schema change for this type of upgrade. They've added a new order-item type with an item name, quantity, and customizations inside it. So they're going to update this. They've updated the UI to take all these fields, and now they want to publish it. You can see their new source is going to produce messages of this type. So they're happy with this; everything's ready to go, and they're going to deploy it here at 5:45. Hopefully I kept this in here. So they go ahead and deploy it. This is just a CLI script that does a release, upgrading everything, and they force it to get deployed, walking through that scenario. Everything is fine, right? Go ahead and deploy this thing, and then I'll get to the MCP stuff once this works. So this is going to compile everything and upload everything. Again, this is all through a script, so nothing MCP-ish here yet. Deploying it: that's great. Now I'm going to go into Claude, and this is the first thing we do. I ask it: "Okay, I've deployed it. Can you check the status of this food order source connector I just deployed? Show me the first five messages; I want to make sure it's following the new schema I've changed here as well." You can see it went and checked the number of instances running. It says, "Hey, it's running.
Everything is working. The status is ready here." Yep. Okay, no problem; it recovers here in a second. Hopefully it's using the MCP server, just trying to get some information. Redo the lookup. There we go. So it recovered; it was going to the wrong schema at first, but here's what it found. The source connector, via the MCP server itself, the MCP server from StreamNative, can now validate for the mobile team that their application is running. The connector's running but shows nothing received yet. So we can verify everything is running. Let's go ahead and, there we go. Nothing like live demos; everything fails. So we have the MCP server diagnosing on the fly. "Can you check this again?" Everything got deployed and is running. Let's see if it's finally generating some messages; I want to see some messages coming in from here. There we go. Okay, so it started to process some messages now. It's sending everything out. There's starting to be some backlog on the messages here. So it's noticed that the source is actually producing. It's already identified that there are no active consumers and that the connector has been restarted. So, good to go. It says, hey, everything's running, I'm producing messages. Check the messages: "Show me the messages again." So from the mobile team's perspective, it's a successful deployment. They've deployed the application, it's running, it's generating messages. Hopefully it's going to generate some messages here in a moment. This is why you never do a live demo: no matter how many times you test it, it always fails right when you're on screen. So there we go. It's thinking about it some more. It's going to consume everything. It's going to fall back to snctl because the MCP server is having some issues connecting to the broker, but that's okay.
That's a good fallback. Now we're going to peek at some messages and see what's going on. There we go. Now it's starting a consumer. It's getting some messages coming in, and it should deserialize those from Avro for you automatically, and then you should be good to go. All right, while this thing is deploying, let's get some messages here. There we go. So you can see it's now using the new FoodOrder V2. It's producing some messages; people are ordering different things,
Segment 6 (25:00 - 30:00)
adding special instructions and multiple items, so it's aligned with that. So they go home; everything is fine, right? There's no problem there. The problem is, what they've not realized is that this V2 schema they pushed is a breaking change relative to V1. When the first V2 messages arrive at the consumer and it tries to deserialize them, it's going to fail with a serialization exception. The function's going to crash. It's going to try to restart, and it's going to crash again, entering a crash-loop backoff death spiral. It's not going to be able to restart, for multiple reasons. So what you're going to see in real life is: you're sitting there at the soccer field, or on your way home, and you get a Slack notification from the operations team saying, "Hey, orders have completely stopped. What's going on with your function? We're in a dinner rush. Customers are complaining. We're not able to fulfill orders. Our business has come to a complete halt." What we're going to walk you through is that, because Claude has a desktop application, you could theoretically diagnose this from your kid's soccer game, or while driving home, or sitting on the subway. You type in a prompt like: what's going on with my order processor? The backlog's growing, what's happening? And it will use the MCP server you plugged in and basically say, hey, this is the problem. And you can get into "how do I fix it?" from that same conversation. So it's all one very clean process, and we'll go through it. Let's go over to the console now. You'd ask: what's going on with my order processor function? Now you've diagnosed the problem. You could ask it here; you could theoretically do this on your phone app as well.
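To see why the V1 consumer can't read V2 records, compare the two field sets. The names below are assumptions reconstructed from the description (customer name renamed to customer; item, price, and quantity folded into an items list; special instructions added); this is a toy illustration of the rule, not Avro's actual resolution algorithm.

```python
# Illustrative field sets for the two schema versions; names are assumptions
# based on the description in the demo, not the actual schemas.
V1_FIELDS = {"orderId", "customerName", "item", "quantity", "price",
             "orderTimestamp"}
V2_FIELDS = {"orderId", "customer", "items", "specialInstructions",
             "orderTimestamp"}

def removed_fields(old: set, new: set) -> set:
    """Fields a V1 reader expects but V2 writers no longer send. Removing or
    renaming a required field without a default breaks old readers, which is
    exactly the kind of change FULL compatibility is meant to reject."""
    return old - new

broken = removed_fields(V1_FIELDS, V2_FIELDS)
print(sorted(broken))  # ['customerName', 'item', 'price', 'quantity']
```

A rename looks to the reader like a removal plus an unknown addition, so `customerName` to `customer` is just as breaking as deleting `price` outright.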
Again, I can't demo a phone app interface on a webinar, but it's Claude, so you can ask it the same things. It's basically telling you what's going on. It's crash looping, as I stated. Everything is crashing; it's restarted multiple times. It's even diagnosed successfully that this is a schema mismatch, and that the subscription is pinned to the old schema. The backlog is growing, and this is why everything is going down. So it tells you all of that. This failure note is generally hard to understand, so the next thing you tell Claude is: I want to fix this function. It's already said, hey, the function is failing on the schema; go ahead and make the fix. So it's going to read your code; because I'm in the directory with the codebase, it can find the new version. It says, "Hey, I found this V2 version here. Now I have everything I need." And Claude is going to make the recommended changes for you, everything you would basically do yourself. It says: I've updated to the new schema. I've updated the message signature for the API. I've made changes to iterate through the items instead of taking the single item. I've updated my output status a little bit as well. I've changed the call from getting the customer name to getting the customer, and removed those single-item references as well. So it basically reads your code and says, I know what to do. Do you want me to make these changes to the function? You say, yep, go ahead and make those for me if you don't mind, and it does. The next step is to deploy it. So it's told you what it's going to do, and you say, yep, could you please upload this function?
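The shape of the edit Claude describes can be sketched as follows. The real function is Java; this Python sketch only mirrors the change (iterate over an items list instead of reading a single item/price pair), and every field name is an assumption.

```python
# Language-agnostic sketch of the V1-to-V2 processing change; the demo's
# actual function is Java, and these field names are assumptions.

def total_v1(order: dict) -> float:
    """Old logic: exactly one item per order, price at the top level."""
    return order["price"] * order["quantity"]

def total_v2(order: dict) -> float:
    """Updated logic: sum price * quantity over the items list."""
    return sum(i["price"] * i["quantity"] for i in order["items"])

v2_order = {
    "orderId": "o-1",
    "customer": "Ada",
    "items": [
        {"itemName": "burger", "quantity": 2, "price": 5.0},
        {"itemName": "fries",  "quantity": 1, "price": 2.5},
    ],
}
print(total_v2(v2_order))  # 12.5
```

The mechanical nature of this change (rename one accessor, turn one multiplication into a sum) is why an agent with the codebase in context can propose it reliably.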
So this is where it has to use the snctl tool, but it does the build locally: Maven clean package, build the new artifacts first, and then, as it says, upload the new jar file with snctl and update the function via MCP, and that'll be it. So it's going to do all this for you automatically. Again, you could be standing on the subway with your laptop, getting all of this done while it handles the work for you. It quickly gives you status updates: yep, I built the artifact, no problem; I was able to upload it successfully; I'm going to re-enable schema auto-update so I can get around the locked schema; then update the function, all these sorts of things. Come on. That's all right, it's going to adjust itself here. It's going to update it and set auto-update to true, since we need that. There we go. And now it's deployed it, but it hasn't started running yet. It's giving you feedback on the function's created state. It's spinning up.
Segment 7 (30:00 - 35:00)
Patience. It's going to wait patiently for it to come up. There we go. And there it was. Now it's verified that the function is up and running again, and it tells you the function is healthy. It's already consumed all the backlog messages. There are no errors. The schema fix is working. So it's done. The backlog is drained and the pipeline is flowing again. So, using Claude and two tools, MCP and snctl, you were able to diagnose the problem, have the code fixed for you, and redeploy it, all in a very short amount of time. Fantastic. Go back here and look at this function again: you can see everything is running, it's processing again. You can see where it dropped off for a second, and now it's recovered. You can go back in the logs and see we're back to normal. There might be an error buried in here; there's one error message somewhere from when the schema change hit, but you can see how hard it would be to diagnose this in the UI. There it was. Of all these messages, this was the one error from the breaking change. This is the proverbial needle in the haystack, and the MCP server helps you find it very quickly. It doesn't repeat; it's a one-time error. So it's very clever, and it recovered, which is great. So you went from that scenario, where operations got alerted, everything had stopped, you were losing revenue (it's quantifiable), nothing had changed on your side, and you didn't know the root cause, to identifying the entire root cause with basically one prompt. That's how powerful the MCP server is, paired with Claude. It went and queried the function status. It got the topic stats and realized, hey, something is down. It saw the message backlog was climbing. It went to the schema registry to figure out the schema-incompatibility issue. It found the schema difference.
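The diagnostic chain just described (function status, then topic stats, then the schema registry) can be caricatured as a small decision function. The stat names and thresholds below are hypothetical, not the actual response shapes of the Pulsar admin API.

```python
# Toy sketch of the diagnostic chain: crashing function + growing backlog +
# a newer schema version than the one the consumer is pinned to implies a
# schema mismatch. Field names and thresholds are hypothetical.

def diagnose(function_status: dict, topic_stats: dict,
             schema_versions: list) -> str:
    crashing = function_status["numRestarts"] > 0
    backlog = topic_stats["msgBacklog"] > 0
    if crashing and backlog:
        if len(schema_versions) > 1:
            return "schema mismatch: consumer pinned to old schema, crash-looping"
        return "function crash-looping; backlog growing"
    return "healthy"

print(diagnose({"numRestarts": 7}, {"msgBacklog": 412}, ["v1", "v2"]))
```

The agent's value is that it gathers each of these signals with a separate MCP tool call and chains the reasoning for you, rather than you grepping dashboards for each one.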
It was a breaking change. It was able to successfully navigate the namespace policies to allow it to deploy a function, because you had to change that setting; if you tried to do it manually without changing that setting, it would fail and there would be errors. And then it was able to confirm the root cause very easily, and then it fixed it. So MCP identified the problem, made all the edits for you, built the code for you, redeployed it, and everything works, going from a backlog to everything running. All you had to use was the MCP server, snctl, and Claude Code, and you could skip all the grepping of logs, checking your dashboards, and calling the mobile app team to figure out who made a change and who broke what, and get everything up and working. So that's it: one prompt to diagnose, one session to fix. Thanks everyone; hopefully everybody enjoyed that, and I'm happy to take any questions in the chat before I hand it back over to Kundan to wrap up the rest. — Absolutely, thanks David. This was great; this is amazing. Actually, if you don't mind, did you show the section where the MCP server is in our cloud console? I'm not sure if you did. It would be nice for folks to just see it. — Yeah. — If you have it handy; otherwise it's okay. — I believe it's here, right? And so you enable the MCP server. That's great; turn it on. You go to your organization. — You might have to share your screen again, David. — I didn't hit share. Okay, sorry about that. Right, so you go to your organization, you go to settings, and there should be an MCP server option available right here. You can enable it, and I've enabled it for this instance. What I particularly did was use an API key, so I use a command like this. Hopefully you guys can see.
Maybe I can zoom in a little. You basically add one string here to add it as a server, keyed with the API key. Right? And when you go back here, if you look at your MCP server list in Claude, you can see it's there. This is the same one I configured for this instance: it's connected, it's authenticated. So MCP has the tools ready, and I had the context set to that same instance and cluster ID. And that's it. So
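The one-liner David refers to, as best it can be reconstructed, is Claude Code's `claude mcp add` for a remote HTTP server; the URL and key below are placeholders, and the real endpoint comes from the cluster's MCP settings page in the StreamNative Cloud console:

```shell
# Placeholders; copy the real endpoint and key from the cloud console
MCP_URL="https://<your-cluster>.streamnative.cloud/mcp"
API_KEY="<your-api-key>"

# Register the remote MCP server with Claude Code over HTTP
claude mcp add --transport http streamnative "$MCP_URL" \
  --header "Authorization: Bearer $API_KEY"

# Verify: the server should show as connected and authenticated
claude mcp list
```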
Segment 8 (35:00 - 40:00)
everything, all the tools, are pointed at the same cluster. Claude can go to work, fix it, and make those changes dynamically in your cluster in real time. — Perfect. Sounds great. I think that's helpful. Thank you, David. This was phenomenal. Okay, I'm going to share my screen back. We have 10 more minutes, so we'll jump right into it. If you don't mind, everyone, I'm going to do one more quick poll, the last one for today, I promise, because I'm very curious about your use cases. I'm going to launch that poll. Give me one moment. There you go. Hopefully you see the poll now. We'd love to know what some of your primary use cases are. I'm sure we all have many, but anything you can share is helpful. — There are some questions. — Yeah, absolutely. People are more convinced of production troubleshooting now, David, after seeing your cool demo. — All right, that's great. I'll answer the questions in the chat while you keep going with the presentation. — Absolutely. — Yep. — So I'm going to give a few more seconds and then share this poll; last time I forgot to share the results. Interesting to see troubleshooting use cases are popular, and creating topics and connectors also. Scaling and performance. Cool, that's great. Thank you. I'll end the poll now and share the results so you all can see. Okay, so I'm going to continue to the rest of the section. I'll keep this on for a bit. All right, let's continue. David, I hope you can see my slides. I'm going to quickly go over some more details here. You saw how David covered the troubleshooting use case; there are many other use cases for the StreamNative MCP server, for diagnostics, operational visibility, and others.
So there are quite a few, and I'd love to understand what your use cases are; I can see a lot of you are using it for the top three use cases here as well. I'm going to skip this and jump into some of the other things we wanted to cover today, very quickly. We've talked about the MCP server, which is a very specific topic; now I want to give you some visibility into our vision behind the agentic AI capabilities we are building at StreamNative. Our vision at StreamNative is to embrace AI in ways where we take meaningful steps in the context of the problems we are solving in the data space: how do we make productivity, security, and governance a priority for our users, and how do we give them tools they can leverage to bring in their AI use cases and implement them? With that thought, we took the first step, which is MCP, the industry standard. It's now available as both a local and a remote MCP server for people to try. Then, speaking about agents: back at Data Streaming Summit last year, for the folks who attended (if not, I'm happy to share the keynote details later), we announced the Orca engine, which is our engine for running agents. Our goal with that is to provide a platform for users to bring in their own agents, written with either the OpenAI kit or the Google ADK, and deploy them within StreamNative Cloud, so the agent has the context of your platform and you can define the goal of what it should be doing. With a platform that gives you that ability, governance is a top thing on our mind.
We've always had governance capabilities across different parts of StreamNative Cloud, but we want to make sure this is really visible and front and center as we roll out more of these AI capabilities, so our users have the confidence that they can track who's doing what, and can control how these agents are actually behaving and acting. So you can definitely expect to see a lot of important governance capabilities around agentic AI. Workspaces will be a new concept we're working on, which will bring together all the related AI artifacts in one place, whether that's agents, functions, or connectors. Connectors will provide connectivity context to the agents so they can connect to the different data sources and destinations. So imagine a workspace with role-based access where different teams
Segment 9 (40:00 - 45:00)
can come together and collaborate to create your artifacts in one place and roll out your AI-based use cases. Open source is always on our mind; as we work, we keep an eye on what areas we can open source. We've contributed to the community and open sourced the MCP server, which has been available since last year, and we're watching what else we can do on the AI side this year; I'll keep you all posted on how that goes. Lastly, while we provide tools for our users to build AI agents, bring them into StreamNative Cloud, and govern and monitor them, our goal is also to explore the common use cases for which we can provide built-in agents that do certain tasks for you. We'll keep you posted on that too. So that's the vision; we'll share more as we go along. A little bit on the timeline, without giving dates: we started last year with the local MCP server, available now for download. It's open source, so you can modify it and connect it to your StreamNative Cloud. Then there was the private preview we announced last year, in June, for the Orca agent. Today we are announcing this public preview, and I'll also talk a little bit about our listing of the MCP server on the Databricks marketplace, which opens up a bunch of new use cases (this is only relevant for the folks using Databricks). Next is where we're going to roll out the public preview for the Orca agent and workspaces. When we say public preview, that means you can come into StreamNative Cloud in a self-service fashion, enable a preview, and get your hands on it. You don't have to file a support ticket to request that a feature be enabled; you can do it yourself and try it.
AI mode is another thing we are exploring: a built-in chat-style interface that lets you interact with your cloud resources, taking this remote MCP server to the next level by building a UI on it. It's essentially an MCP-compatible client built into your cloud console that lets you do the interaction that way; you don't have to set up anything at all. Right. So that's a quick look at some of the capabilities we're planning this year, and we'll keep you posted. We'll soon make an announcement on the timelines for the public preview for agents and workspaces; that's coming up, so stay tuned. I'll skip some of these capabilities. This is what David showed: the section of StreamNative Cloud where you go and configure the MCP server. You can configure which tools are allowed or not allowed, what mode it's operating in (read-only mode or write mode), and then there's the connection section where you get the endpoint and connectivity details. We support OAuth and service-account, basically API key-based, authentication, so with any MCP-compatible client you can plug that in as the authentication mechanism to talk to StreamNative Cloud. And there's a set of tools, which you can also find in our documentation for the remote MCP server, exposed for an MCP-compatible client to interact with and invoke. Okay, with a couple of minutes left, I want to talk quickly about the Databricks MCP marketplace. We work very closely with Databricks on different use cases.
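To make "any MCP-compatible client can plug in" concrete: MCP is JSON-RPC 2.0 over HTTP, so after the handshake a client asks the server for its tools with a `tools/list` request. Below is a sketch; the endpoint and key are placeholders (the real ones come from the cluster's connection section), and the actual network call is shown commented out:

```shell
# Placeholders: the real endpoint/key come from the cluster's MCP settings
MCP_URL="https://<your-cluster>.streamnative.cloud/mcp"
API_KEY="<your-api-key>"

# MCP is JSON-RPC 2.0; this body asks the server to enumerate its tools
BODY='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
echo "$BODY"

# With real credentials, the request itself would look like:
# curl -s -X POST "$MCP_URL" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

The API key travels as a bearer token in the `Authorization` header, which is why any generic MCP client with header support can authenticate this way.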
This is a particular scenario where Databricks is enabling data providers ("data provider" is the term they use for any vendor who wants to enable access to their data platform, in our case StreamNative Cloud) via an MCP server. They have a marketplace for MCP servers, and we will be getting listed soon; our review is approved, so you can expect to find our listing within the marketplace in their console. Essentially, this is going to allow you to connect to the StreamNative MCP server just like David connected from his Claude client; you can connect from here, and let me walk you through some details. Before I do, one quick comment: Databricks is doing this for the user who spends more time inside the Databricks workspace. That's the Databricks persona who wants to bring more data into the Databricks AI Playground, to have more context for interacting with multiple sources of data. So we become the data provider for users inside Databricks, and these user personas can interact from the AI Playground with StreamNative Cloud and all the active topics and resources we have, accessing those
Segment 10 (45:00 - 50:00)
resources and tools. So that's the thought. I'll share more on this; a video and a blog are coming up soon. But to walk you through some details: essentially, in Databricks you can, at the catalog level, create a connection of type HTTP and enter all the details of your StreamNative MCP server, which is enabled at the cluster level; you can find all the details of your endpoints there. Once the connection is created, you grant some permissions at the Databricks level, and then you go to the AI Playground. I'm moving fast on this just to wrap up quickly. In the Playground you can do many things, but the one that matters in this context is MCP. You'll have to select a couple of things: first, the endpoint, which is essentially selecting a model for your AI Playground to interact with, and then, in the MCP servers section, the connection you created will show up. With these two things, the model you selected and the connection to your remote server, you can go and run your prompts. I'm just showing an example of a basic prompt, but agents can use the same interaction; I'll cover that separately as a follow-up. That's pretty much the flow: you can register the StreamNative MCP server as an external MCP server here, and soon you'll have it listed in the marketplace, which makes it easy to enter everything. This slide just shows the interaction from Databricks talking to StreamNative Cloud, a very basic use case. So that's a little bit about Databricks; I'll share more details through a blog and other channels. For the folks who are new to StreamNative Cloud, you can sign up for a trial and create a cluster.
You can get your hands on the remote MCP server by enabling it at the cluster level, then explore it and share your feedback with us. I'm rushing a little because we are two minutes over. We have a trial; you can scan this if you haven't tried it before. You get $200 of credit, and you can also schedule a demo if you want to talk to one of us. I'm going to pause here for a second in case you want to scan this. There are also a couple of blogs related to the MCP server that I'm sharing as resources; feel free to scan them and go through them, and please reach out to us if you have any questions. Okay, that's pretty much what we wanted to cover. We went two minutes over, so sorry about that. Let me stop sharing to see if there are any questions. Looks like David has covered most of the questions. Let me stop sharing the poll. Okay, we've got one here in the Q&A. David, perhaps you can share some insight on it. The question is: what other insight do you have into what customers and other organizations are seeing in terms of the use cases that are popular with MCP, and how they're exploring it? One good use case is what you showed today; anything else you're hearing, David? — We covered it on that one slide; there are lots of capabilities. MCP tooling gives you full admin access, so a lot of it is around monitoring, broker utilization, and alerting. I see a lot of operations teams who are trying to keep the servers running use it as a tool because they're still learning Pulsar. There are 700-plus metrics; it's hard to be an expert in every single one, and hard to understand what's going on.
So asking it, "Hey, which broker is heavy? Which topic is on it? Why is this topic having a backlog? How many connected consumers do I have?" — quick diagnostics like that I find to be the most popular and most meaningful direct use cases. Others are using it for more complex coding, operations, and automation of things like that. So right now it's getting a lot of uptake, because the MCP server is an admin-focused tool, in the admin and DevOps communities and for those types of tasks. — Absolutely. Thanks, David. There's a last one; I'll take it. It's about the timeline for GA of the MCP server. We're thinking tentatively end of Q2. We should be able to make it GA then, but right now nothing is stopping you from getting your hands on it; once it goes GA, of course, you'll have more confidence about trying it in production. I understand that. Okay, so I think that's all we had.
Segment 11 (50:00 - 50:00)
Looks like I don't see any other questions. David, thanks for addressing the questions here; you already handled them. So, thanks again for joining today. You'll be getting a recording of this webcast via email, but if you have any questions, please reach out to David and me. We'd love to hear what use cases are top of mind for you. We shared some of that, but if you want to chat more with us, we are available; feel free to reach out. — Sounds good. With that, I'm going to end this webcast. Thank you so much, David. — Thank you. Thank you, everyone. Have a good day.