# Native Kafka Service from StreamNative

## Metadata

- **Channel:** StreamNative
- **YouTube:** https://www.youtube.com/watch?v=23vQhKMcIrU
- **Date:** 08.04.2026
- **Duration:** 6:09
- **Views:** 53

## Description

Explore the core features of StreamNative's native Kafka service in this step-by-step demonstration. This video walks through the entire lifecycle of a Kafka environment on StreamNative Cloud, from initial cluster provisioning to real-time data governance.

What You’ll Learn:
* Cluster Provisioning: How to create a dedicated Kafka cluster on StreamNative Cloud. We compare the Latency Optimized profile (disk-based, for millisecond-level latencies) with the Cost Optimized profile (object storage-based, for better cost efficiency).
* Secure Access Management: Setting up a service account and generating API keys to secure your programmatic access.
* Client Integration: A look at the Java client setup, including how to handle dependencies via Maven or Gradle and how to authenticate using the broker and schema registry URLs.
* Real-Time Data Production: Watch a live producer application stream an e-commerce dataset into multiple Kafka topics.
* Data Exploration & Governance: Use the StreamNative dashboard to monitor throughput, inspect message payloads, and manage schemas through the integrated Kafka schema registry.

Whether you are optimizing for ultra-low latency or managing complex data schemas, see how StreamNative provides a fully managed, high-performance environment for your Kafka workloads.

## Contents

### [0:00](https://www.youtube.com/watch?v=23vQhKMcIrU) Segment 1 (00:00 - 05:00)

Let's walk through the key steps to explore StreamNative's Kafka service. First, we'll create a Kafka cluster to get a fully managed environment up and running. Second, we'll create a service account and an API key for secure access. Third, we'll build a Kafka client and produce data to see real-time streaming in action. Finally, we'll explore the topic data and the schema registry to understand schema management and governance.

To create a Kafka cluster, log in to StreamNative Cloud and click Create Instance. I'm going to choose the Dedicated option for this demo. Enter the instance name, select the cloud provider, and choose the native Kafka service by selecting the Kafka cluster type. Enter the cluster name and choose a region. For the cluster profile, you have two choices: the Latency Optimized profile tunes the cluster for ultra-low, millisecond-level latencies and is based on disk storage, while the Cost Optimized profile tunes the cluster for cost efficiency and is based on object storage. For this demo, I'm going to use the Latency Optimized profile. Select the availability zone and size the cluster; I'll leave the default options and finish.

Once the cluster is successfully provisioned, we can navigate to the home page of the instance we created. This is where you can find all Pulsar and Kafka resources. Go to the Kafka cluster we just created; the dashboard looks empty right now because we have not populated anything in this cluster yet. Now open the Overview section, which is important: these endpoints are what your clients will use to connect to this cluster. Here we have the service endpoint, which is the broker URL, the schema registry URL, and the region of your cluster. To get programmatic access to your Kafka cluster, you need a service account.
To create a service account, log in to StreamNative Cloud, go to Settings within your organization, and create a new service account: just give it a name and create it. Once the service account is created, assign it a role. In this case, I'm going to assign the instance-owner role, which allows the account to manage all the resources under an instance. The next step is to create an API key under this service account: find the service account and create an API key. We'll use this API key when connecting to the cluster from a client, so make sure you select the right instance while creating it. Once the API key is created, save it to a file; we'll use it in the next section.

To create a Kafka client, navigate to the Kafka Clients page and choose your client library; in this case, it's Java. Select your service account. Optionally, you can create an API key here, or skip it since we've already created one. The service endpoint for your cluster is populated automatically. Enable the Kafka schema registry; the permissions it requires are already granted via the instance-owner role, so you should be good. The next step shows the dependencies required to set up this client via Maven or Gradle, followed by sample producer code. The sample shows your bootstrap server (the broker URL), the schema registry URL, and the JWT token, which is the API key used as the password when authenticating with the Kafka cluster. One other important thing to highlight is the username, which is also populated in the sample code and is based on the service account you created. This username, together with the API key as the password, is what you need to authenticate with your Kafka cluster.
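The authentication setup described above can be sketched as plain Java properties. This is a minimal sketch, not the exact code the StreamNative client wizard generates: the endpoint values, service account name, and API key are placeholders you would replace with the values from your cluster's Overview page, and the schema registry properties assume the Confluent-compatible serializer settings.

```java
import java.util.Properties;

public class KafkaClientConfig {
    public static Properties build() {
        // Placeholder values (assumptions): copy the real ones from the Overview page.
        String bootstrapServers = "your-cluster.example.streamnative.cloud:9093";
        String schemaRegistryUrl = "https://your-cluster.example.streamnative.cloud";
        String username = "your-service-account"; // based on the service account you created
        String apiKey = "YOUR_API_KEY";           // the API key saved earlier, used as the password

        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        // SASL/PLAIN over TLS: username = service account, password = API key (JWT token).
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config", String.format(
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"%s\" password=\"%s\";", username, apiKey));
        // Schema registry access, assuming Confluent-style client properties.
        props.put("schema.registry.url", schemaRegistryUrl);
        props.put("basic.auth.credentials.source", "USER_INFO");
        props.put("basic.auth.user.info", username + ":" + apiKey);
        return props;
    }
}
```

These properties would then be passed to a `KafkaProducer` constructor along with the serializer settings from the generated sample.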
Here's my producer code, which populates data into six different topics. This is an e-commerce dataset that I'll be writing to the Kafka cluster. Let me run the producer code; it's now writing data to the Kafka cluster, so let's go explore that data. To explore the data populated in the cluster, go back to the instance (the native Kafka demo instance) and click on the Kafka cluster. You can see that some data has started showing up in the dashboard because we have populated these topics. These are the topics created by my producer application for the e-commerce dataset. This is the topics dashboard, which shows the throughput and the storage size growing as data arrives in the topics.
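The shape of the producer's records can be sketched with stdlib Java alone. The video does not list the six topic names or the record fields, so everything below is illustrative: hypothetical topic names, a hypothetical `orderRecord` helper, and raw JSON in place of the schema-registry-backed serialization a real producer would use before calling `KafkaProducer.send(...)`.

```java
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class EcommerceRecords {
    // Hypothetical names for the six e-commerce topics (assumption, not from the video).
    static final List<String> TOPICS = List.of(
        "customers", "products", "orders", "order_items", "payments", "shipments");

    // Build one keyed record for the "orders" topic as a (key, value) pair.
    // Keying by order_id routes all updates for an order to the same partition.
    static Map.Entry<String, String> orderRecord(String orderId, String customerId, double total) {
        String json = String.format(Locale.ROOT,
            "{\"order_id\":\"%s\",\"customer_id\":\"%s\",\"total\":%.2f}",
            orderId, customerId, total);
        return Map.entry(orderId, json);
    }
}
```

In the actual application, each such pair becomes a `ProducerRecord(topic, key, value)`, and the schema registry serializes the value (typically as Avro) instead of a raw JSON string.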

### [5:00](https://www.youtube.com/watch?v=23vQhKMcIrU&t=300s) Segment 2 (05:00 - 06:00)

There is a schema associated with this topic, and you can see all the compatibility modes from the schema registry here; you can evolve the schema if you need to. Let's look at some of the messages in this topic: this is the message payload. If you want to produce a sample message into the topic, you can use the built-in tool to generate data based on the topic's schema. You can also configure the retention period, retention size, and cleanup policy at the topic level. Going back to the schema, here you can see all the details. This is the schema stored in the schema registry, and the Kafka schema registry holds the schemas for all of these topics. Finally, this is where you can see all the consumer groups. That's pretty much what I wanted to cover in this video: the native Kafka service by StreamNative.
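The topic-level settings mentioned above correspond to standard Kafka topic configuration keys. A minimal sketch, with example values (not recommendations); in practice you would set these through the StreamNative dashboard shown in the video, or programmatically via an `AdminClient`.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TopicRetentionConfig {
    // Standard Kafka topic config keys for the settings shown in the dashboard.
    public static Map<String, String> example() {
        Map<String, String> cfg = new LinkedHashMap<>();
        cfg.put("retention.ms",    String.valueOf(7L * 24 * 60 * 60 * 1000)); // retention period: 7 days
        cfg.put("retention.bytes", String.valueOf(1024L * 1024 * 1024));      // retention size: 1 GiB
        cfg.put("cleanup.policy",  "delete"); // or "compact" to keep only the latest value per key
        return cfg;
    }
}
```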

---
*Source: https://ekstraktznaniy.ru/video/45984*