Redis vs Valkey Performance [NEW] (8.6 vs 9.0)




Segment 1 (00:00 - 03:00)

In this video, I'll compare the just-released Redis 8.6, which brings major performance improvements, against Valkey. We'll be measuring round-trip latency, throughput, and both CPU and memory usage. Since my last benchmark, Redis has made significant performance improvements in its latest version, so I decided it was time to rerun the test. You can find a link to the full blog post, as well as to the test source code, in the video description.

My goal for this test is to reproduce the benchmark published by Valkey, where they claimed to hit 1 million requests per second with a 512-byte payload. To test performance, I'm using simple SET and GET commands with a 1-second TTL, measuring the total round-trip duration. I'm using the same 512-byte payload size used in the original Valkey blog post. For the infrastructure, I've provisioned AWS Graviton-based 4xlarge instances for both databases, and I use exactly the same 9 I/O threads for both. To generate the load, I'm using EKS and gradually increasing the number of pods until neither Redis nor Valkey can handle any more requests.

Let's go ahead and run the test. It usually takes a couple of hours, and I compress it to a few minutes while editing. The source code and instructions can be found in my public GitHub repo and are easily reproducible by anyone. On the left-hand side, we have latency: the lower, the better. On the right-hand side, we have throughput, where it's the opposite: the more requests per second each instance can handle, the better. And on the bottom, we have CPU and memory usage.

Alright, let me run this test for one more minute, and we'll go over each graph one by one. First, we have throughput, and Redis has definitely improved performance as promised in the blog post. Then we have latency, which is pretty much the same at the beginning and slightly higher at the end. I think it directly correlates with the much lower CPU usage.
Keep in mind I'm using a 4xlarge instance with 16 CPU cores. Based on my experience, the sweet spot between vertical and horizontal scaling would be a 2xlarge instance with 8 CPUs, scaled out into a cluster. In that case, I think Redis will be much more efficient compared to Valkey. If you're not using ElastiCache in AWS, which is a managed service based on Valkey, and are hosting Redis yourself, I think you'll get better performance for less money, which is ultimately what we want.

More videos by this author — Anton Putra
