Redis vs Valkey Performance & Comparison (2026)


Table of contents (2 segments)

Segment 1 (00:00 - 05:00)

In this video, we'll compare Redis and Valkey, and I'll give you a bit of history on how Valkey was born. In the second part, we'll run a benchmark between the two: we'll measure round-trip latency for SET and GET operations, the number of requests per second each can handle, and basic metrics such as CPU, memory, and network usage. I'll primarily try to reproduce the test from the original blog post, using a large EC2 instance and multithreading.

First, let's talk about Valkey and how it relates to Redis. Redis is a very mature project; its first version was released in 2009, and it became extremely popular. It was not only used as a cache but also gained a ton of other features over time, like pub/sub, streams, and many more. Developers loved it, and Redis ended up in pretty much every project. Redis is famously single-threaded, so to scale it you have to scale horizontally by adding more shards, which makes it significantly harder and more time-consuming to maintain. With the rise of public clouds, managed services appeared that let you offload all the operational work of running databases and caches to the cloud providers for a fee. Now you can click a few buttons and get a fully managed, multi-sharded cluster that auto-scales with your load, and if something goes wrong, the cloud provider handles recovery for you. That's great for developers and startups who don't want to maintain Redis clusters themselves and are happy to pay for it.

Redis (the company) didn't like that at all. They already had their own Redis Cloud offering the same managed service, but the big cloud providers had major advantages: you're already on their infrastructure, and latency is often lower when the managed Redis runs in the same cloud. So Redis really wanted those cloud providers to pay. Starting with version 7.4, they changed the license specifically to force the major cloud providers to pay if they wanted to keep offering Redis as a managed service. Almost immediately after that decision, Redis was forked, with support from AWS, Google Cloud, Oracle, and others, and the fork became Valkey, which kept the original open-source license. That allowed cloud providers to continue offering a fully open-source, Redis-compatible service for free. That's how Valkey was born. Later, Redis realized the license change had backfired pretty badly, so starting with Redis 8.0 they dual-licensed the project again, effectively bringing back a proper open-source license alongside their source-available one. So now we have two projects moving forward separately, Redis and Valkey, each with its own roadmap and feature set.

One of the most requested features for years has been proper multithreading, and since the split both projects have implemented their own versions of multithreaded I/O independently; that's exactly what I'm going to test today. Redis was originally built around a single-threaded event loop, which means the only way to scale is to run more Redis instances. You can run them on different servers, or form a Redis cluster on a single multi-core server; in fact, Redis is working on making it easier to create clusters on one machine. Another approach is to scale vertically by running Redis on a server with more CPU cores, but a single thread can only utilize one core at a time, so the remaining cores sit idle and you get no performance benefit from them. That's why both Redis and Valkey added extra threads on top of the main thread to handle I/O work: accepting connections, parsing commands, reading from and writing to sockets, and so on. The main thread is still a bottleneck, but a lot of work is now offloaded to the other threads. The whole point of keeping a single main thread is to avoid race conditions and to avoid slowing Redis down with cross-thread synchronization,
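As a rough sketch of what "adding I/O threads" looks like in practice, both servers expose this through the `io-threads` directive in `redis.conf` / `valkey.conf` (the value 9 below mirrors the 9-I/O-thread setup used in the benchmark later in the video; exact defaults and read-offloading behavior vary by version, so check the config file shipped with your build):

```
# redis.conf / valkey.conf — enable threaded I/O.
# Command execution still happens on the single main thread;
# only network reads/writes and protocol parsing are offloaded.
io-threads 9

# Redis additionally gates offloading of reads behind this flag
# (writes are offloaded whenever io-threads > 1); recent Valkey
# versions handle reads on the I/O threads without it.
io-threads-do-reads yes
```

Note that `io-threads 1` (the historical default) disables the feature entirely, which is why a stock install won't use extra cores no matter how many the server has.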

Segment 2 (05:00 - 08:00)

so it's a trade-off between raw performance and the convenience of scaling vertically. If I tested just single-threaded (or even clustered) Redis and Valkey, the results would be pretty much identical, since it's the same source code from before the fork. What's actually interesting to test is how multithreading differs between Redis and Valkey, since each implemented it separately.

Alright, let's run the test. I use AWS for most of my benchmarks to make them more realistic. Here I have two nodes each for Redis and Valkey, as well as 25 nodes to run my clients on a Kubernetes cluster. As you can see, I'm using the latest Valkey version (9.0) and the latest Redis version as well, with the same configuration options for both. I'll start with 20,000 requests per second and slowly increase the load until neither of them can handle any more requests. At the beginning, you can see that Valkey has slightly lower latency and uses less CPU, which makes it more efficient in the early stages. Let me run this test for one more minute, and then we'll go over each graph one by one.

Alright, let's start with latency. Valkey's latency is consistently lower than Redis's, but to be honest the difference is very small. Next, requests per second: Valkey showed slightly better results here too. Next is CPU usage, where we see the biggest difference; even with identical configuration, the multithreading implementations differ. In this setup, with only 9 I/O threads and 1 main thread, it's almost impossible to fully utilize a 16-core server, but that was intentional, to reproduce the original benchmark conditions. Then memory usage, and finally network usage.

I couldn't reach 1 million requests per second like in the blog post because I wanted a more realistic test: I mixed SET and GET commands and measured real round-trip latency, whereas the blog used a single server with only SET commands. So it's not apples-to-apples, but I hope this still gives you some insight.
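The measurement approach described above (alternating SET and GET, timing each round trip, then reporting percentiles) can be sketched as a small Python harness. This is not the author's actual client code; to keep it self-contained and runnable without a live server, `FakeRedis` is a hypothetical dict-backed stub standing in for a real client such as redis-py's `Redis`:

```python
import time
import statistics

class FakeRedis:
    """Stand-in for a real Redis/Valkey client (e.g. redis.Redis from
    redis-py). A dict-backed stub so the sketch runs without a server;
    swap in a real client to measure actual network round trips."""
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value
        return True

    def get(self, key):
        return self._store.get(key)

def measure_round_trips(client, n=10_000):
    """Issue alternating SET/GET commands and record each command's
    round-trip latency in microseconds."""
    latencies = []
    for i in range(n):
        key = f"key:{i % 1000}"        # cycle over a small keyspace
        start = time.perf_counter()
        if i % 2 == 0:
            client.set(key, "x" * 100)  # 100-byte payload
        else:
            client.get(key)
        latencies.append((time.perf_counter() - start) * 1e6)
    return latencies

def summarize(latencies):
    """Reduce raw samples to the usual latency summary."""
    latencies = sorted(latencies)
    return {
        "p50_us": latencies[len(latencies) // 2],
        "p99_us": latencies[int(len(latencies) * 0.99)],
        "mean_us": statistics.fmean(latencies),
    }

if __name__ == "__main__":
    print(summarize(measure_round_trips(FakeRedis())))
```

Against a real server the loop would also be spread across many client processes (the 25 Kubernetes nodes in the video) so the client side never becomes the bottleneck.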

Other videos by this author: Anton Putra
