Benchmarks on Performance and Scalability
Last updated: 4th November, 2024
Benchmarks play a pivotal role in the world of databases. They provide measurable insights into how a system performs under different conditions, helping developers optimize configurations, identify bottlenecks, and predict behavior under stress.
For DiceDB, a high-performance in-memory key-value store, benchmarking is crucial to demonstrate its scalability, resilience, and efficiency. Here's our latest benchmark: it presents our numbers, compares them with Redis, and, more importantly, provides the exact steps to reproduce them.
Redis, created over a decade ago, relies on a basic single-threaded architecture, which can make scaling deployments a challenging and time-consuming task for developers. In contrast, DiceDB offers a seamless alternative to Redis. As a modern in-memory database, DiceDB builds on the strengths of earlier systems but enhances both performance and reliability.
With its multi-threaded design, DiceDB supports efficient vertical scaling, allowing you to sidestep the complexities of horizontal scaling and cluster management while still achieving the scalability your application requires. Because DiceDB is a drop-in replacement for Redis, we can run the standard Memtier benchmark to compare the two.
Memtier Benchmark
Memtier is a powerful, easy-to-use benchmarking tool specifically designed for key-value databases like Redis and Memcached, making it ideal for evaluating DiceDB. In this test, we used Memtier to generate a workload that mimics a real-world scenario, stressing DiceDB under a balanced read/write ratio.
Machine configuration
The benchmarks were run on an AWS c5.12xlarge instance, which has 48 vCPUs, 96 GB of memory, and 12 Gbps of network bandwidth. The machine ran Ubuntu, and DiceDB was built from the latest commit on its master branch.
Infrastructure Setup
Provision two AWS c5.12xlarge machines: one to run memtier and the other to run DiceDB.
DiceDB Benchmark
On the machine designated to run DiceDB, run the following commands to build and start DiceDB.
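A minimal sketch of these commands, assuming DiceDB is built from source with the Go toolchain from its public GitHub repository (the repository URL, build steps, and default port 7379 are assumptions; consult the project README for the authoritative instructions):

```shell
# Install build prerequisites (a recent Go toolchain is assumed)
sudo apt-get update && sudo apt-get install -y golang git

# Fetch DiceDB and build it from the master branch
git clone https://github.com/DiceDB/dice.git
cd dice
go build -o dicedb .

# Start the server; DiceDB is assumed to listen on port 7379 by default
./dicedb
```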
Running the benchmark
On the machine designated to run memtier, run the following command to start the benchmark.
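A memtier_benchmark invocation matching the parameters described below would look roughly like this; the DiceDB host address is a placeholder and port 7379 is an assumption:

```shell
memtier_benchmark \
  --server=<dicedb-host> \   # address of the DiceDB machine (placeholder)
  --port=7379 \              # assumed DiceDB default port
  --threads=24 \             # 24 worker threads
  --clients=50 \             # 50 client connections per thread
  --ratio=1:1 \              # balanced SET:GET mix
  --data-size=256 \          # 256-byte values
  --requests=100000          # 100,000 requests per client
```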
This command runs a benchmark where:
- the workload is distributed across 24 threads
- each thread opens 50 client connections and sends requests concurrently
- the workload is a balanced mix of GET and SET operations (1:1 ratio)
- each request carries 256 bytes of data, simulating small-to-medium-sized payloads
- each client executes 100,000 requests, and the benchmark runs until all requests are completed
Results Observed
After running the above command against DiceDB, we got the following numbers.
The benchmark run tested DiceDB with 24 threads, 50 client connections per thread, and 100,000 requests per client, which comes to 120,000,000 operations in total. Below is a breakdown of the key metrics and their significance:
- the peak throughput DiceDB handled was 1,940,109 ops/sec
- the average throughput was 536,901 ops/sec, evenly split between GETs and SETs
- the average latency for the overall benchmark was 2.23 milliseconds, pretty fast
- p50 latency was 1.33 ms
- p99 latency was 8.83 ms, indicating that 99% of requests were faster than this
- p99.9 latency reached 13.50 ms, showing how the highest-latency outliers behaved
Redis Benchmark
Now, stop the DiceDB process, set up Redis, and run the server on port 6379.
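A sketch of this setup, assuming Redis is installed from the Ubuntu package repository (the installation method and flags are assumptions; any recent Redis build works):

```shell
# Install Redis from the distribution's package repository
sudo apt-get update && sudo apt-get install -y redis-server

# Stop the auto-started service, then run the server in the
# foreground on the default port 6379
sudo systemctl stop redis-server
redis-server --port 6379 --protected-mode no
```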
Running the benchmark
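The same memtier invocation as before is used, now pointed at the Redis machine on its default port 6379 (the host address is a placeholder):

```shell
memtier_benchmark \
  --server=<redis-host> --port=6379 \
  --threads=24 --clients=50 \
  --ratio=1:1 --data-size=256 --requests=100000
```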
Results Observed
After running the above command against Redis, we got the following numbers.
The benchmark run tested Redis with 24 threads, 50 client connections per thread, and 100,000 requests per client, which comes to 120,000,000 operations in total. Below is a breakdown of the key metrics and their significance:
- the peak throughput Redis handled was 202,925 ops/sec
- the average throughput was 192,485 ops/sec, evenly split between GETs and SETs
- the average latency for the overall benchmark was 6.23 milliseconds
- p50 latency was 5.69 ms
- p99 latency was 11.26 ms, indicating that 99% of requests were faster than this
- p99.9 latency reached 14.01 ms, showing how the highest-latency outliers behaved
DiceDB vs Redis
| Metric | DiceDB | Redis |
|---|---|---|
| Peak Throughput (ops/sec) | 1,940,109 | 202,925 |
| Average Throughput (ops/sec) | 536,901 | 192,485 |
| Average Latency (ms) | 2.23 | 6.23 |
| p50 Latency (ms) | 1.33 | 5.69 |
| p99 Latency (ms) | 8.83 | 11.26 |
| p99.9 Latency (ms) | 13.50 | 14.01 |
The benchmark results demonstrate that DiceDB significantly outperforms Redis in terms of throughput. DiceDB's peak throughput is ~9.5 times higher than Redis's, reaching 1,940,109 ops/sec compared to Redis's 202,925 ops/sec. DiceDB's average throughput is also much higher, clocking in at 536,901 ops/sec versus 192,485 ops/sec for Redis. This indicates that DiceDB can handle far more operations in a given period, making it a better choice for applications requiring high throughput, such as real-time or reactive systems.
In terms of latency, DiceDB also comes out ahead, with an average of 2.23 milliseconds versus 6.23 milliseconds for Redis. The advantage holds across the latency percentiles as well: p50, p99, and p99.9 are all lower for DiceDB.
DiceDB vs Redis - number of cores
If we run the above benchmark while altering the number of cores of the underlying hardware, we can see how DiceDB is optimized for scalability. Redis exhibits stable throughput across core counts, suggesting a limit to its ability to leverage additional processing power. By contrast, DiceDB’s throughput increases as more cores are added, indicating that it dynamically scales with available hardware resources.
This difference showcases DiceDB’s design advantage in multi-core environments, where it effectively leverages all the underlying cores to offer maximum performance. This not only highlights DiceDB’s potential for high-throughput applications but also its efficiency in using hardware resources to reduce latency and improve responsiveness.
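One way to rerun the benchmark at different core counts, as a sketch (the binary name and the use of `taskset` are assumptions; restricting cores via the instance type works too), is to pin the DiceDB server to a subset of cores and rerun memtier each time:

```shell
# Pin the server to 4 cores, rerun memtier, then repeat with 8, 16, ...
taskset -c 0-3 ./dicedb     # 4 cores
# taskset -c 0-7 ./dicedb   # 8 cores
# taskset -c 0-15 ./dicedb  # 16 cores
```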
Overall, DiceDB offers superior throughput and generally lower latency, making it a more robust solution for high-performance, low-latency applications, especially in cases where extreme latency outliers are less critical.