Background:

I modified redis-benchmark using a producer-consumer model to implement QPS control, and used it to test server performance under different workloads. After some experiments, I found that the average latency and p99 latency decrease as the QPS/throughput increases, which is confusing. The figure below shows the experiment data; the latency and throughput numbers are reported by the benchmark.

Redis latency decreases as throughput increases when using redis-benchmark

I've checked the benchmark code and didn't find any bugs (I only modified the write handler and created a producer thread), so I'm wondering whether some mechanism in the Redis server causes this phenomenon. Some details:

1. bgsave and AOF are disabled; maxmemory is set to 32G.
2. The benchmark command: `./redis-benchmark -t set -n 300000 -r 500000 -d 1024 -c 50 --qps <QPS>`
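The modified benchmark itself isn't shown here, but the producer-consumer QPS-control idea can be sketched as follows. This is a minimal Python sketch, not the actual C changes to redis-benchmark; `send_request` is a hypothetical stand-in for writing one SET command over an open connection:

```python
import queue
import threading
import time

def producer(q, qps, total):
    """Enqueue one send token per request, paced to the target QPS."""
    interval = 1.0 / qps
    start = time.perf_counter()
    for i in range(total):
        # Pace against the start time so sleep jitter does not accumulate.
        target = start + i * interval
        delay = target - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
        # The intended send time travels with the token, so consumers can
        # measure latency from the moment the request should have left.
        q.put(target)

def consumer(q, latencies, send_request):
    """Drain tokens, issue requests, and record per-request latency."""
    while True:
        intended = q.get()
        if intended is None:  # sentinel: producer is done
            break
        send_request()
        latencies.append(time.perf_counter() - intended)

def run_benchmark(qps, total, send_request, workers=4):
    q = queue.Queue()
    latencies = []  # list.append is atomic in CPython, so no lock is needed
    threads = [threading.Thread(target=consumer, args=(q, latencies, send_request))
               for _ in range(workers)]
    for t in threads:
        t.start()
    producer(q, qps, total)
    for _ in threads:
        q.put(None)  # one sentinel per worker
    for t in threads:
        t.join()
    return latencies
```

In the real benchmark the consumers would be the existing event-loop clients (the `-c 50` connections) rather than OS threads, but the pacing and hand-off structure is the same.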

Comment From: filipecosta90

Hi there @liyaoxuan,

> I modified redis-benchmark using a producer-consumer model to implement QPS control, and used it to test server performance under different workloads.

This is great! Can you open a PR so we can iterate over that feature?

> After some experiments, I found that the average latency and p99 latency decrease as the QPS/throughput increases, which is confusing.

Are we keeping a constant number of open connections to the DB? Or is that changing during the benchmark depending on the required load?

I believe if you can share the code we will be able to understand/replicate this further.

Comment From: liyaoxuan

Hi @filipecosta90, sorry for the late reply.

> Are we keeping a constant number of open connections to the DB? Or is that changing during the benchmark depending on the required load?

Yes, we keep a constant number of open connections to the server. After some discussion with my friends, we found that the problem was in the latency computation logic: the previous version didn't account for the queueing time of some requests, which led to incorrect latency values.
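For reference, this is the classic coordinated-omission pitfall: if the clock starts only when a queued request is actually written to the socket, the time it spent waiting in the benchmark's internal queue disappears from the measurement. A minimal sketch of the two computations, with illustrative millisecond timestamps rather than the actual benchmark code:

```python
def latency_excluding_queue(actual_send_ms, completion_ms):
    """Pre-fix computation: the clock starts when the request hits the
    socket, so queueing delay inside the benchmark is invisible."""
    return completion_ms - actual_send_ms

def latency_including_queue(intended_send_ms, completion_ms):
    """Post-fix computation: the clock starts when the request was
    scheduled to be sent, so queueing delay counts toward latency."""
    return completion_ms - intended_send_ms

# A request scheduled at t=0 ms, actually written at t=500 ms after
# waiting in the queue, and completed at t=600 ms:
wrong = latency_excluding_queue(500, 600)   # 100 ms: hides the 500 ms wait
right = latency_including_queue(0, 600)     # 600 ms: what the client saw
```

Measured the first way, a request that waits a long time in the queue can still report a small latency, which would distort the reported averages and percentiles.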

> Can you open a PR so we can iterate over that feature?

I have just opened a PR and am looking forward to further discussion :)