When we used redis-benchmark to test Redis performance, we found two anomalies. First, when using the `-c` option to change the number of connections while keeping all other options the same, the latency with two connections is lower than with one connection. Second, when redis-benchmark maintains a single connection to the redis-server and we vary the QPS, we find that as long as the CPU utilization of the redis-server is below 60%, both the end-to-end latency and the server-side processing latency decrease as QPS increases.
We ran the above experiments on a high-end server with 40 high-performance CPUs (Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz) and 256GB of memory. During the experiments, we pinned redis-benchmark and redis-server to different CPUs. To measure the processing latency of the redis-server, we inserted calls to ustime() at the beginning of readQueryFromClient() and at the end of writeToClient(), measuring the processing delay of each request. Our experimental data are as follows.
The following picture shows the test data for one connection and two connections. client_avg is the latency reported by redis-benchmark; server_avg is the latency we measured on the server side. We ran redis-benchmark 15 times in both cases. As the data show, when redis-benchmark initiates one connection, both the end-to-end latency and the redis-server processing latency are higher than when redis-benchmark initiates two connections.
We needed to modify redis-benchmark to implement simple QPS control. To achieve this, we added a usleep() call to writeHandler() in redis-benchmark.c; we are confident that this does not affect redis-benchmark's latency calculation. The following figure illustrates where we added the usleep() call.
However, when we used the modified redis-benchmark to test the redis-server, we found that latency decreases as QPS increases while all other options remain unchanged. (We increase QPS by decreasing the sleep time.) The results are shown in the figure below:
As the sleep time decreases, QPS increases, throughput increases, and server CPU utilization increases. However, the end-to-end latency (client_avg) and the server processing latency (server_avg) both decrease. That is very strange.
We repeated the above experiments on several different machines and reached the same conclusion. So is this a bug, or is Redis designed this way?
Comment From: filipecosta90
Hi there @csbo98 let me double check if I can replicate this on a local lab ( also hpc one ). If so, I also want to trace externally both processes so we’re sure that indeed we have a problem. will need some time on this
Comment From: csbo98
> Hi there @csbo98 let me double check if I can replicate this on a local lab ( also hpc one ). If so, I also want to trace externally both processes so we’re sure that indeed we have a problem. will need some time on this
Hi, @filipecosta90. Were you able to replicate this abnormal phenomenon?