Describe the bug
Memory usage grows by an extra ~1 GB per second (!) when I try to connect with a Redis client to a redis-server that was started in a Docker container which is no longer running.
To reproduce
- Start a `redis-server &` from a Docker container that has the default Redis port forwarded, or that has host network access via the `docker run` argument `--host`
- Stop the container
- Check if the connection is in CLOSE_WAIT state:

```
lsof -i :6379
COMMAND    PID USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
code    530802  sam   39u  IPv4 45625157      0t0  TCP localhost:redis (LISTEN)
code    530802  sam   44u  IPv4 47196288      0t0  TCP localhost:redis->localhost:56540 (CLOSE_WAIT)
```
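The connection state can also be checked programmatically instead of via `lsof`. A minimal sketch in Python (assumptions: Linux, parsing the `/proc/net/tcp` format, where state code `08` is CLOSE_WAIT and port 6379 is `18EB` in hex; `parse_proc_net_tcp` is a hypothetical helper for illustration):

```python
# TCP state codes as used in /proc/net/tcp (see the kernel's tcp_states.h)
TCP_STATES = {
    "01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
    "04": "FIN_WAIT1", "05": "FIN_WAIT2", "06": "TIME_WAIT",
    "07": "CLOSE", "08": "CLOSE_WAIT", "09": "LAST_ACK",
    "0A": "LISTEN", "0B": "CLOSING",
}

def parse_proc_net_tcp(lines, port=6379):
    """Return (local_address, state_name) pairs for sockets on the given port."""
    results = []
    for line in lines[1:]:              # skip the header line
        fields = line.split()
        if len(fields) < 4:
            continue
        local = fields[1]               # e.g. "0100007F:18EB" (addr:port, hex)
        state = fields[3]               # e.g. "08"
        local_port = int(local.split(":")[1], 16)
        if local_port == port:
            results.append((local, TCP_STATES.get(state, state)))
    return results

# Example /proc/net/tcp content matching the lsof output above (0x18EB == 6379)
sample = [
    "  sl  local_address rem_address   st tx_queue:rx_queue ...",
    "   0: 0100007F:18EB 00000000:0000 0A 00000000:00000000 ...",
    "   1: 0100007F:18EB 0100007F:DCEC 08 00000000:00000000 ...",
]
```

On a real system the input would be `open("/proc/net/tcp").readlines()`; the hardcoded `sample` here just mirrors the situation in the report (one LISTEN socket, one CLOSE_WAIT socket on port 6379).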
Expected behavior
CLI clients should definitely not require an unbounded amount of memory. Furthermore, I expect the client to fail or time out if it cannot reach a server on the Redis port.
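The expected timeout behavior can be sketched with plain stdlib sockets (assumption: any client reading from a peer that never replies should bound the wait, e.g. with a receive timeout, instead of blocking in `recv` indefinitely; the silent server here is a stand-in for the dead forwarded port):

```python
import socket
import threading

def silent_server(port_holder, ready):
    """Accept a connection but never reply (mimics a dead/forwarded port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # pick a free port
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    threading.Event().wait(5)           # hold the connection open, send nothing
    conn.close()
    srv.close()

port_holder, ready = [], threading.Event()
threading.Thread(target=silent_server, args=(port_holder, ready), daemon=True).start()
ready.wait()

client = socket.create_connection(("127.0.0.1", port_holder[0]))
client.settimeout(0.5)                  # bound the wait instead of blocking forever
client.sendall(b"PING\r\n")
try:
    client.recv(4096)
    outcome = "reply"
except socket.timeout:
    outcome = "timed out"               # expected: no reply arrives within 0.5 s
client.close()
print(outcome)
```

With the timeout set, the client fails fast; without `settimeout`, the `recv` call would block indefinitely, which matches the behavior reported for redis-cli below.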
Additional information
I've used the following clients:
- redis-cli version 7.2.5
- redis Rust crate version 0.23: https://docs.rs/redis/latest/redis/
Comment From: sundb
Do you mean the Rust client consumes more than 1 GB even though it is disconnected?
Comment From: jaques-sam
Yes! It tries to connect to Redis over the port that is still in CLOSE_WAIT state.
Comment From: sundb
The reason is likely that the Rust client does not realize that the Redis server in Docker has been shut down, causing the connection to stay in CLOSE_WAIT state.
You can use PING in a timer to periodically check whether the server has been shut down.
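Such a periodic health check could look like the following sketch (assumptions: a raw RESP inline `PING` over a plain socket and a hypothetical `check_alive` helper; a real client would use its library's own ping call, e.g. `ConnectionLike::check_connection` in the Rust crate or an equivalent):

```python
import socket

def check_alive(host, port, timeout=1.0):
    """Return True if the server answers PING with +PONG within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            s.sendall(b"PING\r\n")          # RESP inline command
            return s.recv(64).startswith(b"+PONG")
    except (OSError, socket.timeout):
        return False                        # refused, unreachable, or no reply
```

Run it from a timer (e.g. `threading.Timer`) and drop/reconnect the client whenever it returns False, rather than letting a stale connection linger in CLOSE_WAIT.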
Comment From: sundb
@jaques-sam any news? If not, I'll close this issue.
Comment From: jaques-sam
Uh, the remaining issue is IMHO a major bug and needs solving... How can an application reserve such a tremendous amount of memory (1 GB/s)?! This should be tackled; closing this will simply hide the issue.
Comment From: sundb
@jaques-sam regarding this, you can open an issue for help at https://github.com/redis-rs/redis-rs.
Comment From: jaques-sam
Mmm, it's also happening with redis-cli, and probably with other redis clients as well.
Comment From: sundb
@jaques-sam can you give the steps to reproduce using redis-cli?
Comment From: jaques-sam
The reproduction steps are as described in the first message:
- Forward port 6379 from a dev container in VSCode
- Check if the connection is in CLOSE_WAIT state:

```
lsof -i :6379
COMMAND    PID USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
code    530802  sam   39u  IPv4 45625157      0t0  TCP localhost:redis (LISTEN)
code    530802  sam   44u  IPv4 47196288      0t0  TCP localhost:redis->localhost:56540 (CLOSE_WAIT)
```

- Just enter the command `redis-cli`
- Check with (h)top that total memory consumption increases by ~1 GB/s
Comment From: sundb
@jaques-sam where do you see memory growing at 1 GB per second? From `ps`? Please share the details.
Comment From: jaques-sam
I couldn't reproduce it myself anymore, so I'm trying it out again... I remember I had to remove the port forwarding address from a dev container in VSCode to fix it.
That is the reason why the port is in CLOSE_WAIT state; this is even the case when the Docker container is still running. Sorry for the confusion.
It's strange that I don't see MEM% being high; only my main memory is getting full:
After a couple of seconds:
As you can see, redis-cli gives no output and seems blocked; memory increases, but it's not clear where...
It's definitely redis-cli, as that's the command that is running. In fact, shutting it down proves it: all those GBs of memory are released.
Comment From: sundb
@jaques-sam please try `gdb -batch -ex "bt" -p <pid>` to see what redis-cli is doing now,
and try `ps aux | grep redis-cli` to see the memory usage of redis-cli.
Comment From: jaques-sam
```
$ ps aux | rg redis
sam  804823  0.0  0.0  20616  4148 pts/2  S+  11:47  0:00 redis-cli
```
```
$ gdb -batch -ex "bt" -p 804823
This GDB supports auto-downloading debuginfo from the following URLs:
  <https://debuginfod.fedoraproject.org/>
Enable debuginfod for this session? (y or [n]) [answered N; input not from terminal]
Debuginfod has been disabled.
To make this setting permanent, add 'set debuginfod enabled off' to .gdbinit.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f8dbd72c5dd in recv () from /lib64/libc.so.6
#0  0x00007f8dbd72c5dd in recv () from /lib64/libc.so.6
#1  0x0000556a235c4b6d in redisNetRead ()
#2  0x0000556a235cbd1c in redisBufferRead ()
#3  0x0000556a235ccc21 in redisGetReply ()
#4  0x0000556a235ccde4 in redisCommand ()
#5  0x0000556a235a8e90 in cliInitHelp.lto_priv.0 ()
#6  0x0000556a235ae433 in repl.lto_priv ()
#7  0x0000556a2359de79 in main ()
[Inferior 1 (process 804823) detached]
```
Memory is not increasing here:
```
$ ps aux | rg redis
sam  804823  0.0  0.0  20616  4252 pts/2  S+  11:50  0:00 redis-cli
$ ps aux | rg redis
sam  804823  0.0  0.0  20616  4252 pts/2  S+  11:50  0:00 redis-cli
$ ps aux | rg redis
sam  804823  0.0  0.0  20616  4252 pts/2  S+  11:51  0:00 redis-cli
$ ps aux | rg redis
sam  804823  0.0  0.0  20616  4252 pts/2  S+  11:51  0:00 redis-cli
$ ps aux | rg redis
sam  804823  0.0  0.0  20616  4252 pts/2  S+  11:51  0:00 redis-cli
```
Comment From: sundb
From your output we can see that redis-cli consumes only a little memory.
It isn't stuck; rather, it can't receive a reply (I don't know why it doesn't time out, maybe a bug).
Did you forget to turn off the forwarded port in VSCode? I suspect that may be causing the problem.
Comment From: jaques-sam
As said:
- Removing the forwarded port in VSCode fixes the problem; it releases the GBs in my main memory
- Quitting redis-cli also releases the GBs in my main memory

Since redis-cli is not increasing in memory in (h)top/ps, isn't that because the memory is consumed in kernel space?
Comment From: sundb
> As said:
> - removing the forwarding port in VSCode fixes the problem, it dumps the GBs in my main memory
> - quiting redis-cli also dumps the GBs in my main memory
>
> Because redis-cli is not increasing in memory in (h)top/ps, isn't that because memory is consumed in Kernel space?

@jaques-sam no, I guess it's caused by VSCode.