Describe the bug
One of our users wrote a string of almost 500MB to Redis, and my Redis instance's maxmemory is configured to 1GB, but it always responded "OOM command not allowed when used memory > 'maxmemory'" to the client.
To reproduce
- redis-cli flushall
- redis-cli config set maxmemory 1GB
- memtier_benchmark -s 127.0.0.1 -p 6379 -c 1 -t 1 --ratio=1:0 --data-size 512000000 --key-maximum=1
- redis-cli info memory keyspace
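(With these flags memtier_benchmark sends only SET commands (--ratio=1:0), each carrying a roughly 512MB value (--data-size 512000000), against at most two keys (--key-maximum=1), so a single write of a ~500MB string is enough to reproduce the error. The last step produced the output below.)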
127.0.0.1:6379> info memory keyspace
# Memory
used_memory:537860680
used_memory_human:512.94M
used_memory_rss:492453888
used_memory_rss_human:469.64M
used_memory_peak:544027376
used_memory_peak_human:518.82M
used_memory_peak_perc:98.87%
used_memory_overhead:537739292
used_memory_startup:864576
used_memory_dataset:121388
used_memory_dataset_perc:0.02%
allocator_allocated:538548720
allocator_active:538828800
allocator_resident:545976320
total_system_memory:8332824576
total_system_memory_human:7.76G
used_memory_lua:31744
used_memory_vm_eval:31744
used_memory_lua_human:31.00K
used_memory_scripts_eval:0
number_of_cached_scripts:0
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:32768
used_memory_vm_total:64512
used_memory_vm_total_human:63.00K
used_memory_functions:184
used_memory_scripts:184
used_memory_scripts_human:184B
maxmemory:1073741824
maxmemory_human:1.00G
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:280080
allocator_rss_ratio:1.01
allocator_rss_bytes:7147520
rss_overhead_ratio:0.90
rss_overhead_bytes:-53522432
mem_fragmentation_ratio:0.92
mem_fragmentation_bytes:-45386040
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_total_replication_buffers:0
mem_clients_slaves:0
mem_clients_normal:536874532
mem_cluster_links:0
mem_aof_buffer:0
mem_allocator:jemalloc-5.2.1
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0
# Keyspace
127.0.0.1:6379>
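Note that even though the keyspace is empty, mem_clients_normal is 536874532 (~512MB); that memory appears to be held by the client's query buffer rather than by any stored data.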
I think there are two small problems in Redis in this test case: 1. The client always anticipates that we'll see another fat argument, but does not consider the expense of that preallocation.
networking.c
int processMultibulkBuffer(client *c) {
    ...
        c->argv[c->argc++] = createObject(OBJ_STRING,c->querybuf);
        c->argv_len_sum += c->bulklen;
        sdsIncrLen(c->querybuf,-2); /* remove CRLF */
        /* Assume that if we saw a fat argument we'll see another one
         * likely... */
        c->querybuf = sdsnewlen(SDS_NOINIT,c->bulklen+2); /* <-- here we should check whether bulklen is too large; IMO it is appropriate to cap this preallocation at PROTO_MBULK_BIG_ARG*2 */
        sdsclear(c->querybuf);
    ...
}
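For illustration only, a minimal sketch of that idea (this is not the actual change proposed in #11833) would cap the speculative preallocation, assuming PROTO_MBULK_BIG_ARG keeps its usual meaning as the 32KB big-argument threshold:

/* Sketch: cap the speculative preallocation instead of sizing it to the
 * previous bulk length, so one fat argument cannot double the client's
 * memory footprint. */
size_t prealloc = c->bulklen + 2;
if (prealloc > (size_t)PROTO_MBULK_BIG_ARG * 2)
    prealloc = (size_t)PROTO_MBULK_BIG_ARG * 2; /* e.g. 64KB instead of ~512MB */
c->querybuf = sdsnewlen(SDS_NOINIT, prealloc);
sdsclear(c->querybuf);

With a cap like this, a following fat argument would grow the query buffer on demand as it is read instead of reserving another c->bulklen bytes up front.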
2. used_memory_peak is not updated when an OOM error is replied to the client, which looks weird to the user.
evict.c
int getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *level) {
    ...
    /* Check if we are over the memory usage limit. If we are not, no need
     * to subtract the slaves output buffers. We can just return ASAP. */
    mem_reported = zmalloc_used_memory();
    if (total) *total = mem_reported;
    if (server.stat_peak_memory < mem_reported) {
        server.stat_peak_memory = mem_reported; /* <-- we should update the memory peak immediately here, so that when eviction occurs the INFO command shows that used_memory_peak is indeed greater than maxmemory */
    }
    ...
}
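In the INFO output above, used_memory_peak_human is only 518.82M even though the OOM error had already been returned, so nothing in INFO suggests that memory usage ever approached the 1GB limit; updating the peak inside getMaxmemoryState would presumably let INFO report a used_memory_peak greater than maxmemory in this scenario.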
Comment From: judeng
I'm trying to fix this bug, but I have a question now: why does Redis only update used_memory_peak in serverCron and in the INFO command? It looks like a real-time update in zmalloc would only need an added if statement, which does not look like it would affect performance.
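For context, a generic, self-contained sketch of that "just add an if statement" idea looks like this; tracked_malloc and the two counters are hypothetical names for this illustration, not Redis's actual zmalloc API:

/* Toy allocator wrapper: keep a peak next to the running used-memory
 * counter on every allocation (frees are ignored for brevity). */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static _Atomic size_t used_memory = 0;
static _Atomic size_t peak_memory = 0;

static void *tracked_malloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL) return NULL;
    size_t used = atomic_fetch_add(&used_memory, size) + size;
    /* The extra branch in question: keep the peak current in real time. */
    size_t peak = atomic_load(&peak_memory);
    while (used > peak &&
           !atomic_compare_exchange_weak(&peak_memory, &peak, used));
    return p;
}

int main(void) {
    void *a = tracked_malloc(512 * 1024);
    void *b = tracked_malloc(256 * 1024);
    printf("used=%zu peak=%zu\n",
           (size_t)atomic_load(&used_memory), (size_t)atomic_load(&peak_memory));
    free(a);
    free(b);
    return 0;
}

The usual counter-argument is that this puts an extra load and branch on the allocation hot path, which is presumably why Redis samples the peak in serverCron and INFO instead.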
Comment From: hwware
It looks like this should not happen, because the Redis documentation says that elements representing single strings are normally limited to 512 MB. Please try to fix it, thanks.
Comment From: hwware
@judeng I think this may not be a bug; it happens because of the maxmemory_policy. If you change the maxmemory_policy to allkeys-lru instead of noeviction, it works well.
BTW: the default maxmemory_policy is noeviction.
Comment From: sundb
@hwware it doesn't seem to be related to maxmemory_policy.
Please see the comment "Assume that if we saw a fat argument we'll see another one likely..."
c->argv[c->argc++] = createObject(OBJ_STRING,c->querybuf); <- alloc 512M memory
c->argv_len_sum += c->bulklen;
sdsIncrLen(c->querybuf,-2); /* remove CRLF */
/* Assume that if we saw a fat argument we'll see another one
* likely... */
c->querybuf = sdsnewlen(SDS_NOINIT,c->bulklen+2); <- alloc 512M memory
sdsclear(c->querybuf);
This will cause the overall memory to exceed maxmemory(1GB).
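Roughly: the ~512MB argument object (the old query buffer) and the freshly preallocated ~512MB query buffer are alive at the same time, so at the moment the command is checked against maxmemory, zmalloc_used_memory() is already above 1GB and, with noeviction, the command is rejected with the OOM error.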
Comment From: judeng
This will cause the overall memory to exceed maxmemory(1GB).
Agree. And when maxmemory_policy is allkeys-lru and Redis has many keys that occupy a total of 500MB of space, Redis will evict all of these keys because of the fat client. maxmemory-clients is perhaps helpful, but it still doesn't solve the problem.
Comment From: judeng
I have created a PR to fix this bug, please see #11833
Comment From: hwware
Assume that if we saw a fat argument
Yes, you are right, this case is not related to the maxmemory_policy, because I notice that no key exists at all when this error happens (it should have stored at least 2 keys in the database). But overall memory exceeding maxmemory is acceptable; it will cause the key eviction process to run when a new command comes in (of course, only if the maxmemory_policy is not noeviction).
Comment From: judeng
But overall memory exceeding maxmemory is acceptable; it will cause the key eviction process to run when a new command comes in.
@hwware Yes, I re-did a similar test and reached the same conclusion as yours, please refer here