How is it possible that, with maxmemory 64mb set in redis.conf, the peak memory usage is almost twice that size -> used_memory_peak_human:106.23M?
I've seen this on 5.0.7 and git master.
$ redis-cli -s /tmp/redis.sock info memory
# Memory
used_memory:3194560
used_memory_human:3.05M
used_memory_rss:135745536
used_memory_rss_human:129.46M
used_memory_peak:111389024
used_memory_peak_human:106.23M
used_memory_peak_perc:2.87%
used_memory_overhead:2198952
used_memory_startup:1016656
used_memory_dataset:995608
used_memory_dataset_perc:45.71%
allocator_allocated:3110992
allocator_active:135707648
allocator_resident:135707648
total_system_memory:34359738368
total_system_memory_human:32.00G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:67108864
maxmemory_human:64.00M
maxmemory_policy:noeviction
allocator_frag_ratio:43.62
allocator_frag_bytes:132596656
allocator_rss_ratio:1.00
allocator_rss_bytes:0
rss_overhead_ratio:1.00
rss_overhead_bytes:37888
mem_fragmentation_ratio:43.63
mem_fragmentation_bytes:132634544
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:66664
mem_aof_buffer:0
mem_allocator:libc
active_defrag_running:0
lazyfree_pending_objects:0
Comment From: tessus
Sorry, I will ask this on https://groups.google.com/forum/m/#!forum/Redis-db
Comment From: tessus
Hmm, posted a topic in google groups, but it's not showing up.
Comment From: itamarhaber
xref: https://groups.google.com/d/msg/redis-db/nKHdXLSbmGs/iBRojtgQBAAJ
Comment From: tessus
@itamarhaber I have the feeling that this is a bug, so I will update the first post accordingly, with steps to reproduce, version info, .... or shall I close this and open a new one?
Comment From: itamarhaber
Sorry for being out of focus... it isn't a bug, used_memory_peak refers to the peak of overall memory consumption, which also includes elements other than the dataset, such as buffers and forks IIRC. It can, therefore, exceed the maxmemory directive that concerns the data itself.
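A quick way to see that breakdown on a running instance (a sketch assuming a local instance; exact field names can vary slightly between versions) is the MEMORY STATS command:
$ redis-cli memory stats
# relevant fields: peak.allocated (overall peak), dataset.bytes (the data itself),
# overhead.total and clients.normal (buffers and other non-dataset memory)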
Comment From: tessus
This seems odd though.
In that case I don't need a maxmemory parameter if it's not honored. Either there is a limit on how much memory Redis can use, or there isn't. There is no in-between; the two are mutually exclusive.
The following example is extreme, but possible: Swap is turned off. Machine has 8 GB RAM. maxmemory is set to 7GB. -> crash - because Redis ignores the maxmemory limit.
I set it to 64MB, but the peak was about twice that. So it's not exceeding the limit, it's ignoring it. Exceeding would be something like 5% over, not 100% over.
In either case, I seriously do not understand how this is possible. If a realloc or malloc would push usage past maxmemory, don't allocate. Apparently this check is not working, is ignored, or doesn't exist.
Comment From: AngusP
@tessus I think the intent of maxmemory is more towards setting a maximum dataset size, so that it's easy to reason about when Redis is used as a cache or in a similar role. For example, a 2GB cache will keep a fairly predictable amount of data if what you're caching is of fairly uniform size. The maxmemory will then set the point at which evictions start happening. You might even want to run a bunch of Redis-as-a-cache instances on the same machine. In this use case there's less of a concern about going over that maxmemory limit once client buffers and internal Redis memory overhead are considered, assuming there's still a fair amount of available memory in the system.
In a somewhat silly example, you could set maxmemory to 512MB and store a single key (e.g. a large image) that takes all of that 512MB. If you then read the key, Redis will need to copy the whole value into the client output buffer, taking >512MB more memory to do so, as responses are (as far as I recall, anyway) copies of, not references to, the original data. There are also Redis commands that don't return data from the dataset (INFO, LOLWUT, ...), so they too would need to allocate memory beyond what has already been taken. If your machine has insufficient memory for this then I guess Redis will abort() and crash.
While I can see why being able to arbitrarily cap memory usage under the system maximum would be useful, if a call to malloc fails, Redis (and most other software) isn't designed to be able to continue in that condition, so can't do much else but crash, or return nothing but errors until it can free up some space. There are seemingly a few fancy ways of doing this on some operating systems, like Linux's control groups (or VMs).
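To be clear, the dataset limit itself is enforced; a minimal sketch against a throwaway local instance (output abridged):
$ redis-cli config set maxmemory 64mb
OK
$ redis-cli config set maxmemory-policy noeviction
OK
# ...once the dataset grows past the limit, writes are refused:
$ redis-cli set somekey somevalue
(error) OOM command not allowed when used memory > 'maxmemory'.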
Comment From: tessus
@AngusP I'm sorry, but this still makes no sense. Redis knows its current memory usage, so if a request would create a new structure that pushes it past maxmemory, it could just return an error saying that no new structure can be added/set. But this is not the case. So why is there a maxmemory setting?
In your example you talk about server and client memory. Client memory should not be added to the server's peak memory, because the client uses its own memory pool.
So however you turn it, something is off.
Comment From: AngusP
In your example you talk about server and client memory. Client memory should not be added to the server's peak memory, because the client uses its own memory pool.
Sorry, when I said client memory, I meant the server-side memory that is allocated for the purpose of responding to a client.
I get what you're saying, but to implement a hard memory limit, Redis would have to leave itself room for copying the entire dataset into (so a 1GB maxmemory would need approx 2GB of system memory, a 32GB max 64GB, etc.), so that it is capable of responding to a DUMP or MULTI requesting every key, or it would become more and more error-prone as the size of the dataset went past half of maxmemory and approached full. This would also make replication and persistence more complex as they too use memory outside of the dataset.
A possible solution might be a maxoverheadmemory config option to cap this non-dataset memory usage, so that Redis' total usage never exceeded maxmemory + maxoverheadmemory, but I'm not sure how many people would need this, given it would be non-trivial to implement.
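For what it's worth, a few existing directives can already bound parts of that non-dataset memory; a sketch with purely illustrative values (note that maxmemory-clients only exists from Redis 7.0 on):
# redis.conf, illustrative values only
client-output-buffer-limit normal 32mb 16mb 60
proto-max-bulk-len 64mb
maxmemory-clients 128mb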
Comment From: tessus
I'm OK if Redis won't go over double the limit I set; in that case I'll just set it to 50% of what I want.
My main concern was that peak memory was much higher than what was set in maxmemory.
On the other hand, maybe the server should make the adjustment internally. So when I set it to 1 GB, it uses 512 MB internally, which would then keep it under the 1 GB I really want as the cap. Do I make any sense?
Comment From: tessus
Actually I just set maxmemory 32mb and after running redis-benchmark --dbnum 10 -q -c 20 -n 100000 -P 1000 the following happened: used_memory_peak_human:107.14M
Hmm, ok, now I give up. maxmemory just does not work as intended - at least not when compared to the description:
# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
Nobody can tell me that this behavior is not a bug. It's the same as not setting a limit. So what now?
Comment From: AngusP
I didn't think of it earlier, but if you're on a Linux machine and have transparent huge pages enabled (which is a common default), this can cause "memory usage issues with Redis". Turning it off might reduce the size of your memory overhead.
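To check and disable THP on Linux (requires root; Redis' own startup log suggests the same, and you'd persist it in your boot scripts if it helps):
$ cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
$ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
never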
But the general point, which I think could be made clearer in the redis.conf comment you reference, is that maxmemory is a policy that only affects the maximum size of the database (and caching behaviour, depending on the eviction policy maxmemory-policy), and is not used as a limit for Redis' overhead memory and network/replication/persistence buffers etc. This means it won't directly relate to used_memory_peak_human:107.14M -- it is instead a constraint on used_memory_dataset.
So I think it might be more that the conf file comment has a bug in not being clear enough, and that there is also a feature request here for safer guarantees that Redis won't OOM ungracefully and will keep itself within a specified memory usage?
Comment From: tessus
I'm still not sure we've come to a conclusion yet.
Please answer the following: why is there a maxmemory parameter, when it is ignored and Redis uses more than twice the size that is set? (I've seen 3 times the value as peak memory.) The thing is, I'm not even sure any limit is respected at all.
So Redis either honors a setting or it does not. In the latter case the parameter is useless and can be removed. All previous arguments are interesting excuses, but a limit is supposed to be a limit; otherwise I don't need it. You can't tell people to set a parameter so that Redis will start evicting keys or return an error on writes (without using more memory than specified in either case), when neither is true.
Comment From: AngusP
Please answer the following: why is there a maxmemory parameter, when it is ignored and Redis uses more than twice the size that is set?
Because the (seemingly poorly named) maxmemory limits the size of the dataset, not the total RSS
The thing is I'm not even sure if any limit is respected at all.
It absolutely is; there are tests covering maxmemory cases in tests/unit/maxmemory.tcl, for instance:
https://github.com/antirez/redis/blob/8105f91a0202829c685da7967c2f60d477c2d1e9/tests/unit/maxmemory.tcl#L29
If those tests are failing for you then that’s a whole different problem! I’ll assume they’re passing?
The use case you appear to be asking for is a hard RSS limit, which isn’t currently implemented
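One way to see the distinction on a running instance (the values below are copied from the INFO output at the top of this issue; note that the dataset stays well under maxmemory while RSS does not):
$ redis-cli -s /tmp/redis.sock info memory | grep -E 'used_memory_rss:|used_memory_dataset:|maxmemory:'
used_memory_rss:135745536
used_memory_dataset:995608
maxmemory:67108864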
Comment From: tessus
If those tests are failing for you then that’s a whole different problem! I’ll assume they’re passing?
They pass. But that's not the point. I rather meant that maybe the test itself is broken.
The use case you appear to be asking for is a hard RSS limit, which isn’t currently implemented
I think you still didn't get my point. Of course I'm asking for a limit on the maximum memory the app uses. That's what max memory means, at least in every other app in the world. I seriously don't care about the internals. If I set maxmemory, I want the app to not use more memory than that. Otherwise this parameter is useless!!!
Why would anyone set this if no one knows how much memory will actually be used? As I said, I have seen peak memory at 3 times the maxmemory value.
Comment From: mkuchen
@tessus @AngusP I'm experiencing this behavior on a high-throughput application with a large Redis caching instance running redis==5.0.8 on Heroku.
It's causing major issues with response times because the Redis instance is overloaded. The maxmemory is set to 25GB but it's gone as high as 55GB. At that point Redis performance is far, far worse than just fetching the data from our Postgres instance.
Do you guys have any thoughts on how to mitigate the issue? I've also looked into running multi-node Redis for caching but that seems like an unnecessarily complex setup.
Comment From: tessus
@mkuchen well, it all depends. I don't know the Redis code, thus I can't say whether there is any latch contention or other potential bottleneck.
If parts of the 55GB memory space are actually swapped to disk, there's no way to improve perf. I suspect that you might have hit the limit and Redis returns errors and/or can't evict keys fast enough, which, depending on how fast such an error is returned, could create a snowball effect. (In this case point number 2 below would not help either.) I'm just speculating here. Without profiling and tracing there's too little info to give a definitive answer.
IMO the maxmemory parameter is absolutely useless. It does not limit the memory as the name would suggest. But here are my suggestions:
- buy more RAM and remove the limit
- set maxmemory to 1/4 of the limit you actually want. So in your case, set it to 6-8GB
- change the app so that keys are expired or re-used
Either way, you will have to experiment, since you can't trust the value you set.
If your company can shell out some money for a support contract, I guess RedisLabs would be a good idea.
Sorry that I couldn't be of more help.
Comment From: zippy-zebu
I have to agree with @tessus. Maxmemory simply does not do what is described. This is totally different from memcached, HyperDex, or AWS RDS.
We have set this to 50GB (with 128 GB RAM) thinking the same way (transparent huge pages were disabled) in production. But memory keeps increasing until the kernel OOM killer kills it. This is simply not acceptable.
And what is used_memory_dataset in this case? How do we find out what used_memory_dataset should be? If it doesn't respect used_memory_human, remove the setting or change the description.
I think it is better to introduce maxoverheadmemory as explained in the other comment.
Comment From: rafaelsierra
If maxmemory is not related to RSS, how are you supposed to know what value to set it to? Should your database be 50% of the available memory? 30%? There is no math you can do to reliably say how much memory Redis will actually need while running, and since the whole point (in my case, using it as a cache) is to be fast, falling back to swap is not an option; I have to keep my whole database in memory.
Comment From: PingXie
I think there is a lot of value in implementing what @tessus proposed here. There are many legit use cases where an application would rather get an OOM failure or even connection drops as opposed to driving Redis to crashes and losing all data (or living with painfully slow swapping). An explicit contract that bounds heap memory usage would be ideal for these applications. Note that I specifically leave out the stack and cache memory because these types of memory are not significant contributors.
On the other hand, as @AngusP mentioned earlier, the enforcement of this new contract is not going to be very straightforward, as every memory allocation would now need to be patched up with a check that ensures the allocation, if allowed, will not push Redis over the limit. This enforcement is further complicated by Redis 6.x's IO multi-threading, where allocations can happen on multiple threads concurrently. Last but not least, the caller that allocates the memory would now need to be prepared to deal with allocation failures, and the exact error handling would differ from one code path to another (think of replication vs. command handling).
I think I might have some time to help improve Redis in this area. It might take quite a few iterations to patch up all allocations to honor this new setting though. As a start, I would prefer a single threshold that defines this max memory allowance. The math of adding maxmemory and maxoverheadmemory is not self-explanatory and hence can be confusing. Since maxmemory (it really should have been called max_db_memory to begin with, but anyway) is taken, I wonder if "max_heap_memory" would be the next best option?
Comment From: QuangTung97
I have to agree with @tessus. Maxmemory simply does not do what is described. This is totally different from memcached, HyperDex, or AWS RDS.
We have set this to 50GB (with 128 GB RAM) thinking the same way (transparent huge pages were disabled) in production. But memory keeps increasing until the kernel OOM killer kills it. This is simply not acceptable.
And what is used_memory_dataset in this case? How do we find out what used_memory_dataset should be? If it doesn't respect used_memory_human, remove the setting or change the description. I think it is better to introduce maxoverheadmemory as explained in the other comment.
Did you turn on RDB persistence? Redis uses fork(), so with a fairly big memory usage (50GB), the fork can roughly double the RAM usage.
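If fork-time copy-on-write is the problem, one commonly suggested mitigation (which Redis also logs a warning about at startup) is enabling memory overcommit on Linux; a sketch:
$ sudo sysctl vm.overcommit_memory=1
vm.overcommit_memory = 1
# to persist across reboots:
$ echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf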
Unless there is a big re-implementation using memcached-style slab allocation or something similar, instead of relying on jemalloc, the maxoverheadmemory setting will probably never come to life.
I will settle for memcached for now.
Comment From: garry-t
Ran into the same problem. I was confused by the nature of the parameter name and what exactly it means for Redis itself.
Currently running a stress test for Redis in HA mode on a machine with 2GB of memory, with the limit set to 80% of RAM (~1.6GB).
Eventually the Redis process was killed when memory usage, according to the Redis exporter, reached 1.46 GB.
In kernel.log I see:
kernel: [ 6212.230271] Out of memory: Killed process 10680 (redis-server) total-vm:3210104kB, anon-rss:1679516kB, file-rss:2988kB, shmem-rss:0kB, UID:996 pgtables:5440kB oom_score_adj:0
total-vm:3210104kB = 3.2 GB :)
I'm confused about how to properly set these limits to avoid any OOM issue.
I set maxmemory to 70% of memory and maxmemory-policy to "allkeys-random" in my test case.
Now it seems stable.
Comment From: tessus
@garry-t unfortunately this parameter does not do anything.
The explanation why this parameter is not limiting the memory did not make any sense to me. It's the first time I've come across a project where setting a value for max memory is ignored and apparently the devs think that this is the most normal thing ever.
As for a solution, you will have to set up your landscape in a way where Redis nodes can just be respawned (either via systemd or container management) and set resource limits on the OS side (e.g. cgroups).
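A sketch of such an OS-side cap with systemd (cgroups v2); the drop-in path, unit name, and the 1800M value are just examples for a 2GB machine, adjust to your distro and sizing:
# /etc/systemd/system/redis-server.service.d/memory.conf
[Service]
MemoryMax=1800M
Restart=always
RestartSec=5
$ sudo systemctl daemon-reload && sudo systemctl restart redis-server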
Comment From: garry-t
@tessus Yes, agreed. Once the number of client connections reached 1000, the eviction process was not fast enough and my master was killed due to OOM; Sentinel failed it over to another replica, then systemd restarted the old master, and so on in a cycle :) . During this night it happened 4 times. So yes, the problem exists.
Comment From: garry-t
Update: I was able to make Redis stable under intensive write/read load, but only for 500 clients.
Redis 7.2.4, Sentinel and Redis installed on the same machine, 3 nodes in the cluster.
Environment:
Python 3.11 with threads and a connection pool. 500 clients, each writing 2500KB and reading the same key every second, no TTL set. VM: 2 CPU and 2GB RAM, Ubuntu 20.04, Redis TLS enabled. redis.conf:
oom-score-adj yes
oom-score-adj-values -15 200 800 (This should not be used, since oom-score-adj set to yes. see docs )
maxclients 1000
maxmemory 1288490188 (60% RAM)
tcp-backlog 10000
tcp-keepalive 300
maxmemory-policy allkeys-lru (For test I'm ok)
maxmemory-eviction-tenacity 100 ( Most Important thing)
# Append Only Mode
appendonly yes
appendfilename "appendonly.aof"
appenddirname "appendonlydir"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# Snapshotting
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
After setting maxmemory-eviction-tenacity to 100 and oom-score-adj to yes, the OOM problem was gone. At least over a 4h stress test run there were no master failovers. Maybe this will be useful for someone.
For 1000 clients and a 100KB payload, the number of keys in the DB increased from 500K to 30 million, and the eviction process became insufficient, for some reason evicting fewer keys than it previously did for the 2500KB payload; all this leads to memory usage growing step by step until memory eventually runs out :).
So I think the eviction process should be more aggressive. I even set "maxmemory-samples 64", thinking that would make it more aggressive, but it doesn't help.
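For reference, these eviction knobs can also be changed at runtime; a sketch (values are illustrative, and CONFIG SET is not persisted unless you also run CONFIG REWRITE):
$ redis-cli config set maxmemory-eviction-tenacity 100
OK
$ redis-cli config set maxmemory-samples 64
OK
$ redis-cli config rewrite
OK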
P.S.: I know :). I don't want to disable RDB and AOF.
Comment From: QuangTung97
@garry-t What is the distribution of key sizes in your stress test? I think OOM is more likely when there is a wide range of key sizes.
Comment From: garry-t
@QuangTung97 key sizes were constant: 2500KB and 100KB.
Comment From: Pradeep205
I'm using a Redis DB with a capacity of 1GB, and I stored 10MB of JSON data in one key. One of my APIs accesses that key at 10 hits per second, and it suddenly spikes and throws a memory outage issue. Any leads?
Comment From: tessus
I'm closing this. I don't expect this to be addressed by Redis. Especially after what they pulled earlier this year. Goodbye Redis.
@Pradeep205 this might be a different issue. I suggest opening a new one.