When MEMORY STATS reports rss-overhead.bytes < 0, INFO memory shows rss_overhead_bytes as an extremely large number. The signed value appears to be formatted as unsigned.
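The arithmetic supports this: reinterpreting the negative value as an unsigned 64-bit integer reproduces the number INFO shows. A minimal sketch (values taken from the output below):

```python
import ctypes

# rss-overhead.bytes as reported by MEMORY STATS
rss_overhead = -8458240

# Reinterpret the same 64-bit pattern as unsigned, as an
# unsigned printf-style format specifier would:
as_unsigned = ctypes.c_uint64(rss_overhead).value
print(as_unsigned)  # 18446744073701093376
```

This matches the rss_overhead_bytes value in the INFO memory output exactly (2^64 - 8458240).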
See the output below from piping echo -e "multi\nmemory doctor\ninfo server\ninfo memory\nmemory stats\nexec" into redis-cli:
QUEUED
QUEUED
QUEUED
QUEUED
Hi Sam, I can't find any memory issue in your instance. I can only account for what occurs on this base.
# Server
redis_version:5.0.0
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:7145853456ecc6a
redis_mode:standalone
os:Linux 3.10.0-957.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:4.8.5
process_id:7751
run_id:96589716a6aa7e6c2f812a56799112a725bbde67
tcp_port:7000
uptime_in_seconds:133
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:16703054
executable:/bin/redis-server
config_file:/opt/DASoftware/redis/conf/7000.conf
# Memory
used_memory:278460312
used_memory_human:265.56M
used_memory_rss:282845184
used_memory_rss_human:269.74M
used_memory_peak:281782376
used_memory_peak_human:268.73M
used_memory_peak_perc:98.82%
used_memory_overhead:4078157
used_memory_startup:512568
used_memory_dataset:274382155
used_memory_dataset_perc:98.72%
allocator_allocated:278469736
allocator_active:280657920
allocator_resident:291303424
total_system_memory:8201719808
total_system_memory_human:7.64G
used_memory_lua:52224
used_memory_lua_human:51.00K
used_memory_scripts:752
used_memory_scripts_human:752B
number_of_cached_scripts:2
maxmemory:3221225472
maxmemory_human:3.00G
maxmemory_policy:noeviction
allocator_frag_ratio:1.01
allocator_frag_bytes:2188184
allocator_rss_ratio:1.04
allocator_rss_bytes:10645504
rss_overhead_ratio:0.97
rss_overhead_bytes:18446744073701093376
mem_fragmentation_ratio:1.02
mem_fragmentation_bytes:4447232
mem_not_counted_for_evict:15
mem_replication_backlog:1048576
mem_clients_slaves:99372
mem_clients_normal:2023346
mem_aof_buffer:15
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
peak.allocated 281782376
total.allocated 278460304
startup.allocated 512568
replication.backlog 1048576
clients.slaves 99372
clients.normal 2023346
aof.buffer 15
lua.caches 752
db.0
  overhead.hashtable.main 361176
  overhead.hashtable.expires 12560
db.9
  overhead.hashtable.main 19656
  overhead.hashtable.expires 32
db.11
  overhead.hashtable.main 72
  overhead.hashtable.expires 32
overhead.total 4078157
keys.count 7781
keys.bytes-per-key 35721
dataset.bytes 274382147
dataset.percentage 98.717170715332031
peak.percentage 98.821052551269531
allocator.allocated 278469736
allocator.active 280657920
allocator.resident 291303424
allocator-fragmentation.ratio 1.0078579187393188
allocator-fragmentation.bytes 2188184
allocator-rss.ratio 1.0379304885864258
allocator-rss.bytes 10645504
rss-overhead.ratio 0.97096413373947144
rss-overhead.bytes -8458240
fragmentation 1.0159744024276733
fragmentation.bytes 4447232
Comment From: andrewsensus
A similarly large value is reported in https://github.com/antirez/redis/issues/5111
Comment From: oranagra
Fixed by https://github.com/antirez/redis/pull/5633