We recently had an issue where Redis would needlessly evict keys when at maxmemory while under heavy client load. This turned out to be because, on each pass through aeMain, Redis would handle many pending client GET requests with large returned values. To get back below maxmemory, Redis would then free keys after each client request, even though the reply buffers were only temporary.
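
For context, here is a simplified sketch of the accounting that causes this, paraphrasing freeMemoryIfNeeded() in evict.c (not the exact source; this check runs before every command when maxmemory is set):

```c
/* Simplified sketch of the maxmemory check. Normal clients' pending
 * reply buffers are part of zmalloc_used_memory() and are NOT
 * subtracted below, so a burst of large GET replies can push
 * mem_used over maxmemory and trigger eviction, even though those
 * buffers are freed as soon as the replies hit the sockets. */
int freeMemoryIfNeeded(void) {
    size_t mem_used = zmalloc_used_memory();

    /* Replica output buffers and AOF buffers are deliberately
     * not counted against maxmemory. */
    size_t overhead = freeMemoryGetNotCountedMemory();
    mem_used = (mem_used > overhead) ? mem_used - overhead : 0;

    if (mem_used <= server.maxmemory) return C_OK;

    /* ... otherwise evict keys per maxmemory-policy until we are
     * back under the limit (eviction loop omitted) ... */
    return C_OK;
}
```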

We fixed this by calling handleClientsWithPendingWrites() in processInputBuffer() to prevent this "reply backlog" from building up. Another solution would be to exclude pending client output buffer memory from the maxmemory accounting, via freeMemoryGetNotCountedMemory().
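
Concretely, the workaround looks roughly like this (an illustrative sketch, not the actual patch; request parsing is elided, and the exact placement of the call is only schematic):

```c
/* processInputBuffer() normally just queues replies, which are
 * flushed once per event-loop iteration from beforeSleep(). */
void processInputBuffer(client *c) {
    while (sdslen(c->querybuf)) {
        /* ... parse the next request and consume it from
         * c->querybuf ... */
        processCommand(c);

        /* Proposed addition: drain pending reply buffers after each
         * command instead of waiting for beforeSleep(), so transient
         * reply data does not inflate zmalloc_used_memory() and trip
         * the maxmemory check. */
        handleClientsWithPendingWrites();
    }
}
```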

Curious if others have an opinion on this. Happy to submit a patch if people feel this is an actual bug.

Comment From: itamarhaber

Hello @g-d-l

IMO this is indeed an issue that should be addressed somehow - @antirez @oranagra @soloestoy WDYT?

Comment From: soloestoy

We encountered a similar problem and discussed it in #4668, and the key point is, as @antirez said:

the problem is that normal Redis users do not know all these details, so when they set "maxmemory 4GB" they expect the server to use maximum 4GB.

At that time we agreed not to make any change, but we can reconsider it now.


Another solution would be to exclude pending client output buffer memory from the maxmemory accounting, via freeMemoryGetNotCountedMemory()

But I don't think that's a good idea: if too many clients accumulate large output buffers, the uncounted memory may lead to an OS-level OOM.
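
For reference, that alternative would amount to something like the following sketch. The replica and AOF parts mirror the existing function; the final loop is the hypothetical addition being argued against:

```c
size_t freeMemoryGetNotCountedMemory(void) {
    size_t overhead = 0;
    listIter li;
    listNode *ln;

    /* Existing behavior: replica output buffers are not counted. */
    listRewind(server.slaves,&li);
    while ((ln = listNext(&li))) {
        client *slave = listNodeValue(ln);
        overhead += getClientOutputBufferMemoryUsage(slave);
    }

    /* Existing behavior: AOF buffers are not counted. */
    if (server.aof_state != AOF_OFF)
        overhead += sdslen(server.aof_buf) + aofRewriteBufferSize();

    /* Hypothetical addition: also exclude normal clients' pending
     * output buffers. Downside: reply memory is then unbounded by
     * maxmemory, so many clients with large pending replies could
     * drive the process into an OS OOM kill. */
    listRewind(server.clients,&li);
    while ((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        if (c->flags & CLIENT_SLAVE) continue; /* already counted */
        overhead += getClientOutputBufferMemoryUsage(c);
    }
    return overhead;
}
```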

Comment From: oranagra

@g-d-l please note that the solution you implemented violates the AOF fsync-always guarantee: replies will be sent to the client before being written and fsynced to disk.

if we distinguish between a process crash on one hand and a system crash or power loss on the other, we can consider any write to the AOF (even without an fsync) safe against a process crash, in which case this change violates every AOF configuration, not just fsync-always.
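
To illustrate the ordering guarantee at stake, a simplified sketch of beforeSleep() in server.c:

```c
/* beforeSleep() runs once per event-loop iteration, before the
 * process blocks waiting for new events. */
void beforeSleep(struct aeEventLoop *eventLoop) {
    /* 1. Persist this iteration's writes first. With
     *    appendfsync always this includes an fsync(). */
    flushAppendOnlyFile(0);

    /* 2. Only then send pending replies, so a client never sees an
     *    acknowledgement for a write that hasn't reached the AOF.
     *    Calling handleClientsWithPendingWrites() from inside
     *    processInputBuffer() inverts this order. */
    handleClientsWithPendingWrites();
}
```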