We are using Redis Streams in our application, with two Docker services: Service1 (Java) publishes data to the stream, and Service2 (C++) consumes it. After consuming an entry, Service2 deletes it from the stream using XDEL.
The requirement is that if Service2 goes down, data should accumulate in the stream until the allowed maxmemory limit is exhausted (we will set maxmemory to around 50-100MB). Once that limit is reached, the oldest entries should be deleted one by one.
But the current behavior with the config (maxmemory: 50mb, maxmemory-policy: allkeys-lru) is that it does not delete entries one by one; it deletes the whole stream. Example:

```
127.0.0.1:6379> xlen MyStream
(integer) 93
127.0.0.1:6379> xlen MyStream
(integer) 94
127.0.0.1:6379> xlen MyStream
(integer) 0
127.0.0.1:6379> xlen MyStream
(integer) 1
127.0.0.1:6379> xlen MyStream
(integer) 2
```
Is there any solution/configuration for this requirement?
Comment From: itamarhaber
Hello @brajendrasingh,
Redis' eviction policy (i.e. the maxmemory-policy) applies to entire keys, not their nested values. That is why "it deletes whole data from stream".
To implement your requirement, you can:
- Set the `maxmemory` to 50-100MB
- Set the `maxmemory-policy` to `noeviction` - this will generate an out-of-memory error once memory is exhausted
- Extend your publisher (Java) to handle OOM errors returned from the server (e.g. by calling `XTRIM` on the stream) and then retry publishing
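The handle-OOM-then-trim-then-retry flow described above can be sketched as follows. This is a minimal, hypothetical sketch: an in-memory stub stands in for the Redis server (so the control flow can be shown without a running instance), and the class and method names are illustrative, not part of any real client library.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PublishWithTrim {
    // Stub stream with a hard capacity, mimicking a server under
    // noeviction that errors once maxmemory is exhausted.
    static class StubStream {
        final Deque<String> entries = new ArrayDeque<>();
        final int capacity;
        StubStream(int capacity) { this.capacity = capacity; }

        void xadd(String entry) {
            if (entries.size() >= capacity)
                throw new IllegalStateException(
                    "OOM command not allowed when used memory > 'maxmemory'.");
            entries.addLast(entry);
        }

        // XTRIM-like operation: drop the oldest entries until at most
        // maxLen remain.
        void xtrim(int maxLen) {
            while (entries.size() > maxLen) entries.removeFirst();
        }
    }

    // Publisher logic: on an OOM error, trim the oldest entries away
    // and retry the publish once.
    static void publish(StubStream stream, String entry, int trimTo) {
        try {
            stream.xadd(entry);
        } catch (IllegalStateException oom) {
            stream.xtrim(trimTo);   // make room by discarding the oldest data
            stream.xadd(entry);     // retry
        }
    }

    public static void main(String[] args) {
        StubStream stream = new StubStream(5);
        for (int i = 0; i < 8; i++) publish(stream, "msg-" + i, 4);
        // Oldest entries were discarded one by one; the newest survive.
        System.out.println(stream.entries);
    }
}
```

With a real client, the catch block would match the server's OOM error reply instead of `IllegalStateException`, but the shape of the retry loop stays the same.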
Comment From: brajendrasingh
Hello @itamarhaber
Can we remove data from the stream from both ends using XTRIM, i.e. removal from the front and from the back? As far as I can see, XTRIM only removes from one side, and we want to implement removal from both sides in our app.
Comment From: itamarhaber
If I understand your question correctly, then no - XTRIM only trims entries from the stream's beginning (the oldest entries). I don't see why you would want to remove the newest messages from the stream, though.
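To make the one-sided semantics concrete, here is a small sketch of what `XTRIM MAXLEN n` does, modelled on an in-memory deque rather than a real client (the method name is illustrative): it keeps the `n` newest entries by evicting from the beginning, and there is no counterpart that drops the newest entries.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class XtrimSemantics {
    // Models XTRIM MAXLEN n: evict from the front (oldest IDs) until
    // at most maxLen entries remain. Trimming never touches the tail.
    static void xtrimMaxLen(Deque<String> stream, int maxLen) {
        while (stream.size() > maxLen) stream.removeFirst();
    }

    public static void main(String[] args) {
        Deque<String> stream = new ArrayDeque<>();
        for (int i = 1; i <= 5; i++) stream.addLast("entry-" + i);
        xtrimMaxLen(stream, 3);
        // The two oldest entries are gone; the newest three remain.
        System.out.println(stream);
    }
}
```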