If messages arrive in a stream faster than they are consumed and memory fills up, Redis becomes very slow.
It would be useful to be able to limit a stream's maximum memory size; once the limit is exceeded, older messages would be stored on disk.
That way a stream would not have to keep all of its data in memory.
Comment From: oranagra
Redis doesn't deal with offloading part of the data to disk. There is an eviction mechanism to delete keys when the overall memory usage is too high, but there's no mechanism to keep track of the memory usage of each key, so limiting the number of records in a stream by their memory usage rather than count is not currently possible.
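For reference, trimming by record count (rather than by memory) is the mechanism that does exist today: XADD accepts a MAXLEN option, and the `~` modifier makes the trim approximate and much cheaper. A minimal sketch with the redis-py client, assuming a local Redis instance and a hypothetical stream name `mystream`:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Cap the stream at roughly one million entries. approximate=True
# corresponds to MAXLEN ~ 1000000: Redis trims lazily at internal
# node boundaries, which is far cheaper than an exact trim.
r.xadd("mystream", {"payload": "hello"}, maxlen=1_000_000, approximate=True)

# An existing stream can also be trimmed explicitly.
r.xtrim("mystream", maxlen=1_000_000, approximate=True)
```

Count-based trimming only bounds memory indirectly (entries can vary in size), which is the limitation described above: trimming by memory usage would require per-key memory accounting that Redis does not keep.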
Let me know if you have any followup questions or suggestions. Meanwhile I'm closing the issue.
Comment From: bnuzhouwei
I wish the new Stream data structure (designed as a message queue) could offload part of its data to disk. When millions of messages must be guaranteed not to be lost, must be guaranteed to be consumed, and the system must support scaling out and message backlog, Redis cannot offer a complete solution, while Kafka does.