The problem/use-case that the feature addresses


RESTORE key 0 serialized APPEND

Third-party tools can parse an RDB file into the Redis DUMP format. But when a key's value in the RDB file is very big (e.g. 4~5 GB per key), the generated RESTORE command comes with a very big bulk body, which is hard to transfer to the target Redis.

If the RESTORE command added an APPEND argument, we could use the following method to split a big key into pieces and transfer it to the target Redis:

>multi
>RESTORE key 0 serialized-0 REPLACE
>RESTORE key 0 serialized-1 APPEND
>RESTORE key 0 serialized-2 APPEND
>exec

The method above avoids generating a very big bulk body.
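As an illustration, here is a minimal Python sketch (using redis-py) of what a migration tool could do with such a flag. Note that the APPEND argument is the proposed feature and does not exist in current Redis, so this would fail against a real server today; the 64 MB chunk size is an arbitrary choice:

```python
import redis

CHUNK = 64 * 1024 * 1024  # 64 MB pieces; tune to what the link tolerates

def restore_in_pieces(src: redis.Redis, dst: redis.Redis, key: str) -> None:
    payload = src.dump(key)  # DUMP returns the serialized value (or None)
    if payload is None:
        return
    pipe = dst.pipeline(transaction=True)  # wraps the commands in MULTI/EXEC
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        if i == 0:
            pipe.execute_command('RESTORE', key, 0, chunk, 'REPLACE')
        else:
            # APPEND is the hypothetical flag proposed in this issue.
            pipe.execute_command('RESTORE', key, 0, chunk, 'APPEND')
    pipe.execute()
```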

For now, the alternative is to parse the RDB file into raw RESP commands and replay them against the target Redis, like the following:

>multi
>hmset key field value field1 value1
>hmset key field2 value2 field3 value3
>hmset key field4 value4 field5 value5
>exec

But raw commands are usually bigger than a binary payload like the one RESTORE carries.
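For comparison, here is a rough Python sketch of that workaround using redis-py. It reads the source hash incrementally over a live connection (rather than parsing an RDB file, which is out of scope here), and the batch size of 1024 fields is an arbitrary choice:

```python
import redis

BATCH = 1024  # fields per command; arbitrary

def copy_big_hash(src: redis.Redis, dst: redis.Redis, key: str) -> None:
    pipe = dst.pipeline(transaction=True)  # MULTI ... EXEC
    batch = {}
    for field, value in src.hscan_iter(key):  # incremental, no huge payload
        batch[field] = value
        if len(batch) >= BATCH:
            pipe.hset(key, mapping=batch)
            batch = {}
    if batch:
        pipe.hset(key, mapping=batch)
    pipe.execute()
```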

Comment From: oranagra

@leonchen83 thanks for the suggestion. we're well aware of the problem, and we have various plans to solve it in the future.

One is to use the RESP3 chunked format, so that the sender doesn't have to cache the entire payload and compute its size before sending the first byte of the payload. The other is some mechanism for executing the RESTORE command in the background (while the key is locked), which would let us start parsing the data and populating the temporary key's elements incrementally, so redis doesn't need to hold the entire encoded payload and the new deserialized key in memory at the same time. This would solve both the long-blocking-command / latency issues and the memory issues.
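For reference, RESP3's "streamed strings" replace the usual `$<len>` bulk-string header with `$?` and send the payload as `;<len>`-prefixed chunks terminated by `;0`. Below is a minimal encoder sketch of that wire format; Redis doesn't currently accept streamed strings in commands sent by clients, so this only illustrates what the first plan would rely on, and the function name is mine:

```python
from typing import Iterable, Iterator

def stream_bulk_string(chunks: Iterable[bytes]) -> Iterator[bytes]:
    """Encode a RESP3 streamed string without knowing the total length."""
    yield b"$?\r\n"                       # length-less bulk string header
    for chunk in chunks:
        if chunk:                         # skip empty chunks
            yield b";%d\r\n%s\r\n" % (len(chunk), chunk)
    yield b";0\r\n"                       # end-of-stream marker

# b''.join(stream_bulk_string([b"Hello", b" world"]))
# -> b'$?\r\n;5\r\nHello\r\n;6\r\n world\r\n;0\r\n'
```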

It'll still take a long time before we get to implement these, but I don't think the solution you proposed is a valid one as an intermediate mitigation.

some additional details can maybe be found here: https://github.com/redis/redis/issues/9794#issuecomment-1012192450