The problem/use-case that the feature addresses

We want to limit the maximum length of keys and values (in bytes) to prevent very large keys/values from degrading performance.

Description of the feature

We would like an option in redis.conf so that when a key or value exceeds the configured maximum length, Redis reports an error and rejects the write. The maximum length could even be negotiated at handshake time, so an oversized key could be rejected before or during network transfer.

Alternatives you've considered

Our current plan is to implement this in a client-side wrapper on top of Lettuce or Jedis.
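A minimal sketch of such a client-side guard, as one way the wrapper idea could work. The class and method names here are hypothetical and not part of Jedis or Lettuce; the wrapper would call a check like this before ever sending the command to Redis:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical client-side size guard; not part of any Redis client library.
public final class SizeGuard {
    private final int maxKeyBytes;
    private final int maxValueBytes;

    public SizeGuard(int maxKeyBytes, int maxValueBytes) {
        this.maxKeyBytes = maxKeyBytes;
        this.maxValueBytes = maxValueBytes;
    }

    /** Returns true if both key and value fit within the configured limits. */
    public boolean allows(String key, String value) {
        return key.getBytes(StandardCharsets.UTF_8).length <= maxKeyBytes
            && value.getBytes(StandardCharsets.UTF_8).length <= maxValueBytes;
    }

    /** Rejects an oversized write before the command is sent over the network. */
    public void check(String key, String value) {
        if (!allows(key, value)) {
            throw new IllegalArgumentException(
                "key/value exceeds configured max length");
        }
    }
}
```

The wrapper would invoke `check` in front of every write command (SET, HSET, etc.), so oversized payloads fail fast in the application instead of reaching the server.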

Additional information

N/A

Comment From: oranagra

@DanielYWoo in what way do these huge keys degrade performance?

We are aware of several issues (mainly DUMP, RESTORE, MIGRATE) which we intend to resolve some day. I don't think we're likely to add a mechanism that blocks them.

But please add more information about the problems you are facing and at what stage you propose to block them. E.g., do you want to block an innocent HSET because the target key became too big? Is your problem related to memory footprint or to the number of elements?

Comment From: DanielYWoo

The problem I am facing is that our developers sometimes overuse Redis by saving very large keys/values, and I hope there is a mechanism to reject such writes.

e.g, SET some_key some_string_with_one_million_chars
e.g, HSET some_hash some_key some_value_with_one_million_chars
e.g, ZRANGE myzset_with_one_million_values 0 -1

Yes, it could cause memory problems, but that's not my main concern; the real problem is the latency caused by big keys during peak-throughput hours.

Comment From: itamarhaber

This requirement sounds like enforcing a schema of sorts on the values of keys. The concept is indeed interesting and we can consider it for upcoming versions.

Comment From: madolson

The scope still seems a little too ambiguous. I'm not a big fan of restrictions on command arguments outside of ACLs. Just for the SET command, we would need to limit both the number of keys that can be set in the Redis server and the max length of the keys and the values. @DanielYWoo Did you consider running multiple Redis servers with different memory limits, so that one bad node can't brown out other parts of your service?

Comment From: oranagra

There are multiple problems here each with possibly a different solution.

If the problem is latency (caused by a large ZRANGE), you can maybe reduce client-output-buffer-limit and have Redis disconnect the client after a certain output-buffer size. It is not possible to respond to the command with an error after it has already started copying data to the output buffer, so measuring latency or predicting the size of the response beforehand is not an option. The only option we are left with (if we want to respond with an error rather than drop the connection) is to look at the cardinality, but it seems very unlikely that we would want to support such a feature.
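For reference, a redis.conf fragment that enables such a disconnect for regular clients could look like this (the 32mb/8mb/60 values are illustrative examples, not a recommendation):

```
# Disconnect a normal client if its output buffer exceeds 32 MB,
# or stays above 8 MB for more than 60 seconds.
client-output-buffer-limit normal 32mb 8mb 60
```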

If you want to limit the size of the string passed to SET and HSET, you can set the proto-max-bulk-len config, but note that it won't protect you against a SETRANGE followed by a GET.
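As an example, this redis.conf fragment caps a single bulk argument (e.g. the value in SET or HSET) at 1 MB; 1mb is an illustrative value, the default being 512mb:

```
# Reject any single bulk string (e.g. a SET value) larger than 1 MB.
proto-max-bulk-len 1mb
```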

Another thing that I think is problematic here is that any synthetic limit will cause commands to start failing as soon as a threshold is reached. I.e., some traffic runs against a sorted set and everything seems fine (working with 250-254 elements, or strings up to 99kb), but as soon as something minor changes and the threshold is crossed, commands suddenly start failing.

Comment From: DanielYWoo

@madolson This is not about ACLs or maxmemory; this is about a per-command limit to avoid high latency. @oranagra gave some workarounds: client-output-buffer-limit works fine for large reads, and I guess client-query-buffer-limit will work for large writes. Although this cannot limit the string length of a single SET, it already satisfies my requirement, which is to reject huge commands. Many thanks to both of you.