As we know, in a hash table the load factor is important for controlling collisions.

In Java's HashMap, the default load factor is 0.75, and in CPython's dict it is set to 2/3.

However, in Redis's dict it is 1.0 (when dict_can_resize is enabled). Why?

/* If we reached the 1:1 ratio, and we are allowed to resize the hash
 * table (global setting) or we should avoid it but the ratio between
 * elements/buckets is over the "safe" threshold, we resize doubling
 * the number of buckets. */
if (d->ht[0].used >= d->ht[0].size &&
    (dict_can_resize ||
     d->ht[0].used/d->ht[0].size > dict_force_resize_ratio))
{
    return dictExpand(d, d->ht[0].used*2);
}

In my view, the load factor should be less than 1. A high load factor may increase the lookup cost because of a higher collision rate.

Comment From: wenbochang888

Do you have any idea? I also want to know.

Comment From: oranagra

I'm not sure what the reasoning was; maybe it's related to the type of hash function Redis uses, or to the way the dict is built (a bucket array with linked lists), which may be different from what's in Java and Python. But anyway, fast lookup is not the only concern here. Memory consumption is a factor too. Rehashing sooner means the hash table consumes more memory, and in some cases that can come at the expense of values (it can cause eviction). Also, since we rehash to the next power of 2, this can leave a very sparse dict, which also has implications (less efficient random key picking and SCAN).