Describe the bug
I am using Bitnami's Redis Helm chart (redis:6.2.7) to deploy Redis (3 nodes + 3 sentinels) on Kubernetes. Normally there is 1 master node and 2 slave nodes.
However, after the Kubernetes cluster was restarted, all 3 Redis nodes were elected as masters. It looks very much like a split-brain, yet the 3 nodes still try to sync RDB and AOF data from each other.
The sentinels do not seem to have detected this situation.
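For context, the split-brain can be confirmed by running `redis-cli info replication` inside each Redis pod (e.g. via `kubectl exec`) and counting how many nodes report `role:master`. A minimal sketch of that check, assuming the INFO output has already been collected from each pod (the pod names and sample outputs below are hypothetical):

```python
def count_masters(info_outputs):
    """Count how many nodes report role:master in their
    `redis-cli info replication` output (healthy setup: exactly 1)."""
    return sum(1 for out in info_outputs if "role:master" in out)

# Hypothetical outputs, collected via e.g.:
#   kubectl exec redis-node-0 -- redis-cli info replication
outputs = [
    "# Replication\nrole:master\nconnected_slaves:0\n",  # node 0
    "# Replication\nrole:master\nconnected_slaves:0\n",  # node 1
    "# Replication\nrole:master\nconnected_slaves:0\n",  # node 2
]
print(count_masters(outputs))  # 3 here, i.e. every node claims to be master
```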
To reproduce
Forcefully restarting the Kubernetes cluster reproduces this issue roughly 20% of the time.
Expected behavior
The sentinels notice this situation and automatically demote 2 of the 3 masters back to replicas, leaving a single master.