**Describe the bug**
When the cluster bus port (the client port + 10000, i.e. 16379 here) is inaccessible, creating a cluster with `redis-cli --cluster create` results in an infinite wait at "Waiting for the cluster to join" instead of returning a failure.
**To reproduce**
Create the Redis Pods and Services on Kubernetes with the following manifest. Note that each Service maps only the client port 6379, so the cluster bus port is unreachable between nodes:
```yaml
# Redis ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis.conf: |
    port 6379
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    cluster-node-timeout 5000
    cluster-require-full-coverage no
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-1
  labels:
    app: redis-1
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379
    command: ["redis-server", "/redis/redis.conf"]
    volumeMounts:
    - mountPath: /redis
      name: redis-config
  volumes:
  - configMap:
      name: redis-config
    name: redis-config
---
apiVersion: v1
kind: Service
metadata:
  name: redis-1
spec:
  selector:
    app: redis-1
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-2
  labels:
    app: redis-2
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379
    command: ["redis-server", "/redis/redis.conf"]
    volumeMounts:
    - mountPath: /redis
      name: redis-config
  volumes:
  - configMap:
      name: redis-config
    name: redis-config
---
apiVersion: v1
kind: Service
metadata:
  name: redis-2
spec:
  selector:
    app: redis-2
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-3
  labels:
    app: redis-3
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379
    command: ["redis-server", "/redis/redis.conf"]
    volumeMounts:
    - mountPath: /redis
      name: redis-config
  volumes:
  - configMap:
      name: redis-config
    name: redis-config
---
apiVersion: v1
kind: Service
metadata:
  name: redis-3
spec:
  selector:
    app: redis-3
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-4
  labels:
    app: redis-4
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379
    command: ["redis-server", "/redis/redis.conf"]
    volumeMounts:
    - mountPath: /redis
      name: redis-config
  volumes:
  - configMap:
      name: redis-config
    name: redis-config
---
apiVersion: v1
kind: Service
metadata:
  name: redis-4
spec:
  selector:
    app: redis-4
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-5
  labels:
    app: redis-5
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379
    command: ["redis-server", "/redis/redis.conf"]
    volumeMounts:
    - mountPath: /redis
      name: redis-config
  volumes:
  - configMap:
      name: redis-config
    name: redis-config
---
apiVersion: v1
kind: Service
metadata:
  name: redis-5
spec:
  selector:
    app: redis-5
  ports:
  - port: 6379
    targetPort: 6379
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-6
  labels:
    app: redis-6
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379
    command: ["redis-server", "/redis/redis.conf"]
    volumeMounts:
    - mountPath: /redis
      name: redis-config
  volumes:
  - configMap:
      name: redis-config
    name: redis-config
---
apiVersion: v1
kind: Service
metadata:
  name: redis-6
spec:
  selector:
    app: redis-6
  ports:
  - port: 6379
    targetPort: 6379
```
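For comparison, a Service that also maps the bus port would avoid the hang, since Redis Cluster nodes gossip and run the CLUSTER MEET handshake over client port + 10000. A sketch of such a variant for redis-1 (the port names are illustrative, not from the original report):

```yaml
# Hypothetical variant of the redis-1 Service that also exposes the
# cluster bus port (6379 + 10000 = 16379) so nodes can gossip.
apiVersion: v1
kind: Service
metadata:
  name: redis-1
spec:
  selector:
    app: redis-1
  ports:
  - name: client        # illustrative name
    port: 6379
    targetPort: 6379
  - name: cluster-bus   # illustrative name
    port: 16379
    targetPort: 16379
```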
List the Service ClusterIPs:
```
kubectl get svc
NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
redis-1   ClusterIP   172.19.184.80    <none>        6379/TCP   3m36s
redis-2   ClusterIP   172.19.40.177    <none>        6379/TCP   3m36s
redis-3   ClusterIP   172.19.177.153   <none>        6379/TCP   3m36s
redis-4   ClusterIP   172.19.245.202   <none>        6379/TCP   3m36s
redis-5   ClusterIP   172.19.90.76     <none>        6379/TCP   3m36s
redis-6   ClusterIP   172.19.179.203   <none>        6379/TCP   3m36s
```
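To confirm that the bus port is the unreachable piece, one can compare connectivity inside the pod against the Service IP. A sketch using bash's `/dev/tcp` (this assumes bash is available in the redis image; substitute your own ClusterIP):

```sh
# Inside the pod the node itself listens on the bus port (6379 + 10000):
kubectl exec redis-1 -- bash -c 'timeout 2 bash -c "</dev/tcp/127.0.0.1/16379" && echo bus port open'

# Through the Service the bus port is not mapped, so the same check fails:
kubectl exec redis-1 -- bash -c 'timeout 2 bash -c "</dev/tcp/172.19.184.80/16379" || echo bus port unreachable'
```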
Create the cluster with redis-cli from inside redis-1; it hangs at the join step:
```
kubectl exec -it redis-1 -- redis-cli --cluster create --cluster-replicas 1 172.19.184.80:6379 172.19.40.177:6379 172.19.177.153:6379 172.19.245.202:6379 172.19.90.76:6379 172.19.179.203:6379
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.19.90.76:6379 to 172.19.184.80:6379
Adding replica 172.19.179.203:6379 to 172.19.40.177:6379
Adding replica 172.19.245.202:6379 to 172.19.177.153:6379
M: ec7a57860be9c3a082e6b98920c6014a6cf45af7 172.19.184.80:6379
   slots:[0-5460] (5461 slots) master
M: c7d130bf6348dbc0c36194def32bd8d832a87fcf 172.19.40.177:6379
   slots:[5461-10922] (5462 slots) master
M: 73035597353c5d5f7794c9a9efd539b9ce23c095 172.19.177.153:6379
   slots:[10923-16383] (5461 slots) master
S: 5d9a3714a44934e1f36da8e669ac939347404d9d 172.19.245.202:6379
   replicates 73035597353c5d5f7794c9a9efd539b9ce23c095
S: d5884f26da055197de5e6faa9fe2ade32223417c 172.19.90.76:6379
   replicates ec7a57860be9c3a082e6b98920c6014a6cf45af7
S: 30348065e1c898fa1f08b3113ae815552e0910eb 172.19.179.203:6379
   replicates c7d130bf6348dbc0c36194def32bd8d832a87fcf
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
............................................................................................
```
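While redis-cli sits in this loop, one way to see that the join never progresses is to ask any node how many peers it knows. Because CLUSTER MEET runs over the unreachable bus port, each node keeps seeing only itself:

```sh
# Expected to stay at 1 on every node for as long as the create command hangs:
kubectl exec redis-1 -- redis-cli cluster info | grep cluster_known_nodes
# cluster_known_nodes:1
```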
**Expected behavior**
redis-cli should not wait indefinitely: it should give up after a bounded time and exit with an error (ideally one pointing at the unreachable bus port).
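Until redis-cli enforces such a limit itself, the wait can be bounded externally. A sketch using coreutils `timeout` (the 120 s cap is arbitrary; `--cluster-yes` skips the interactive confirmation so the command can run unattended):

```sh
# Kill the create command after 120 s. Exit status 124 from timeout(1)
# then indicates redis-cli was still waiting for the cluster to join.
timeout 120 kubectl exec redis-1 -- redis-cli --cluster create --cluster-replicas 1 --cluster-yes \
  172.19.184.80:6379 172.19.40.177:6379 172.19.177.153:6379 \
  172.19.245.202:6379 172.19.90.76:6379 172.19.179.203:6379
echo $?
```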
**Additional information**
```
kind v0.22.0 go1.20.13 linux/amd64

kubectl get node
NAME                 STATUS   ROLES                  AGE     VERSION
kind-control-plane   Ready    control-plane,master   5d      v1.22.17
kind-worker          Ready    <none>                 4d23h   v1.22.17
kind-worker2         Ready    <none>                 4d23h   v1.22.17
```
**Comment from sys-liqian:**
I also encountered this problem.