Hi, is there any up-to-date guidance for handling [ERR] Nodes don't agree about configuration!? I found some blog posts, but they mostly rely on redis-trib.rb, which seems to predate Redis 5.0.

I could not find any systematic approach to handling this error message.

Specifically, my error is the following, but I think a more general answer is needed, e.g. how to check for this condition and how to fix it.

redis-cli --cluster reshard 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
S: f2d58cc60ce527a72dbaeeb837109be6d3fdc0a2 127.0.0.1:6379
   slots: (0 slots) slave
   replicates 87e24b04ebf8a6e1c081a5eaf0e6d80603f10ddc
M: 87e24b04ebf8a6e1c081a5eaf0e6d80603f10ddc 172.168.3.117:6380
   slots:[5461-10922] (5462 slots) master
   5 additional replica(s)
S: ce9606da2d09d1009b3abe7efa90059cbece944f 172.168.3.115:6381
   slots: (0 slots) slave
   replicates 87e24b04ebf8a6e1c081a5eaf0e6d80603f10ddc
M: 320c95356460f14a140bca99eb96a432d7795236 127.0.0.1:6382
   slots: (0 slots) master
M: 08fdc1763a39ea4e60f2dfe8f15ed3a86b99c26e 172.168.3.115:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 8f14d2d0c261cd58d38a4250cffddb48063ea434 172.168.3.115:6380
   slots: (0 slots) slave
   replicates d15cd4b86557e4d4366f0795272830e562ded056
S: 1ae537ccd0e0df6d3af7e7a149ac675ab9345080 172.168.3.115:6382
   slots: (0 slots) slave
   replicates 87e24b04ebf8a6e1c081a5eaf0e6d80603f10ddc
M: 76069136914299a036f2dd5862ca7ca5a6ddda7f 127.0.0.1:6381
   slots: (0 slots) master
M: fdd6afe994eeb9a1f4a60525a4ad204684ead259 127.0.0.1:6385
   slots: (0 slots) master
S: 65d933a9b0d9bd4af0f8a3f85dbfad5915d646b3 172.168.3.115:6388
   slots: (0 slots) slave
   replicates 87e24b04ebf8a6e1c081a5eaf0e6d80603f10ddc
S: 6cfe026797da719b2118dc8340c2e524d69fae77 172.168.3.115:6385
   slots: (0 slots) slave
   replicates d15cd4b86557e4d4366f0795272830e562ded056
M: a3bf089264cd6877bf66d12dff33448147d951f8 127.0.0.1:6383
   slots: (0 slots) master
S: 03e64c0db72e16bd130e4daf91a1455f1428d692 172.168.3.115:6386
   slots: (0 slots) slave
   replicates 87e24b04ebf8a6e1c081a5eaf0e6d80603f10ddc
S: fdabea249fd757ef1894610938c446d097d36e77 172.168.3.116:6380
   slots: (0 slots) slave
   replicates 08fdc1763a39ea4e60f2dfe8f15ed3a86b99c26e
S: 36c6b22556009823eb61b9694e427d14d6c064b6 172.168.3.115:6384
   slots: (0 slots) slave
   replicates d15cd4b86557e4d4366f0795272830e562ded056
M: cf8b1ef456cb1ee88faee8ba81bb97c90f40258a 127.0.0.1:6384
   slots: (0 slots) master
S: 223f106fd5afdbd5b7845ed2f384d12b4982b86f 172.168.3.115:6387
   slots: (0 slots) slave
   replicates d15cd4b86557e4d4366f0795272830e562ded056
S: a21f1d285480035299a7eb01f86d673eb9011a08 172.168.3.115:6383
   slots: (0 slots) slave
   replicates d15cd4b86557e4d4366f0795272830e562ded056
M: d15cd4b86557e4d4366f0795272830e562ded056 172.168.3.117:6379
   slots:[10923-16383] (5461 slots) master
   5 additional replica(s)
[ERR] Nodes don't agree about configuration!
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
*** Please fix your cluster problems before resharding
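For context, the only diagnostic I have come up with so far is to dump each node's own view of the slot map (via CLUSTER NODES) and compare the views to see where they disagree. A rough sketch of what I mean (the addresses are just some of the nodes from my cluster above, adjust as needed):

# Print each node's own view of node IDs, flags, and slot ranges, sorted so the
# views can be diffed against each other.
for addr in 127.0.0.1:6379 172.168.3.115:6379 172.168.3.117:6379 172.168.3.117:6380; do
    host=${addr%:*}; port=${addr##*:}
    echo "== view from $addr =="
    redis-cli -h "$host" -p "$port" cluster nodes \
      | awk '{out=$1" "$3; for (i=9; i<=NF; i++) out=out" "$i; print out}' \
      | sort
done

Is manually comparing views like this the right approach, or is there a supported way (e.g. redis-cli --cluster fix) to resolve this kind of disagreement?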