While testing Redis Cluster, we triggered an assertion failure in a cluster node. This reproduces on both the latest release (7.0.11) and the unstable branch.

Log

4013349:C 05 Jun 2023 19:17:28.867 # WARNING: Changing databases number from 16 to 1 since we are in cluster mode
4013349:C 05 Jun 2023 19:17:28.867 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
4013349:C 05 Jun 2023 19:17:28.867 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
4013349:C 05 Jun 2023 19:17:28.867 * Redis version=255.255.255, bits=64, commit=0bd1a3a4, modified=0, pid=4013349, just started
4013349:C 05 Jun 2023 19:17:28.867 * Configuration loaded
4013349:M 05 Jun 2023 19:17:28.868 * Increased maximum number of open files to 10032 (it was originally set to 1024).
4013349:M 05 Jun 2023 19:17:28.868 * monotonic clock: POSIX clock_gettime
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 255.255.255 (0bd1a3a4/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in cluster mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 4013349
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           https://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

4013349:M 05 Jun 2023 19:17:28.868 * Node configuration loaded, I'm 98e8b3aab4861ee22fdf92972c206a8420287b69
4013349:M 05 Jun 2023 19:17:28.868 * Server initialized
4013349:M 05 Jun 2023 19:17:28.869 * Loading RDB produced by version 255.255.255
4013349:M 05 Jun 2023 19:17:28.869 * RDB age 3 seconds
4013349:M 05 Jun 2023 19:17:28.869 * RDB memory usage when created 1.56 Mb
4013349:M 05 Jun 2023 19:17:28.869 * Done loading RDB, keys loaded: 0, keys expired: 0.
4013349:M 05 Jun 2023 19:17:28.869 * DB loaded from disk: 0.000 seconds
4013349:M 05 Jun 2023 19:17:28.869 * Ready to accept connections tcp
4013349:M 05 Jun 2023 19:18:31.186 - Accepted 127.0.0.1:49084
4013349:M 05 Jun 2023 19:18:31.192 * configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
4013349:M 05 Jun 2023 19:18:31.216 - Accepting cluster node connection from 127.0.0.1:34398
4013349:M 05 Jun 2023 19:18:31.216 * IP address for this node updated to 127.0.0.1
4013349:M 05 Jun 2023 19:18:31.252 - Accepting cluster node connection from 127.0.0.1:34404
4013349:M 05 Jun 2023 19:18:35.195 - Client closed connection id=3 addr=127.0.0.1:49084 laddr=127.0.0.1:6379 fd=11 name= age=4 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=0 qbuf-free=20474 argv-mem=0 multi-mem=0 rbs=1024 rbp=1024 obl=0 oll=0 omem=0 tot-mem=22400 events=r cmd=cluster|nodes user=default redir=-1 resp=2 lib-name= lib-ver=
4013349:M 05 Jun 2023 19:18:36.170 * Cluster state changed: ok
4013349:M 05 Jun 2023 19:18:43.847 - Accepted 127.0.0.1:52222
4013349:M 05 Jun 2023 19:18:43.847 # Cluster state changed: fail
4013349:M 05 Jun 2023 19:18:43.848 - Client closed connection id=4 addr=127.0.0.1:52222 laddr=127.0.0.1:6379 fd=11 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=0 qbuf-free=20474 argv-mem=0 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=37760 events=r cmd=cluster|flushslots user=default redir=-1 resp=2 lib-name= lib-ver=
4013349:M 05 Jun 2023 19:18:58.606 - Accepted 127.0.0.1:47218
4013349:S 05 Jun 2023 19:18:58.606 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
4013349:S 05 Jun 2023 19:18:58.606 * Connecting to MASTER 127.0.0.1:6381
4013349:S 05 Jun 2023 19:18:58.606 * MASTER <-> REPLICA sync started
4013349:S 05 Jun 2023 19:18:58.606 * Non blocking connect for SYNC fired the event.
4013349:S 05 Jun 2023 19:18:58.606 * Master replied to PING, replication can continue...
4013349:S 05 Jun 2023 19:18:58.606 * Trying a partial resynchronization (request 87fa56f55113dec207d1dbf0caf7b5ec93da853d:1).
4013349:S 05 Jun 2023 19:18:58.606 - Client closed connection id=5 addr=127.0.0.1:47218 laddr=127.0.0.1:6379 fd=11 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=0 qbuf-free=20474 argv-mem=0 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=37760 events=r cmd=cluster|replicate user=default redir=-1 resp=2 lib-name= lib-ver=
4013349:S 05 Jun 2023 19:19:03.878 * Full resync from master: b0d2ec9ec5e1a58ce258843c59f0f8ed63720f0a:14
4013349:S 05 Jun 2023 19:19:03.879 * MASTER <-> REPLICA sync: receiving streamed RDB from master with EOF to disk
4013349:S 05 Jun 2023 19:19:03.879 * Discarding previously cached master state.
4013349:S 05 Jun 2023 19:19:03.879 * MASTER <-> REPLICA sync: Flushing old data
4013349:S 05 Jun 2023 19:19:03.879 * MASTER <-> REPLICA sync: Loading DB in memory
4013349:S 05 Jun 2023 19:19:03.880 * Loading RDB produced by version 255.255.255
4013349:S 05 Jun 2023 19:19:03.880 * RDB age 0 seconds
4013349:S 05 Jun 2023 19:19:03.880 * RDB memory usage when created 1.78 Mb
4013349:S 05 Jun 2023 19:19:03.880 * Done loading RDB, keys loaded: 0, keys expired: 0.
4013349:S 05 Jun 2023 19:19:03.880 * MASTER <-> REPLICA sync: Finished with success
4013349:S 05 Jun 2023 19:19:23.106 - Accepted 127.0.0.1:39290
4013349:S 05 Jun 2023 19:19:23.107 - Client closed connection id=8 addr=127.0.0.1:39290 laddr=127.0.0.1:6379 fd=17 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=0 qbuf-free=20474 argv-mem=0 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=37760 events=r cmd=cluster|addslots user=default redir=-1 resp=2 lib-name= lib-ver=
4013349:S 05 Jun 2023 19:19:37.890 - Accepted 127.0.0.1:55736


=== REDIS BUG REPORT START: Cut & paste starting from here ===
4013349:S 05 Jun 2023 19:19:37.890 # === ASSERTION FAILED ===
4013349:S 05 Jun 2023 19:19:37.890 # ==> cluster.c:4931 'myself->numslots == 0' is not true

------ STACK TRACE ------

Backtrace:
../../redis-server *:6379 [cluster](+0x13755b)[0x555b25be055b]
../../redis-server *:6379 [cluster](clusterCommand+0xf67)[0x555b25be5ee7]
../../redis-server *:6379 [cluster](call+0x186)[0x555b25b40bb6]
../../redis-server *:6379 [cluster](processCommand+0xba9)[0x555b25b42149]
../../redis-server *:6379 [cluster](processInputBuffer+0x107)[0x555b25b67f27]
../../redis-server *:6379 [cluster](readQueryFromClient+0x368)[0x555b25b684a8]
../../redis-server *:6379 [cluster](+0x1c11ac)[0x555b25c6a1ac]
../../redis-server *:6379 [cluster](aeMain+0xf9)[0x555b25b368d9]
../../redis-server *:6379 [cluster](main+0x3df)[0x555b25b2afbf]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f843b1d7d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f843b1d7e40]
../../redis-server *:6379 [cluster](_start+0x25)[0x555b25b2b785]

------ INFO OUTPUT ------
# Server
redis_version:255.255.255
redis_git_sha1:0bd1a3a4
redis_git_dirty:0
redis_build_id:b5019077ff9fccd6
redis_mode:cluster
os:Linux 5.15.0-47-generic x86_64
arch_bits:64
monotonic_clock:POSIX clock_gettime
multiplexing_api:epoll
atomicvar_api:c11-builtin
gcc_version:11.3.0
process_id:4013349
process_supervised:no
run_id:1a181acb00bab7e09911f945efb86254322d9dfa
tcp_port:6379
server_time_usec:1685992777890620
uptime_in_seconds:129
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:8271177
executable:/home/congyu/redis/src/redis-server
config_file:
io_threads_active:0
listener0:name=tcp,bind=*,bind=-::*,port=6379

# Clients
connected_clients:2
cluster_connections:4
maxclients:10000
client_recent_max_input_buffer:24
client_recent_max_output_buffer:0
blocked_clients:0
tracking_clients:0
clients_in_timeout_table:0
total_blocking_keys:0
total_blocking_keys_on_nokey:0

# Memory
used_memory:1854360
used_memory_human:1.77M
used_memory_rss:8544256
used_memory_rss_human:8.15M
used_memory_peak:1900440
used_memory_peak_human:1.81M
used_memory_peak_perc:97.58%
used_memory_overhead:1609404
used_memory_startup:1582480
used_memory_dataset:244956
used_memory_dataset_perc:90.10%
allocator_allocated:2138096
allocator_active:2600960
allocator_resident:12455936
total_system_memory:134750920704
total_system_memory_human:125.50G
used_memory_lua:31744
used_memory_vm_eval:31744
used_memory_lua_human:31.00K
used_memory_scripts_eval:0
number_of_cached_scripts:0
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:32768
used_memory_vm_total:64512
used_memory_vm_total_human:63.00K
used_memory_functions:184
used_memory_scripts:184
used_memory_scripts_human:184B
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.22
allocator_frag_bytes:462864
allocator_rss_ratio:4.79
allocator_rss_bytes:9854976
rss_overhead_ratio:0.69
rss_overhead_bytes:-3911680
mem_fragmentation_ratio:4.71
mem_fragmentation_bytes:6730152
mem_not_counted_for_evict:0
mem_replication_backlog:20508
mem_total_replication_buffers:20504
mem_clients_slaves:0
mem_clients_normal:1944
mem_cluster_links:4288
mem_aof_buffer:0
mem_allocator:jemalloc-5.3.0
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0

# Persistence
loading:0
async_loading:0
current_cow_peak:0
current_cow_size:0
current_cow_size_age:0
current_fork_perc:0.00
current_save_keys_processed:0
current_save_keys_total:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1685992648
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_saves:0
rdb_last_cow_size:0
rdb_last_load_keys_expired:0
rdb_last_load_keys_loaded:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_rewrites:0
aof_rewrites_consecutive_failures:0
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0

# Stats
total_connections_received:5
total_commands_processed:19
instantaneous_ops_per_sec:0
total_net_input_bytes:54382
total_net_output_bytes:19564
total_net_repl_input_bytes:306
total_net_repl_output_bytes:0
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.02
instantaneous_input_repl_kbps:0.00
instantaneous_output_repl_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:0
evicted_keys:0
evicted_clients:0
total_eviction_exceeded_time:0
current_eviction_exceeded_time:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
pubsubshard_channels:0
latest_fork_usec:0
total_forks:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
total_active_defrag_time:0
current_active_defrag_time:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:0
dump_payload_sanitizations:0
total_reads_processed:26
total_writes_processed:51
io_threaded_reads_processed:0
io_threaded_writes_processed:0
reply_buffer_shrinks:2
reply_buffer_expands:0
eventloop_cycles:1601
eventloop_duration_sum:59284
eventloop_duration_cmd_sum:1109
instantaneous_eventloop_cycles_per_sec:12
instantaneous_eventloop_duration_usec:33
acl_access_denied_auth:0
acl_access_denied_cmd:0
acl_access_denied_key:0
acl_access_denied_channel:0

# Replication
role:slave
master_host:127.0.0.1
master_port:6381
master_link_status:up
master_last_io_seconds_ago:5
master_sync_in_progress:0
slave_read_repl_offset:56
slave_repl_offset:56
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:b0d2ec9ec5e1a58ce258843c59f0f8ed63720f0a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:56
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:15
repl_backlog_histlen:42

# CPU
used_cpu_sys:0.032695
used_cpu_user:0.033974
used_cpu_sys_children:0.000000
used_cpu_user_children:0.000000
used_cpu_sys_main_thread:0.040645
used_cpu_user_main_thread:0.025403

# Modules

# Commandstats
cmdstat_ping:calls=3,usec=0,usec_per_call=0.00,rejected_calls=0,failed_calls=0
cmdstat_info:calls=3,usec=134,usec_per_call=44.67,rejected_calls=0,failed_calls=0
cmdstat_cluster|addslots:calls=2,usec=160,usec_per_call=80.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|info:calls=1,usec=8,usec_per_call=8.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|flushslots:calls=1,usec=492,usec_per_call=492.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|nodes:calls=7,usec=211,usec_per_call=30.14,rejected_calls=0,failed_calls=0
cmdstat_cluster|replicate:calls=1,usec=94,usec_per_call=94.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|set-config-epoch:calls=1,usec=10,usec_per_call=10.00,rejected_calls=0,failed_calls=0

# Errorstats

# Latencystats
latency_percentiles_usec_ping:p50=0.001,p99=0.001,p99.9=0.001
latency_percentiles_usec_info:p50=38.143,p99=72.191,p99.9=72.191
latency_percentiles_usec_cluster|addslots:p50=15.039,p99=145.407,p99.9=145.407
latency_percentiles_usec_cluster|info:p50=8.031,p99=8.031,p99.9=8.031
latency_percentiles_usec_cluster|flushslots:p50=493.567,p99=493.567,p99.9=493.567
latency_percentiles_usec_cluster|nodes:p50=28.031,p99=38.143,p99.9=38.143
latency_percentiles_usec_cluster|replicate:p50=94.207,p99=94.207,p99.9=94.207
latency_percentiles_usec_cluster|set-config-epoch:p50=10.047,p99=10.047,p99.9=10.047

# Cluster
cluster_enabled:1

# Keyspace

# Cluster info
cluster_state:fail
cluster_slots_assigned:10924
cluster_slots_ok:10924
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:2
cluster_current_epoch:3
cluster_my_epoch:3
cluster_stats_messages_ping_sent:64
cluster_stats_messages_pong_sent:70
cluster_stats_messages_sent:134
cluster_stats_messages_ping_received:68
cluster_stats_messages_pong_received:64
cluster_stats_messages_meet_received:2
cluster_stats_messages_received:134
total_cluster_links_buffer_limit_exceeded:0

------ CLUSTER NODES OUTPUT ------
a4d687936a05fabea0798f35c494390af1fd6a24 127.0.0.1:6380@16380,,shard-id=9a86af15129537b02350ee8bfe72ae2c040fd2d1 master - 0 1685992775976 2 connected 5461-10922
cab2ad6ff2a6f611b7e9873c3cafa455bd951222 127.0.0.1:6381@16381,,shard-id=63204e65945e016afa82fc37d12a465affe6b5b5 master - 0 1685992776978 3 connected 10923-16383
98e8b3aab4861ee22fdf92972c206a8420287b69 127.0.0.1:6379@16379,,shard-id=63204e65945e016afa82fc37d12a465affe6b5b5 myself,slave cab2ad6ff2a6f611b7e9873c3cafa455bd951222 0 1685992776000 3 connected 1

------ CLIENT LIST OUTPUT ------
id=7 addr=127.0.0.1:6381 laddr=127.0.0.1:42290 fd=16 name= age=34 idle=5 flags=M db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=0 qbuf-free=14 argv-mem=0 multi-mem=0 rbs=1024 rbp=35 obl=0 oll=0 omem=0 tot-mem=1944 events=r cmd=ping user=(superuser) redir=-1 resp=2 lib-name= lib-ver=
id=9 addr=127.0.0.1:55736 laddr=127.0.0.1:6379 fd=17 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=79 qbuf-free=20395 argv-mem=56 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=37840 events=r cmd=cluster|replicate user=default redir=-1 resp=2 lib-name= lib-ver=

------ CURRENT CLIENT INFO ------
id=9 addr=127.0.0.1:55736 laddr=127.0.0.1:6379 fd=17 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=79 qbuf-free=20395 argv-mem=56 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=37840 events=r cmd=cluster|replicate user=default redir=-1 resp=2 lib-name= lib-ver=
argc: '3'
argv[0]: '"CLUSTER"'
argv[1]: '"REPLICATE"'
argv[2]: '"a4d687936a05fabea0798f35c494390af1fd6a24"'

------ EXECUTING CLIENT INFO ------
id=9 addr=127.0.0.1:55736 laddr=127.0.0.1:6379 fd=17 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=79 qbuf-free=20395 argv-mem=56 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=37840 events=r cmd=cluster|replicate user=default redir=-1 resp=2 lib-name= lib-ver=
argc: '3'
argv[0]: '"CLUSTER"'
argv[1]: '"REPLICATE"'
argv[2]: '"a4d687936a05fabea0798f35c494390af1fd6a24"'

------ MODULES INFO OUTPUT ------

------ CONFIG DEBUG OUTPUT ------
repl-diskless-load disabled
lazyfree-lazy-user-flush no
io-threads-do-reads no
proto-max-bulk-len 512mb
replica-read-only yes
lazyfree-lazy-server-del no
sanitize-dump-payload no
lazyfree-lazy-user-del no
client-query-buffer-limit 1gb
lazyfree-lazy-expire no
io-threads 1
repl-diskless-sync yes
lazyfree-lazy-eviction no
activedefrag no
slave-read-only yes
list-compress-depth 0

------ FAST MEMORY TEST ------
4013349:S 05 Jun 2023 19:19:37.891 # Bio worker thread #0 terminated
4013349:S 05 Jun 2023 19:19:37.891 # Bio worker thread #1 terminated
4013349:S 05 Jun 2023 19:19:37.891 # Bio worker thread #2 terminated
*** Preparing to test memory region 555b25e26000 (2269184 bytes)
*** Preparing to test memory region 555b276d4000 (135168 bytes)
*** Preparing to test memory region 7f8430000000 (135168 bytes)
*** Preparing to test memory region 7f8437000000 (8388608 bytes)
*** Preparing to test memory region 7f8437800000 (2097152 bytes)
*** Preparing to test memory region 7f8437c00000 (8388608 bytes)
*** Preparing to test memory region 7f8438400000 (6291456 bytes)
*** Preparing to test memory region 7f8438a15000 (8388608 bytes)
*** Preparing to test memory region 7f8439216000 (8388608 bytes)
*** Preparing to test memory region 7f8439a17000 (8388608 bytes)
*** Preparing to test memory region 7f843a217000 (3145728 bytes)
*** Preparing to test memory region 7f843a800000 (8388608 bytes)
*** Preparing to test memory region 7f843b1ab000 (12288 bytes)
*** Preparing to test memory region 7f843b3c9000 (53248 bytes)
*** Preparing to test memory region 7f843b4cc000 (8192 bytes)
.O.O.O.O.O.O.O.O.O.O.O.O.O.O.O
Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.

=== REDIS BUG REPORT END. Make sure to include from START to END. ===

       Please report the crash by opening an issue on github:

           http://github.com/redis/redis/issues

  If a Redis module was involved, please open in the module's repo instead.

  Suspect RAM error? Use redis-server --test-memory to verify it.

  Some other issues could be detected by redis-server --check-system
Aborted (core dumped)

Reproduce

First start a Redis cluster with 3 nodes, using the options --protected-mode no --cluster-enabled yes --loglevel verbose --port .... Then run the commands below. Here I started three nodes locally on ports 6379, 6380, and 6381; node 6379 eventually crashes.
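
For reference, each node can be started roughly like this (a sketch: one empty working directory per node is assumed, and the --cluster-config-file names are only there to keep the nodes from clashing if run from the same directory):

$ ./redis-server --port 6379 --cluster-enabled yes --protected-mode no --loglevel verbose --cluster-config-file nodes-6379.conf
$ ./redis-server --port 6380 --cluster-enabled yes --protected-mode no --loglevel verbose --cluster-config-file nodes-6380.conf
$ ./redis-server --port 6381 --cluster-enabled yes --protected-mode no --loglevel verbose --cluster-config-file nodes-6381.conf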

$ ./redis-cli --cluster-yes --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: 98e8b3aab4861ee22fdf92972c206a8420287b69 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: a4d687936a05fabea0798f35c494390af1fd6a24 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: cab2ad6ff2a6f611b7e9873c3cafa455bd951222 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 98e8b3aab4861ee22fdf92972c206a8420287b69 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: a4d687936a05fabea0798f35c494390af1fd6a24 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: cab2ad6ff2a6f611b7e9873c3cafa455bd951222 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
$ ./redis-cli -p 6379 -c CLUSTER FLUSHSLOTS
OK
$ ./redis-cli -p 6381 -c CLUSTER MYID
"cab2ad6ff2a6f611b7e9873c3cafa455bd951222"
$ ./redis-cli -p 6379 -c CLUSTER REPLICATE cab2ad6ff2a6f611b7e9873c3cafa455bd951222
OK
$ ./redis-cli -p 6379 -c CLUSTER ADDSLOTS 1
OK
$ ./redis-cli -p 6380 -c CLUSTER MYID
"a4d687936a05fabea0798f35c494390af1fd6a24"
$ ./redis-cli -p 6379 -c CLUSTER REPLICATE a4d687936a05fabea0798f35c494390af1fd6a24
Error: Server closed the connection

Comment From: hwware

Hello @Congyu-Liu, I tried to reproduce the issue on the latest Redis version, but I cannot see any crash. In fact, Redis is displaying the correct error message instead of hitting the assertion failure from the title ("'myself->numslots == 0' is not true").

Is there something else you were doing along with these steps? Could you share more info?

Comment From: Congyu-Liu

Hi @hwware. It seems that you missed a CLUSTER REPLICATE command before CLUSTER ADDSLOTS. Also, some of the commands were not sent to the correct node.

In short, here is what the input does:

1. Flush the slots on node 0.
2. Make node 0 replicate node 2.
3. Add slot 1 on node 0.
4. Make node 0 replicate node 1 (this is when node 0 crashes).

I have added some comments to the instructions below; hopefully they are helpful.

# CLUSTER FLUSHSLOTS sent to node 0
$ ./redis-cli -p 6379 -c CLUSTER FLUSHSLOTS
OK
# CLUSTER MYID sent to node 2
$ ./redis-cli -p 6381 -c CLUSTER MYID
"cab2ad6ff2a6f611b7e9873c3cafa455bd951222"
# CLUSTER REPLICATE sent to node 0
$ ./redis-cli -p 6379 -c CLUSTER REPLICATE cab2ad6ff2a6f611b7e9873c3cafa455bd951222
OK
# CLUSTER ADDSLOTS 1 sent to node 0
$ ./redis-cli -p 6379 -c CLUSTER ADDSLOTS 1
OK
# CLUSTER MYID sent to node 1
$ ./redis-cli -p 6380 -c CLUSTER MYID
"a4d687936a05fabea0798f35c494390af1fd6a24"
# CLUSTER REPLICATE sent to node 0
$ ./redis-cli -p 6379 -c CLUSTER REPLICATE a4d687936a05fabea0798f35c494390af1fd6a24
Error: Server closed the connection
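
Step 3 is what creates the inconsistent state: node 0 is already a replica at that point, yet CLUSTER ADDSLOTS is still accepted, so it ends up as a replica that owns slot 1. Step 4 then trips the 'myself->numslots == 0' assertion inside the CLUSTER REPLICATE handling. The state can be observed between steps 3 and 4 (the grep is only for readability; the output line below is copied from the CLUSTER NODES section of the crash report above — note the trailing "1", the owned slot, on a line flagged myself,slave):

$ ./redis-cli -p 6379 -c CLUSTER NODES | grep myself
98e8b3aab4861ee22fdf92972c206a8420287b69 127.0.0.1:6379@16379,,shard-id=63204e65945e016afa82fc37d12a465affe6b5b5 myself,slave cab2ad6ff2a6f611b7e9873c3cafa455bd951222 0 1685992776000 3 connected 1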

Comment From: enjoy-binbin

Thanks for the report. I reproduced it and located the problem, and we will fix it; it may be an oversight introduced in https://github.com/redis/redis/commit/ac3850cabd3944c06a07ece83ad44f3dc6ad50c3.