Crash report

Hi, we are using Redis version 4.0.2. After Redis crashed, I tried to restart it, but the restart fails. The details are below, thanks.

=== REDIS BUG REPORT START: Cut & paste starting from here ===
17523:M 29 Nov 09:09:56.275 # === ASSERTION FAILED OBJECT CONTEXT ===
17523:M 29 Nov 09:09:56.275 # Object type: 0
17523:M 29 Nov 09:09:56.275 # Object encoding: 0
17523:M 29 Nov 09:09:56.275 # Object refcount: 1
17523:M 29 Nov 09:09:56.275 # Object raw string len: 0
17523:M 29 Nov 09:09:56.275 # Object raw string content: ""
17523:M 29 Nov 09:09:56.275 # === ASSERTION FAILED ===
17523:M 29 Nov 09:09:56.275 # ==> db.c:164 'retval == DICT_OK' is not true
17523:M 29 Nov 09:09:56.275 # (forcing SIGSEGV to print the bug report.)
17523:M 29 Nov 09:09:56.275 # Redis 4.0.2 crashed by signal: 11
17523:M 29 Nov 09:09:56.275 # Accessing address: 0xffffffffffffffff
17523:M 29 Nov 09:09:56.275 # Failed assertion: retval == DICT_OK (db.c:164)

------ STACK TRACE ------
/home/coremail/libexec/redis-server 10.1.34.90:6379(logStackTrace+0x34)[0x4696f4]
/home/coremail/libexec/redis-server 10.1.34.90:6379(sigsegvHandler+0x88)[0x469dc8]
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xfffe93580698]
/home/coremail/libexec/redis-server 10.1.34.90:6379(_serverAssert+0x7c)[0x467a8c]
/home/coremail/libexec/redis-server 10.1.34.90:6379(dbAdd+0xac)[0x44010c]
/home/coremail/libexec/redis-server 10.1.34.90:6379(rdbLoadRio+0x238)[0x44a6d0]
/home/coremail/libexec/redis-server 10.1.34.90:6379(rdbLoad+0x40)[0x44ac10]
/home/coremail/libexec/redis-server 10.1.34.90:6379(loadDataFromDisk+0x74)[0x42e604]
/home/coremail/libexec/redis-server 10.1.34.90:6379(main+0x444)[0x4224c4]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe4)[0xfffe932e0da4]
/home/coremail/libexec/redis-server 10.1.34.90:6379[0x422828]

------ INFO OUTPUT ------
# Server
redis_version:4.0.2
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:e6b8ff705c37b8cc
redis_mode:standalone
os:Linux 4.19.0-arm64-server aarch64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:17523
run_id:d94ad11e9e6ab322823653422f909a39ec74bb7c
tcp_port:6379
uptime_in_seconds:0
uptime_in_days:0
hz:10
lru_clock:10758244
executable:/home/coremail/libexec/redis-server
config_file:/home/coremail/conf/redis.conf

# Clients
connected_clients:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:943872
used_memory_human:921.75K
used_memory_rss:0
used_memory_rss_human:0B
used_memory_peak:943872
used_memory_peak_human:921.75K
used_memory_peak_perc:inf%
used_memory_overhead:931064
used_memory_startup:798480
used_memory_dataset:12808
used_memory_dataset_perc:8.81%
total_system_memory:68545413120
total_system_memory_human:63.84G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:0.00
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:1
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1638148196
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
loading_start_time:1638148196
loading_total_bytes:3461936
loading_loaded_bytes:0
loading_loaded_perc:0.00
loading_eta_seconds:1

# Stats
total_connections_received:0
total_commands_processed:0
instantaneous_ops_per_sec:0
total_net_input_bytes:0
total_net_output_bytes:0
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:0
master_replid:63aef2c7d29f44608f87747464d143bfdadfab04
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:0.00
used_cpu_user:0.01
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Commandstats

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=24,expires=23,avg_ttl=0

------ CLIENT LIST OUTPUT ------

------ REGISTERS ------

------ FAST MEMORY TEST ------
17523:M 29 Nov 09:09:56.275 # Bio thread for job type #0 terminated
17523:M 29 Nov 09:09:56.275 # Bio thread for job type #1 terminated
17523:M 29 Nov 09:09:56.275 # Bio thread for job type #2 terminated
*** Preparing to test memory region 550000 (131072 bytes)
*** Preparing to test memory region 10ba0000 (196608 bytes)
*** Preparing to test memory region fffe90be0000 (8388608 bytes)
*** Preparing to test memory region fffe913f0000 (8388608 bytes)
*** Preparing to test memory region fffe91c00000 (10485760 bytes)
*** Preparing to test memory region fffe93000000 (2097152 bytes)
*** Preparing to test memory region fffe93430000 (65536 bytes)
.O.O.O.O.O.O.O
Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.

=== REDIS BUG REPORT END. Make sure to include from START to END. ===

Comment From: oranagra

Looks like the RDB contains two keys with the same name for some reason. Do you have any idea how that happened? Was this RDB file the result of simple use of Redis commands? Do you have any clue what could be special in this case? Maybe you can load this RDB file into redis-rdb-tools or a similar tool and find out the name and type of that key.
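For what it's worth, here is a rough sketch of that check: walk the RDB with redis-rdb-tools and flag any key name that shows up more than once in the same database. The class name, the `dump.rdb` path, and the exact callback signatures are assumptions, not something taken from this report (hence the permissive `*args`/`**kwargs`); verify them against the installed rdbtools version before relying on the output.

```python
# Sketch (assumed names/paths): scan an RDB with redis-rdb-tools
# ("pip install rdbtools") and report key names that appear more than
# once in the same database. Callback method names follow the rdbtools
# callback interface; *args/**kwargs keep the overrides tolerant of
# signature differences between rdbtools versions.
from collections import Counter

from rdbtools import RdbParser, RdbCallback


class DuplicateKeyFinder(RdbCallback):
    def __init__(self):
        super(DuplicateKeyFinder, self).__init__(string_escape=None)
        self.db = None
        self.counts = Counter()

    def start_database(self, db_number):
        self.db = db_number

    def _seen(self, key):
        self.counts[(self.db, key)] += 1

    # Each of these callbacks fires once per top-level key in the RDB;
    # module/stream types would need their own start_* overrides.
    def set(self, key, *args, **kwargs):
        self._seen(key)

    def start_hash(self, key, *args, **kwargs):
        self._seen(key)

    def start_set(self, key, *args, **kwargs):
        self._seen(key)

    def start_list(self, key, *args, **kwargs):
        self._seen(key)

    def start_sorted_set(self, key, *args, **kwargs):
        self._seen(key)


if __name__ == '__main__':
    cb = DuplicateKeyFinder()
    RdbParser(cb).parse('dump.rdb')  # hypothetical path; point at the real RDB file
    for (db, key), n in cb.counts.items():
        if n > 1:
            print('db%s: key %r appears %d times' % (db, key, n))
```

Comparing the per-database totals from such a scan against the keyspace counts reported by INFO (db0:keys=24 in this report) is another quick sanity check.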

Comment From: leoruns

The SA already deleted the RDB file, so I have no idea now. If it happens again, I can follow your suggestion. Thanks for your reply.

Comment From: oranagra

OK, I'm closing this one for now; feel free to re-open if you find more info. P.S. I suggest upgrading: you're using an old version, and maybe this problem has already been solved.