Linux Version: CentOS Linux release 7.9.2009 (AltArch) - ppc64
Redis Version: 5.0.9
Jemalloc Version: 5.2.1
gcc Version: 4.8.5 20150623 (Red Hat 4.8.5-44)
Executed Command: make test & ./runtest --clients 1
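For reference, a minimal reproduction sketch of the build and test steps above (the source directory name, the explicit MALLOC selection, and the -j4 flag are assumptions for illustration; only "make test" and "./runtest --clients 1" are taken from this report):

    cd redis-5.0.9                              # unpacked Redis 5.0.9 source tree (assumed path)
    make distclean && make MALLOC=jemalloc -j4  # build with jemalloc as the allocator (the default on Linux)
    make test                                   # runs the Tcl test suite via ./runtest
    ./runtest --clients 1                       # re-run the suite restricted to a single test client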
lscpu output:
Architecture: ppc64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Big Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 4
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 1
Model: 2.3 (pvr 003f 0203)
Model name: POWER7 (architected), altivec supported
Hypervisor vendor: pHyp
Virtualization type: para
L1d cache: 32K
L1i cache: 32K
NUMA node0 CPU(s): 0-31
free -m output:
total used free shared buff/cache available
Mem: 9802 937 2710 12 6154 8230
Swap: 4095 0 4095
./runtest --clients 1 output:
Cleanup: may take some time... OK
Starting test server at port 11111
[ready]: 64621
Testing unit/printver
Testing Redis version 5.0.9 (00000000)
[1/50 done]: unit/printver (1 seconds)
Testing unit/dump
[ok]: DUMP / RESTORE are able to serialize / unserialize a simple key
[ok]: RESTORE can set an arbitrary expire to the materialized key
[ok]: RESTORE can set an expire that overflows a 32 bit integer
[ok]: RESTORE can set an absolute expire
[ok]: RESTORE can set LRU
[ok]: RESTORE can set LFU
[ok]: RESTORE returns an error of the key already exists
[ok]: RESTORE can overwrite an existing key with REPLACE
[ok]: RESTORE can detect a syntax error for unrecongized options
[ok]: DUMP of non existing key returns nil
[ok]: MIGRATE is caching connections
[ok]: MIGRATE cached connections are released after some time
[ok]: MIGRATE is able to migrate a key between two instances
[ok]: MIGRATE is able to copy a key between two instances
[ok]: MIGRATE will not overwrite existing keys, unless REPLACE is used
[ok]: MIGRATE propagates TTL correctly
[ok]: MIGRATE can correctly transfer large values
[ok]: MIGRATE can correctly transfer hashes
[ok]: MIGRATE timeout actually works
[ok]: MIGRATE can migrate multiple keys at once
[ok]: MIGRATE with multiple keys must have empty key arg
[ok]: MIGRATE with multiple keys migrate just existing ones
[ok]: MIGRATE with multiple keys: stress command rewriting
[ok]: MIGRATE with multiple keys: delete just ack keys
[ok]: MIGRATE AUTH: correct and wrong password cases
[2/50 done]: unit/dump (27 seconds)
Testing unit/auth
[ok]: AUTH fails if there is no password configured server side
[ok]: AUTH fails when a wrong password is given
[ok]: Arbitrary command gives an error when AUTH is required
[ok]: AUTH succeeds when the right password is given
[ok]: Once AUTH succeeded we can actually send commands to the server
[3/50 done]: unit/auth (1 seconds)
Testing unit/protocol
[ok]: Handle an empty query
[ok]: Negative multibulk length
[ok]: Out of range multibulk length
[ok]: Wrong multibulk payload header
[ok]: Negative multibulk payload length
[ok]: Out of range multibulk payload length
[ok]: Non-number multibulk payload length
[ok]: Multi bulk request not followed by bulk arguments
[ok]: Generic wrong number of args
[ok]: Unbalanced number of quotes
[ok]: Protocol desync regression test #1
[ok]: Protocol desync regression test #2
[ok]: Protocol desync regression test #3
[ok]: Regression for a crash with blocking ops and pipelining
[4/50 done]: unit/protocol (0 seconds)
Testing unit/keyspace
[ok]: DEL against a single item
[ok]: Vararg DEL
[ok]: KEYS with pattern
[ok]: KEYS to get all keys
[ok]: DBSIZE
[ok]: DEL all keys
[ok]: DEL against expired key
[ok]: EXISTS
[ok]: Zero length value in key. SET/GET/EXISTS
[ok]: Commands pipelining
[ok]: Non existing command
[ok]: RENAME basic usage
[ok]: RENAME source key should no longer exist
[ok]: RENAME against already existing key
[ok]: RENAMENX basic usage
[ok]: RENAMENX against already existing key
[ok]: RENAMENX against already existing key (2)
[ok]: RENAME against non existing source key
[ok]: RENAME where source and dest key are the same (existing)
[ok]: RENAMENX where source and dest key are the same (existing)
[ok]: RENAME where source and dest key are the same (non existing)
[ok]: RENAME with volatile key, should move the TTL as well
[ok]: RENAME with volatile key, should not inherit TTL of target key
[ok]: DEL all keys again (DB 0)
[ok]: DEL all keys again (DB 1)
[ok]: MOVE basic usage
[ok]: MOVE against key existing in the target DB
[ok]: MOVE against non-integer DB (#1428)
[ok]: MOVE can move key expire metadata as well
[ok]: MOVE does not create an expire if it does not exist
[ok]: SET/GET keys in different DBs
[ok]: RANDOMKEY
[ok]: RANDOMKEY against empty DB
[ok]: RANDOMKEY regression 1
[ok]: KEYS * two times with long key, Github issue #1208
[5/50 done]: unit/keyspace (2 seconds)
Testing unit/scan
[ok]: SCAN basic
[ok]: SCAN COUNT
[ok]: SCAN MATCH
[ok]: SSCAN with encoding intset
[ok]: SSCAN with encoding hashtable
[ok]: HSCAN with encoding ziplist
[ok]: HSCAN with encoding hashtable
[ok]: ZSCAN with encoding ziplist
[ok]: ZSCAN with encoding skiplist
[ok]: SCAN guarantees check under write load
[ok]: SSCAN with integer encoded object (issue #1345)
[ok]: SSCAN with PATTERN
[ok]: HSCAN with PATTERN
[ok]: ZSCAN with PATTERN
[ok]: ZSCAN scores: regression test for issue #2175
[ok]: SCAN regression test for issue #4906
[6/50 done]: unit/scan (8 seconds)
Testing unit/type/string
[ok]: SET and GET an item
[ok]: SET and GET an empty item
[ok]: Very big payload in GET/SET
[ok]: Very big payload random access
[ok]: SET 10000 numeric keys and access all them in reverse order
[ok]: DBSIZE should be 10000 now
[ok]: SETNX target key missing
[ok]: SETNX target key exists
[ok]: SETNX against not-expired volatile key
[ok]: SETNX against expired volatile key
[ok]: MGET
[ok]: MGET against non existing key
[ok]: MGET against non-string key
[ok]: GETSET (set new value)
[ok]: GETSET (replace old value)
[ok]: MSET base case
[ok]: MSET wrong number of args
[ok]: MSETNX with already existent key
[ok]: MSETNX with not existing keys
[ok]: STRLEN against non-existing key
[ok]: STRLEN against integer-encoded value
[ok]: STRLEN against plain string
[ok]: SETBIT against non-existing key
[ok]: SETBIT against string-encoded key
[ok]: SETBIT against integer-encoded key
[ok]: SETBIT against key with wrong type
[ok]: SETBIT with out of range bit offset
[ok]: SETBIT with non-bit argument
[ok]: SETBIT fuzzing
[ok]: GETBIT against non-existing key
[ok]: GETBIT against string-encoded key
[ok]: GETBIT against integer-encoded key
[ok]: SETRANGE against non-existing key
[ok]: SETRANGE against string-encoded key
[ok]: SETRANGE against integer-encoded key
[ok]: SETRANGE against key with wrong type
[ok]: SETRANGE with out of range offset
[ok]: GETRANGE against non-existing key
[ok]: GETRANGE against string value
[ok]: GETRANGE against integer-encoded value
[ok]: GETRANGE fuzzing
[ok]: Extended SET can detect syntax errors
[ok]: Extended SET NX option
[ok]: Extended SET XX option
[ok]: Extended SET EX option
[ok]: Extended SET PX option
[ok]: Extended SET using multiple options at once
[ok]: GETRANGE with huge ranges, Github issue #1844
[7/50 done]: unit/type/string (12 seconds)
Testing unit/type/incr
[ok]: INCR against non existing key
[ok]: INCR against key created by incr itself
[ok]: INCR against key originally set with SET
[ok]: INCR over 32bit value
[ok]: INCRBY over 32bit value with over 32bit increment
[ok]: INCR fails against key with spaces (left)
[ok]: INCR fails against key with spaces (right)
[ok]: INCR fails against key with spaces (both)
[ok]: INCR fails against a key holding a list
[ok]: DECRBY over 32bit value with over 32bit increment, negative res
[ok]: INCR uses shared objects in the 0-9999 range
[ok]: INCR can modify objects in-place
[ok]: INCRBYFLOAT against non existing key
[ok]: INCRBYFLOAT against key originally set with SET
[ok]: INCRBYFLOAT over 32bit value
[ok]: INCRBYFLOAT over 32bit value with over 32bit increment
[ok]: INCRBYFLOAT fails against key with spaces (left)
[ok]: INCRBYFLOAT fails against key with spaces (right)
[ok]: INCRBYFLOAT fails against key with spaces (both)
[ok]: INCRBYFLOAT fails against a key holding a list
[ok]: INCRBYFLOAT does not allow NaN or Infinity
[ok]: INCRBYFLOAT decrement
[ok]: string to double with null terminator
[8/50 done]: unit/type/incr (0 seconds)
Testing unit/type/list
[ok]: LPUSH, RPUSH, LLENGTH, LINDEX, LPOP - ziplist
[ok]: LPUSH, RPUSH, LLENGTH, LINDEX, LPOP - regular list
[ok]: R/LPOP against empty list
[ok]: Variadic RPUSH/LPUSH
[ok]: DEL a list
[ok]: BLPOP, BRPOP: single existing list - linkedlist
[ok]: BLPOP, BRPOP: multiple existing lists - linkedlist
[ok]: BLPOP, BRPOP: second list has an entry - linkedlist
[ok]: BRPOPLPUSH - linkedlist
[ok]: BLPOP, BRPOP: single existing list - ziplist
[ok]: BLPOP, BRPOP: multiple existing lists - ziplist
[ok]: BLPOP, BRPOP: second list has an entry - ziplist
[ok]: BRPOPLPUSH - ziplist
[ok]: BLPOP, LPUSH + DEL should not awake blocked client
[ok]: BLPOP, LPUSH + DEL + SET should not awake blocked client
[ok]: BLPOP with same key multiple times should work (issue #801)
[ok]: MULTI/EXEC is isolated from the point of view of BLPOP
[ok]: BLPOP with variadic LPUSH
[ok]: BRPOPLPUSH with zero timeout should block indefinitely
[ok]: BRPOPLPUSH with a client BLPOPing the target list
[ok]: BRPOPLPUSH with wrong source type
[ok]: BRPOPLPUSH with wrong destination type
[ok]: BRPOPLPUSH maintains order of elements after failure
[ok]: BRPOPLPUSH with multiple blocked clients
[ok]: Linked BRPOPLPUSH
[ok]: Circular BRPOPLPUSH
[ok]: Self-referential BRPOPLPUSH
[ok]: BRPOPLPUSH inside a transaction
[ok]: PUSH resulting from BRPOPLPUSH affect WATCH
[ok]: BRPOPLPUSH does not affect WATCH while still blocked
[ok]: BRPOPLPUSH timeout
[ok]: BLPOP when new key is moved into place
[ok]: BLPOP when result key is created by SORT..STORE
[ok]: BLPOP: with single empty list argument
[ok]: BLPOP: with negative timeout
[ok]: BLPOP: with non-integer timeout
[ok]: BLPOP: with zero timeout should block indefinitely
[ok]: BLPOP: second argument is not a list
[ok]: BLPOP: timeout
[ok]: BLPOP: arguments are empty
[ok]: BRPOP: with single empty list argument
[ok]: BRPOP: with negative timeout
[ok]: BRPOP: with non-integer timeout
[ok]: BRPOP: with zero timeout should block indefinitely
[ok]: BRPOP: second argument is not a list
[ok]: BRPOP: timeout
[ok]: BRPOP: arguments are empty
[ok]: BLPOP inside a transaction
[ok]: LPUSHX, RPUSHX - generic
[ok]: LPUSHX, RPUSHX - linkedlist
[ok]: LINSERT - linkedlist
[ok]: LPUSHX, RPUSHX - ziplist
[ok]: LINSERT - ziplist
[ok]: LINSERT raise error on bad syntax
[ok]: LINDEX consistency test - quicklist
[ok]: LINDEX random access - quicklist
[ok]: Check if list is still ok after a DEBUG RELOAD - quicklist
[ok]: LINDEX consistency test - quicklist
[ok]: LINDEX random access - quicklist
[ok]: Check if list is still ok after a DEBUG RELOAD - quicklist
[ok]: LLEN against non-list value error
[ok]: LLEN against non existing key
[ok]: LINDEX against non-list value error
[ok]: LINDEX against non existing key
[ok]: LPUSH against non-list value error
[ok]: RPUSH against non-list value error
[ok]: RPOPLPUSH base case - linkedlist
[ok]: RPOPLPUSH with the same list as src and dst - linkedlist
[ok]: RPOPLPUSH with linkedlist source and existing target linkedlist
[ok]: RPOPLPUSH with linkedlist source and existing target ziplist
[ok]: RPOPLPUSH base case - ziplist
[ok]: RPOPLPUSH with the same list as src and dst - ziplist
[ok]: RPOPLPUSH with ziplist source and existing target linkedlist
[ok]: RPOPLPUSH with ziplist source and existing target ziplist
[ok]: RPOPLPUSH against non existing key
[ok]: RPOPLPUSH against non list src key
[ok]: RPOPLPUSH against non list dst key
[ok]: RPOPLPUSH against non existing src key
[ok]: Basic LPOP/RPOP - linkedlist
[ok]: Basic LPOP/RPOP - ziplist
[ok]: LPOP/RPOP against non list value
[ok]: Mass RPOP/LPOP - quicklist
[ok]: Mass RPOP/LPOP - quicklist
[ok]: LRANGE basics - linkedlist
[ok]: LRANGE inverted indexes - linkedlist
[ok]: LRANGE out of range indexes including the full list - linkedlist
[ok]: LRANGE out of range negative end index - linkedlist
[ok]: LRANGE basics - ziplist
[ok]: LRANGE inverted indexes - ziplist
[ok]: LRANGE out of range indexes including the full list - ziplist
[ok]: LRANGE out of range negative end index - ziplist
[ok]: LRANGE against non existing key
[ok]: LTRIM basics - linkedlist
[ok]: LTRIM out of range negative end index - linkedlist
[ok]: LTRIM basics - ziplist
[ok]: LTRIM out of range negative end index - ziplist
[ok]: LSET - linkedlist
[ok]: LSET out of range index - linkedlist
[ok]: LSET - ziplist
[ok]: LSET out of range index - ziplist
[ok]: LSET against non existing key
[ok]: LSET against non list value
[ok]: LREM remove all the occurrences - linkedlist
[ok]: LREM remove the first occurrence - linkedlist
[ok]: LREM remove non existing element - linkedlist
[ok]: LREM starting from tail with negative count - linkedlist
[ok]: LREM starting from tail with negative count (2) - linkedlist
[ok]: LREM deleting objects that may be int encoded - linkedlist
[ok]: LREM remove all the occurrences - ziplist
[ok]: LREM remove the first occurrence - ziplist
[ok]: LREM remove non existing element - ziplist
[ok]: LREM starting from tail with negative count - ziplist
[ok]: LREM starting from tail with negative count (2) - ziplist
[ok]: LREM deleting objects that may be int encoded - ziplist
[ok]: Regression for bug 593 - chaining BRPOPLPUSH with other blocking cmds
[9/50 done]: unit/type/list (13 seconds)
Testing unit/type/list-2
[ok]: LTRIM stress testing - linkedlist
[ok]: LTRIM stress testing - ziplist
[10/50 done]: unit/type/list-2 (17 seconds)
Testing unit/type/list-3
[ok]: Explicit regression for a list bug
[ok]: Regression for quicklist #3343 bug
[ok]: Stress tester for #3343-alike bugs
[ok]: ziplist implementation: value encoding and backlink
[ok]: ziplist implementation: encoding stress testing
[11/50 done]: unit/type/list-3 (103 seconds)
Testing unit/type/set
[ok]: SADD, SCARD, SISMEMBER, SMEMBERS basics - regular set
[ok]: SADD, SCARD, SISMEMBER, SMEMBERS basics - intset
[ok]: SADD against non set
[ok]: SADD a non-integer against an intset
[ok]: SADD an integer larger than 64 bits
[ok]: SADD overflows the maximum allowed integers in an intset
[ok]: Variadic SADD
[ok]: Set encoding after DEBUG RELOAD
[ok]: SREM basics - regular set
[ok]: SREM basics - intset
[ok]: SREM with multiple arguments
[ok]: SREM variadic version with more args needed to destroy the key
[ok]: Generated sets must be encoded as hashtable
[ok]: SINTER with two sets - hashtable
[ok]: SINTERSTORE with two sets - hashtable
[ok]: SINTERSTORE with two sets, after a DEBUG RELOAD - hashtable
[ok]: SUNION with two sets - hashtable
[ok]: SUNIONSTORE with two sets - hashtable
[ok]: SINTER against three sets - hashtable
[ok]: SINTERSTORE with three sets - hashtable
[ok]: SUNION with non existing keys - hashtable
[ok]: SDIFF with two sets - hashtable
[ok]: SDIFF with three sets - hashtable
[ok]: SDIFFSTORE with three sets - hashtable
[ok]: Generated sets must be encoded as intset
[ok]: SINTER with two sets - intset
[ok]: SINTERSTORE with two sets - intset
[ok]: SINTERSTORE with two sets, after a DEBUG RELOAD - intset
[ok]: SUNION with two sets - intset
[ok]: SUNIONSTORE with two sets - intset
[ok]: SINTER against three sets - intset
[ok]: SINTERSTORE with three sets - intset
[ok]: SUNION with non existing keys - intset
[ok]: SDIFF with two sets - intset
[ok]: SDIFF with three sets - intset
[ok]: SDIFFSTORE with three sets - intset
[ok]: SDIFF with first set empty
[ok]: SDIFF with same set two times
[ok]: SDIFF fuzzing
[ok]: SINTER against non-set should throw error
[ok]: SUNION against non-set should throw error
[ok]: SINTER should handle non existing key as empty
[ok]: SINTER with same integer elements but different encoding
[ok]: SINTERSTORE against non existing keys should delete dstkey
[ok]: SUNIONSTORE against non existing keys should delete dstkey
[ok]: SPOP basics - hashtable
[ok]: SPOP with <count>=1 - hashtable
[ok]: SRANDMEMBER - hashtable
[ok]: SPOP basics - intset
[ok]: SPOP with <count>=1 - intset
[ok]: SRANDMEMBER - intset
[ok]: SPOP with <count>
[ok]: SPOP with <count>
[ok]: SPOP using integers, testing Knuth's and Floyd's algorithm
[ok]: SPOP using integers with Knuth's algorithm
[ok]: SPOP new implementation: code path #1
[ok]: SPOP new implementation: code path #2
[ok]: SPOP new implementation: code path #3
[ok]: SRANDMEMBER with <count> against non existing key
[ok]: SRANDMEMBER with <count> - hashtable
[ok]: SRANDMEMBER with <count> - intset
[ok]: SMOVE basics - from regular set to intset
[ok]: SMOVE basics - from intset to regular set
[ok]: SMOVE non existing key
[ok]: SMOVE non existing src set
[ok]: SMOVE from regular set to non existing destination set
[ok]: SMOVE from intset to non existing destination set
[ok]: SMOVE wrong src key type
[ok]: SMOVE wrong dst key type
[ok]: SMOVE with identical source and destination
[ok]: intsets implementation stress testing
[12/50 done]: unit/type/set (7 seconds)
Testing unit/type/zset
[ok]: Check encoding - ziplist
[ok]: ZSET basic ZADD and score update - ziplist
[ok]: ZSET element can't be set to NaN with ZADD - ziplist
[ok]: ZSET element can't be set to NaN with ZINCRBY
[ok]: ZADD with options syntax error with incomplete pair
[ok]: ZADD XX option without key - ziplist
[ok]: ZADD XX existing key - ziplist
[ok]: ZADD XX returns the number of elements actually added
[ok]: ZADD XX updates existing elements score
[ok]: ZADD XX and NX are not compatible
[ok]: ZADD NX with non existing key
[ok]: ZADD NX only add new elements without updating old ones
[ok]: ZADD INCR works like ZINCRBY
[ok]: ZADD INCR works with a single score-elemenet pair
[ok]: ZADD CH option changes return value to all changed elements
[ok]: ZINCRBY calls leading to NaN result in error
[ok]: ZADD - Variadic version base case
[ok]: ZADD - Return value is the number of actually added items
[ok]: ZADD - Variadic version does not add nothing on single parsing err
[ok]: ZADD - Variadic version will raise error on missing arg
[ok]: ZINCRBY does not work variadic even if shares ZADD implementation
[ok]: ZCARD basics - ziplist
[ok]: ZREM removes key after last element is removed
[ok]: ZREM variadic version
[ok]: ZREM variadic version -- remove elements after key deletion
[ok]: ZRANGE basics - ziplist
[ok]: ZREVRANGE basics - ziplist
[ok]: ZRANK/ZREVRANK basics - ziplist
[ok]: ZRANK - after deletion - ziplist
[ok]: ZINCRBY - can create a new sorted set - ziplist
[ok]: ZINCRBY - increment and decrement - ziplist
[ok]: ZINCRBY return value
[ok]: ZRANGEBYSCORE/ZREVRANGEBYSCORE/ZCOUNT basics
[ok]: ZRANGEBYSCORE with WITHSCORES
[ok]: ZRANGEBYSCORE with LIMIT
[ok]: ZRANGEBYSCORE with LIMIT and WITHSCORES
[ok]: ZRANGEBYSCORE with non-value min or max
[ok]: ZRANGEBYLEX/ZREVRANGEBYLEX/ZLEXCOUNT basics
[ok]: ZLEXCOUNT advanced
[ok]: ZRANGEBYSLEX with LIMIT
[ok]: ZRANGEBYLEX with invalid lex range specifiers
[ok]: ZREMRANGEBYSCORE basics
[ok]: ZREMRANGEBYSCORE with non-value min or max
[ok]: ZREMRANGEBYRANK basics
[ok]: ZUNIONSTORE against non-existing key doesn't set destination - ziplist
[ok]: ZUNIONSTORE with empty set - ziplist
[ok]: ZUNIONSTORE basics - ziplist
[ok]: ZUNIONSTORE with weights - ziplist
[ok]: ZUNIONSTORE with a regular set and weights - ziplist
[ok]: ZUNIONSTORE with AGGREGATE MIN - ziplist
[ok]: ZUNIONSTORE with AGGREGATE MAX - ziplist
[ok]: ZINTERSTORE basics - ziplist
[ok]: ZINTERSTORE with weights - ziplist
[ok]: ZINTERSTORE with a regular set and weights - ziplist
[ok]: ZINTERSTORE with AGGREGATE MIN - ziplist
[ok]: ZINTERSTORE with AGGREGATE MAX - ziplist
[ok]: ZUNIONSTORE with +inf/-inf scores - ziplist
[ok]: ZUNIONSTORE with NaN weights ziplist
[ok]: ZINTERSTORE with +inf/-inf scores - ziplist
[ok]: ZINTERSTORE with NaN weights ziplist
[ok]: Basic ZPOP with a single key - ziplist
[ok]: ZPOP with count - ziplist
[ok]: BZPOP with a single existing sorted set - ziplist
[ok]: BZPOP with multiple existing sorted sets - ziplist
[ok]: BZPOP second sorted set has members - ziplist
[ok]: Check encoding - skiplist
[ok]: ZSET basic ZADD and score update - skiplist
[ok]: ZSET element can't be set to NaN with ZADD - skiplist
[ok]: ZSET element can't be set to NaN with ZINCRBY
[ok]: ZADD with options syntax error with incomplete pair
[ok]: ZADD XX option without key - skiplist
[ok]: ZADD XX existing key - skiplist
[ok]: ZADD XX returns the number of elements actually added
[ok]: ZADD XX updates existing elements score
[ok]: ZADD XX and NX are not compatible
[ok]: ZADD NX with non existing key
[ok]: ZADD NX only add new elements without updating old ones
[ok]: ZADD INCR works like ZINCRBY
[ok]: ZADD INCR works with a single score-elemenet pair
[ok]: ZADD CH option changes return value to all changed elements
[ok]: ZINCRBY calls leading to NaN result in error
[ok]: ZADD - Variadic version base case
[ok]: ZADD - Return value is the number of actually added items
[ok]: ZADD - Variadic version does not add nothing on single parsing err
[ok]: ZADD - Variadic version will raise error on missing arg
[ok]: ZINCRBY does not work variadic even if shares ZADD implementation
[ok]: ZCARD basics - skiplist
[ok]: ZREM removes key after last element is removed
[ok]: ZREM variadic version
[ok]: ZREM variadic version -- remove elements after key deletion
[ok]: ZRANGE basics - skiplist
[ok]: ZREVRANGE basics - skiplist
[ok]: ZRANK/ZREVRANK basics - skiplist
[ok]: ZRANK - after deletion - skiplist
[ok]: ZINCRBY - can create a new sorted set - skiplist
[ok]: ZINCRBY - increment and decrement - skiplist
[ok]: ZINCRBY return value
[ok]: ZRANGEBYSCORE/ZREVRANGEBYSCORE/ZCOUNT basics
[ok]: ZRANGEBYSCORE with WITHSCORES
[ok]: ZRANGEBYSCORE with LIMIT
[ok]: ZRANGEBYSCORE with LIMIT and WITHSCORES
[ok]: ZRANGEBYSCORE with non-value min or max
[ok]: ZRANGEBYLEX/ZREVRANGEBYLEX/ZLEXCOUNT basics
[ok]: ZLEXCOUNT advanced
[ok]: ZRANGEBYSLEX with LIMIT
[ok]: ZRANGEBYLEX with invalid lex range specifiers
[ok]: ZREMRANGEBYSCORE basics
[ok]: ZREMRANGEBYSCORE with non-value min or max
[ok]: ZREMRANGEBYRANK basics
[ok]: ZUNIONSTORE against non-existing key doesn't set destination - skiplist
[ok]: ZUNIONSTORE with empty set - skiplist
[ok]: ZUNIONSTORE basics - skiplist
[ok]: ZUNIONSTORE with weights - skiplist
[ok]: ZUNIONSTORE with a regular set and weights - skiplist
[ok]: ZUNIONSTORE with AGGREGATE MIN - skiplist
[ok]: ZUNIONSTORE with AGGREGATE MAX - skiplist
[ok]: ZINTERSTORE basics - skiplist
[ok]: ZINTERSTORE with weights - skiplist
[ok]: ZINTERSTORE with a regular set and weights - skiplist
[ok]: ZINTERSTORE with AGGREGATE MIN - skiplist
[ok]: ZINTERSTORE with AGGREGATE MAX - skiplist
[ok]: ZUNIONSTORE with +inf/-inf scores - skiplist
[ok]: ZUNIONSTORE with NaN weights skiplist
[ok]: ZINTERSTORE with +inf/-inf scores - skiplist
[ok]: ZINTERSTORE with NaN weights skiplist
[ok]: Basic ZPOP with a single key - skiplist
[ok]: ZPOP with count - skiplist
[ok]: BZPOP with a single existing sorted set - skiplist
[ok]: BZPOP with multiple existing sorted sets - skiplist
[ok]: BZPOP second sorted set has members - skiplist
[ok]: ZINTERSTORE regression with two sets, intset+hashtable
[ok]: ZUNIONSTORE regression, should not create NaN in scores
[ok]: ZINTERSTORE #516 regression, mixed sets and ziplist zsets
[ok]: ZUNIONSTORE result is sorted
[ok]: ZSET commands don't accept the empty strings as valid score
[ok]: ZSCORE - ziplist
[ok]: ZSCORE after a DEBUG RELOAD - ziplist
[ok]: ZSET sorting stresser - ziplist
[ok]: ZRANGEBYSCORE fuzzy test, 100 ranges in 128 element sorted set - ziplist
[ok]: ZRANGEBYLEX fuzzy test, 100 ranges in 128 element sorted set - ziplist
[ok]: ZREMRANGEBYLEX fuzzy test, 100 ranges in 128 element sorted set - ziplist
[ok]: ZSETs skiplist implementation backlink consistency test - ziplist
[ok]: ZSETs ZRANK augmented skip list stress testing - ziplist
[ok]: BZPOPMIN, ZADD + DEL should not awake blocked client
[ok]: BZPOPMIN, ZADD + DEL + SET should not awake blocked client
[ok]: BZPOPMIN with same key multiple times should work
[ok]: MULTI/EXEC is isolated from the point of view of BZPOPMIN
[ok]: BZPOPMIN with variadic ZADD
[ok]: BZPOPMIN with zero timeout should block indefinitely
[ok]: ZSCORE - skiplist
[ok]: ZSCORE after a DEBUG RELOAD - skiplist
[ok]: ZSET sorting stresser - skiplist
[ok]: ZRANGEBYSCORE fuzzy test, 100 ranges in 100 element sorted set - skiplist
[ok]: ZRANGEBYLEX fuzzy test, 100 ranges in 100 element sorted set - skiplist
[ok]: ZREMRANGEBYLEX fuzzy test, 100 ranges in 100 element sorted set - skiplist
[ok]: ZSETs skiplist implementation backlink consistency test - skiplist
[ok]: ZSETs ZRANK augmented skip list stress testing - skiplist
[ok]: BZPOPMIN, ZADD + DEL should not awake blocked client
[ok]: BZPOPMIN, ZADD + DEL + SET should not awake blocked client
[ok]: BZPOPMIN with same key multiple times should work
[ok]: MULTI/EXEC is isolated from the point of view of BZPOPMIN
[ok]: BZPOPMIN with variadic ZADD
[ok]: BZPOPMIN with zero timeout should block indefinitely
[ok]: ZSET skiplist order consistency when elements are moved
[13/50 done]: unit/type/zset (13 seconds)
Testing unit/type/hash
[ok]: HSET/HLEN - Small hash creation
[ok]: Is the small hash encoded with a ziplist?
[ok]: HSET/HLEN - Big hash creation
[ok]: Is the big hash encoded with an hash table?
[ok]: HGET against the small hash
[ok]: HGET against the big hash
[ok]: HGET against non existing key
[ok]: HSET in update and insert mode
[ok]: HSETNX target key missing - small hash
[ok]: HSETNX target key exists - small hash
[ok]: HSETNX target key missing - big hash
[ok]: HSETNX target key exists - big hash
[ok]: HMSET wrong number of args
[ok]: HMSET - small hash
[ok]: HMSET - big hash
[ok]: HMGET against non existing key and fields
[ok]: HMGET against wrong type
[ok]: HMGET - small hash
[ok]: HMGET - big hash
[ok]: HKEYS - small hash
[ok]: HKEYS - big hash
[ok]: HVALS - small hash
[ok]: HVALS - big hash
[ok]: HGETALL - small hash
[ok]: HGETALL - big hash
[ok]: HDEL and return value
[ok]: HDEL - more than a single value
[ok]: HDEL - hash becomes empty before deleting all specified fields
[ok]: HEXISTS
[ok]: Is a ziplist encoded Hash promoted on big payload?
[ok]: HINCRBY against non existing database key
[ok]: HINCRBY against non existing hash key
[ok]: HINCRBY against hash key created by hincrby itself
[ok]: HINCRBY against hash key originally set with HSET
[ok]: HINCRBY over 32bit value
[ok]: HINCRBY over 32bit value with over 32bit increment
[ok]: HINCRBY fails against hash value with spaces (left)
[ok]: HINCRBY fails against hash value with spaces (right)
[ok]: HINCRBY can detect overflows
[ok]: HINCRBYFLOAT against non existing database key
[ok]: HINCRBYFLOAT against non existing hash key
[ok]: HINCRBYFLOAT against hash key created by hincrby itself
[ok]: HINCRBYFLOAT against hash key originally set with HSET
[ok]: HINCRBYFLOAT over 32bit value
[ok]: HINCRBYFLOAT over 32bit value with over 32bit increment
[ok]: HINCRBYFLOAT fails against hash value with spaces (left)
[ok]: HINCRBYFLOAT fails against hash value with spaces (right)
[ok]: HSTRLEN against the small hash
[ok]: HSTRLEN against the big hash
[ok]: HSTRLEN against non existing field
[ok]: HSTRLEN corner cases
[ok]: Hash ziplist regression test for large keys
[ok]: Hash fuzzing #1 - 10 fields
[ok]: Hash fuzzing #2 - 10 fields
[ok]: Hash fuzzing #1 - 512 fields
[ok]: Hash fuzzing #2 - 512 fields
[ok]: Stress test the hash ziplist -> hashtable encoding conversion
[14/50 done]: unit/type/hash (5 seconds)
Testing unit/type/stream
[ok]: XADD can add entries into a stream that XRANGE can fetch
[ok]: XADD IDs are incremental
[ok]: XADD IDs are incremental when ms is the same as well
[ok]: XADD IDs correctly report an error when overflowing
[ok]: XADD with MAXLEN option
[ok]: XADD mass insertion and XLEN
[ok]: XADD with ID 0-0
[ok]: XRANGE COUNT works as expected
[ok]: XREVRANGE COUNT works as expected
[ok]: XRANGE can be used to iterate the whole stream
[ok]: XREVRANGE returns the reverse of XRANGE
[ok]: XREAD with non empty stream
[ok]: Non blocking XREAD with empty streams
[ok]: XREAD with non empty second stream
[ok]: Blocking XREAD waiting new data
[ok]: Blocking XREAD waiting old data
[ok]: Blocking XREAD will not reply with an empty array
[ok]: XREAD: XADD + DEL should not awake client
[ok]: XREAD: XADD + DEL + LPUSH should not awake client
[ok]: XREAD with same stream name multiple times should work
[ok]: XREAD + multiple XADD inside transaction
[ok]: XDEL basic test
[ok]: XDEL fuzz test
[ok]: XRANGE fuzzing
[ok]: XREVRANGE regression test for issue #5006
[ok]: XREAD streamID edge (no-blocking)
[ok]: XREAD streamID edge (blocking)
[ok]: XADD streamID edge
[ok]: XADD with MAXLEN > xlen can propagate correctly
[ok]: XADD with ~ MAXLEN can propagate correctly
[ok]: XTRIM with ~ MAXLEN can propagate correctly
[ok]: XADD can CREATE an empty stream
[ok]: XSETID can set a specific ID
[ok]: XSETID cannot SETID with smaller ID
[ok]: XSETID cannot SETID on non-existent key
[ok]: Empty stream can be rewrite into AOF correctly
[ok]: Stream can be rewrite into AOF correctly after XDEL lastid
[15/50 done]: unit/type/stream (29 seconds)
Testing unit/type/stream-cgroups
[ok]: XGROUP CREATE: creation and duplicate group name detection
[ok]: XGROUP CREATE: automatic stream creation fails without MKSTREAM
[ok]: XGROUP CREATE: automatic stream creation works with MKSTREAM
[ok]: XREADGROUP will return only new elements
[ok]: XREADGROUP can read the history of the elements we own
[ok]: XPENDING is able to return pending items
[ok]: XPENDING can return single consumer items
[ok]: XACK is able to remove items from the client/group PEL
[ok]: XACK can't remove the same item multiple times
[ok]: XACK is able to accept multiple arguments
[ok]: PEL NACK reassignment after XGROUP SETID event
[ok]: XREADGROUP will not report data on empty history. Bug #5577
[ok]: XREADGROUP history reporting of deleted entries. Bug #5570
[ok]: Blocking XREADGROUP will not reply with an empty array
[ok]: XCLAIM can claim PEL items from another consumer
[ok]: XCLAIM without JUSTID increments delivery count
[ok]: Consumer group last ID propagation to slave (NOACK=0)
[ok]: Consumer group last ID propagation to slave (NOACK=1)
[16/50 done]: unit/type/stream-cgroups (3 seconds)
Testing unit/sort
[ok]: Old Ziplist: SORT BY key
[ok]: Old Ziplist: SORT BY key with limit
[ok]: Old Ziplist: SORT BY hash field
[ok]: Old Linked list: SORT BY key
[ok]: Old Linked list: SORT BY key with limit
[ok]: Old Linked list: SORT BY hash field
[ok]: Old Big Linked list: SORT BY key
[ok]: Old Big Linked list: SORT BY key with limit
[ok]: Old Big Linked list: SORT BY hash field
[ok]: Intset: SORT BY key
[ok]: Intset: SORT BY key with limit
[ok]: Intset: SORT BY hash field
[ok]: Hash table: SORT BY key
[ok]: Hash table: SORT BY key with limit
[ok]: Hash table: SORT BY hash field
[ok]: Big Hash table: SORT BY key
[ok]: Big Hash table: SORT BY key with limit
[ok]: Big Hash table: SORT BY hash field
[ok]: SORT GET #
[ok]: SORT GET <const>
[ok]: SORT GET (key and hash) with sanity check
[ok]: SORT BY key STORE
[ok]: SORT BY hash field STORE
[ok]: SORT extracts STORE correctly
[ok]: SORT extracts multiple STORE correctly
[ok]: SORT DESC
[ok]: SORT ALPHA against integer encoded strings
[ok]: SORT sorted set
[ok]: SORT sorted set BY nosort should retain ordering
[ok]: SORT sorted set BY nosort + LIMIT
[ok]: SORT sorted set BY nosort works as expected from scripts
[ok]: SORT sorted set: +inf and -inf handling
[ok]: SORT regression for issue #19, sorting floats
[ok]: SORT with STORE returns zero if result is empty (github issue 224)
[ok]: SORT with STORE does not create empty lists (github issue 224)
[ok]: SORT with STORE removes key if result is empty (github issue 227)
[ok]: SORT with BY <constant> and STORE should still order output
[ok]: SORT will complain with numerical sorting and bad doubles (1)
[ok]: SORT will complain with numerical sorting and bad doubles (2)
[ok]: SORT BY sub-sorts lexicographically if score is the same
[ok]: SORT GET with pattern ending with just -> does not get hash field
[ok]: SORT by nosort retains native order for lists
[ok]: SORT by nosort plus store retains native order for lists
[ok]: SORT by nosort with limit returns based on original list order
[ok]: SORT speed, 100 element list BY key, 100 times
[ok]: SORT speed, 100 element list BY hash field, 100 times
[ok]: SORT speed, 100 element list directly, 100 times
[ok]: SORT speed, 100 element list BY <const>, 100 times
[17/50 done]: unit/sort (9 seconds)
Testing unit/expire
[ok]: EXPIRE - set timeouts multiple times
[ok]: EXPIRE - It should be still possible to read 'x'
[ok]: EXPIRE - After 2.1 seconds the key should no longer be here
[ok]: EXPIRE - write on expire should work
[ok]: EXPIREAT - Check for EXPIRE alike behavior
[ok]: SETEX - Set + Expire combo operation. Check for TTL
[ok]: SETEX - Check value
[ok]: SETEX - Overwrite old key
[ok]: SETEX - Wait for the key to expire
[ok]: SETEX - Wrong time parameter
[ok]: PERSIST can undo an EXPIRE
[ok]: PERSIST returns 0 against non existing or non volatile keys
[ok]: EXPIRE pricision is now the millisecond
[ok]: PEXPIRE/PSETEX/PEXPIREAT can set sub-second expires
[ok]: TTL returns time to live in seconds
[ok]: PTTL returns time to live in milliseconds
[ok]: TTL / PTTL return -1 if key has no expire
[ok]: TTL / PTTL return -2 if key does not exit
[ok]: Redis should actively expire keys incrementally
[ok]: Redis should lazy expire keys
[ok]: EXPIRE should not resurrect keys (issue #1026)
[ok]: 5 keys in, 5 keys out
[ok]: EXPIRE with empty string as TTL should report an error
[ok]: SET - use EX/PX option, TTL should not be reseted after loadaof
[18/50 done]: unit/expire (15 seconds)
Testing unit/other
[ok]: SAVE - make sure there are all the types as values
[ok]: FUZZ stresser with data model binary
[ok]: FUZZ stresser with data model alpha
[ok]: FUZZ stresser with data model compr
[ok]: BGSAVE
[ok]: SELECT an out of range DB
[ok]: EXPIRES after a reload (snapshot + append only file rewrite)
[ok]: EXPIRES after AOF reload (without rewrite)
[ok]: PIPELINING stresser (also a regression for the old epoll bug)
[ok]: APPEND basics
[ok]: APPEND basics, integer encoded values
[ok]: APPEND fuzzing
[ok]: FLUSHDB
[ok]: Perform a final SAVE to leave a clean DB on disk
[19/50 done]: unit/other (9 seconds)
Testing unit/multi
[ok]: MUTLI / EXEC basics
[ok]: DISCARD
[ok]: Nested MULTI are not allowed
[ok]: MULTI where commands alter argc/argv
[ok]: WATCH inside MULTI is not allowed
[ok]: EXEC fails if there are errors while queueing commands #1
[ok]: EXEC fails if there are errors while queueing commands #2
[ok]: If EXEC aborts, the client MULTI state is cleared
[ok]: EXEC works on WATCHed key not modified
[ok]: EXEC fail on WATCHed key modified (1 key of 1 watched)
[ok]: EXEC fail on WATCHed key modified (1 key of 5 watched)
[ok]: EXEC fail on WATCHed key modified by SORT with STORE even if the result is empty
[ok]: After successful EXEC key is no longer watched
[ok]: After failed EXEC key is no longer watched
[ok]: It is possible to UNWATCH
[ok]: UNWATCH when there is nothing watched works as expected
[ok]: FLUSHALL is able to touch the watched keys
[ok]: FLUSHALL does not touch non affected keys
[ok]: FLUSHDB is able to touch the watched keys
[ok]: FLUSHDB does not touch non affected keys
[ok]: WATCH is able to remember the DB a key belongs to
[ok]: WATCH will consider touched keys target of EXPIRE
[ok]: WATCH will not consider touched expired keys
[ok]: DISCARD should clear the WATCH dirty flag on the client
[ok]: DISCARD should UNWATCH all the keys
[ok]: MULTI / EXEC is propagated correctly (single write command)
[ok]: MULTI / EXEC is propagated correctly (empty transaction)
[ok]: MULTI / EXEC is propagated correctly (read-only commands)
[ok]: MULTI / EXEC is propagated correctly (write command, no effect)
[20/50 done]: unit/multi (2 seconds)
Testing unit/quit
[ok]: QUIT returns OK
[ok]: Pipelined commands after QUIT must not be executed
[ok]: Pipelined commands after QUIT that exceed read buffer size
[21/50 done]: unit/quit (0 seconds)
Testing unit/aofrw
[ok]: AOF rewrite during write load: RDB preamble=yes
[ok]: AOF rewrite during write load: RDB preamble=no
[ok]: Turning off AOF kills the background writing child if any
[ok]: AOF rewrite of list with quicklist encoding, string data
[ok]: AOF rewrite of list with quicklist encoding, int data
[ok]: AOF rewrite of set with intset encoding, string data
[ok]: AOF rewrite of set with hashtable encoding, string data
[ok]: AOF rewrite of set with intset encoding, int data
[ok]: AOF rewrite of set with hashtable encoding, int data
[ok]: AOF rewrite of hash with ziplist encoding, string data
[ok]: AOF rewrite of hash with hashtable encoding, string data
[ok]: AOF rewrite of hash with ziplist encoding, int data
[ok]: AOF rewrite of hash with hashtable encoding, int data
[ok]: AOF rewrite of zset with ziplist encoding, string data
[ok]: AOF rewrite of zset with skiplist encoding, string data
[ok]: AOF rewrite of zset with ziplist encoding, int data
[ok]: AOF rewrite of zset with skiplist encoding, int data
[ok]: BGREWRITEAOF is delayed if BGSAVE is in progress
[ok]: BGREWRITEAOF is refused if already in progress
[22/50 done]: unit/aofrw (97 seconds)
Testing integration/block-repl
[ok]: First server should have role slave after SLAVEOF
[ok]: Test replication with blocking lists and sorted sets operations
[23/50 done]: integration/block-repl (27 seconds)
Testing integration/replication
[ok]: Slave enters handshake
[ok]: Slave is able to detect timeout during handshake
[ok]: Set instance A as slave of B
[ok]: BRPOPLPUSH replication, when blocking against empty list
[ok]: BRPOPLPUSH replication, list exists
[ok]: BLPOP followed by role change, issue #2473
[ok]: Second server should have role master at first
[ok]: SLAVEOF should start with link status "down"
[ok]: The role should immediately be changed to "replica"
[ok]: Sync should have transferred keys from master
[ok]: The link status should be up
[ok]: SET on the master should immediately propagate
[ok]: FLUSHALL should replicate
[ok]: ROLE in master reports master with a slave
[ok]: ROLE in slave reports slave in connected state
[ok]: Connect multiple replicas at the same time (issue #141), diskless=no
[ok]: Connect multiple replicas at the same time (issue #141), diskless=yes
[ok]: Master stream is correctly processed while the replica has a script in -BUSY state
[24/50 done]: integration/replication (148 seconds)
Testing integration/replication-2
[ok]: First server should have role slave after SLAVEOF
[ok]: If min-slaves-to-write is honored, write is accepted
[ok]: No write if min-slaves-to-write is < attached slaves
[ok]: If min-slaves-to-write is honored, write is accepted (again)
[ok]: No write if min-slaves-max-lag is > of the slave lag
[ok]: min-slaves-to-write is ignored by slaves
[ok]: MASTER and SLAVE dataset should be identical after complex ops
[25/50 done]: integration/replication-2 (16 seconds)
Testing integration/replication-3
[ok]: First server should have role slave after SLAVEOF
[ok]: MASTER and SLAVE consistency with expire
[ok]: Slave is able to evict keys created in writable slaves
[ok]: First server should have role slave after SLAVEOF
[ok]: MASTER and SLAVE consistency with EVALSHA replication
[ok]: SLAVE can reload "lua" AUX RDB fields of duplicated scripts
[26/50 done]: integration/replication-3 (32 seconds)
Testing integration/replication-4
[ok]: First server should have role slave after SLAVEOF
[ok]: Test replication with parallel clients writing in differnet DBs
[ok]: First server should have role slave after SLAVEOF
[ok]: With min-slaves-to-write (1,3): master should be writable
[ok]: With min-slaves-to-write (2,3): master should not be writable
[ok]: With min-slaves-to-write: master not writable with lagged slave
[ok]: First server should have role slave after SLAVEOF
[ok]: Replication: commands with many arguments (issue #1221)
[ok]: Replication of SPOP command -- alsoPropagate() API
[27/50 done]: integration/replication-4 (34 seconds)
Testing integration/replication-psync
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no reconnection, just sync (diskless: no, reconnect: 0)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok psync (diskless: no, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no backlog (diskless: no, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok after delay (diskless: no, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: backlog expired (diskless: no, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no reconnection, just sync (diskless: yes, reconnect: 0)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no backlog (diskless: yes, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok after delay (diskless: yes, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: backlog expired (diskless: yes, reconnect: 1)
[28/50 done]: integration/replication-psync (100 seconds)
Testing integration/aof
[ok]: Unfinished MULTI: Server should start if load-truncated is yes
[ok]: Short read: Server should start if load-truncated is yes
[ok]: Truncated AOF loaded: we expect foo to be equal to 5
[ok]: Append a new command after loading an incomplete AOF
[ok]: Short read + command: Server should start
[ok]: Truncated AOF loaded: we expect foo to be equal to 6 now
[ok]: Bad format: Server should have logged an error
[ok]: Unfinished MULTI: Server should have logged an error
[ok]: Short read: Server should have logged an error
[ok]: Short read: Utility should confirm the AOF is not valid
[ok]: Short read: Utility should be able to fix the AOF
[ok]: Fixed AOF: Server should have been started
[ok]: Fixed AOF: Keyspace should contain values that were parseable
[ok]: AOF+SPOP: Server should have been started
[ok]: AOF+SPOP: Set should have 1 member
[ok]: AOF+SPOP: Server should have been started
[ok]: AOF+SPOP: Set should have 1 member
[ok]: AOF+EXPIRE: Server should have been started
[ok]: AOF+EXPIRE: List should be empty
[ok]: Redis should not try to convert DEL into EXPIREAT for EXPIRE -1
[29/50 done]: integration/aof (3 seconds)
Testing integration/rdb
[ok]: RDB encoding loading test
[ok]: Server started empty with non-existing RDB file
[ok]: Server started empty with empty RDB file
[ok]: Test RDB stream encoding
[ok]: Server should not start if RDB is corrupted
[30/50 done]: integration/rdb (2 seconds)
Testing integration/convert-zipmap-hash-on-load
[ok]: RDB load zipmap hash: converts to ziplist
[ok]: RDB load zipmap hash: converts to hash table when hash-max-ziplist-entries is exceeded
[ok]: RDB load zipmap hash: converts to hash table when hash-max-ziplist-value is exceeded
[31/50 done]: integration/convert-zipmap-hash-on-load (0 seconds)
Testing integration/logging
[ok]: Server is able to generate a stack trace on selected systems
[32/50 done]: integration/logging (1 seconds)
Testing integration/psync2
[ok]: PSYNC2: --- CYCLE 1 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #1 as master
[ok]: PSYNC2: Set #3 to replicate from #1
[ok]: PSYNC2: Set #0 to replicate from #3
[ok]: PSYNC2: Set #4 to replicate from #1
[ok]: PSYNC2: Set #2 to replicate from #0
[ok]: PSYNC2: cluster is consistent after failover
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 39955)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 2 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #2 as master
[ok]: PSYNC2: Set #1 to replicate from #2
[ok]: PSYNC2: Set #4 to replicate from #1
[ok]: PSYNC2: Set #0 to replicate from #2
[ok]: PSYNC2: Set #3 to replicate from #0
[ok]: PSYNC2: cluster is consistent after failover
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 82245)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 3 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #0 as master
[ok]: PSYNC2: Set #2 to replicate from #0
[ok]: PSYNC2: Set #3 to replicate from #2
[ok]: PSYNC2: Set #1 to replicate from #2
[ok]: PSYNC2: Set #4 to replicate from #3
[ok]: PSYNC2: cluster is consistent after failover
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 128770)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: Bring the master back again for next test
[ok]: PSYNC2: Partial resync after restart using RDB aux fields
[ok]: PSYNC2: Replica RDB restart with EVALSHA in backlog issue #4483
[33/50 done]: integration/psync2 (28 seconds)
[1;37;49mTesting integration/psync2-reg[0m
[[0;32;49mok[0m]: PSYNC2 #3899 regression: setup
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill first replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: kill chained replica
[[0;32;49mok[0m]: PSYNC2 #3899 regression: verify consistency
[34/50 [0;33;49mdone[0m]: integration/psync2-reg (23 seconds)
[1;37;49mTesting unit/pubsub[0m
[[0;32;49mok[0m]: Pub/Sub PING
[[0;32;49mok[0m]: PUBLISH/SUBSCRIBE basics
[[0;32;49mok[0m]: PUBLISH/SUBSCRIBE with two clients
[[0;32;49mok[0m]: PUBLISH/SUBSCRIBE after UNSUBSCRIBE without arguments
[[0;32;49mok[0m]: SUBSCRIBE to one channel more than once
[[0;32;49mok[0m]: UNSUBSCRIBE from non-subscribed channels
[[0;32;49mok[0m]: PUBLISH/PSUBSCRIBE basics
[[0;32;49mok[0m]: PUBLISH/PSUBSCRIBE with two clients
[[0;32;49mok[0m]: PUBLISH/PSUBSCRIBE after PUNSUBSCRIBE without arguments
[[0;32;49mok[0m]: PUNSUBSCRIBE from non-subscribed channels
[[0;32;49mok[0m]: NUMSUB returns numbers, not strings (#1561)
[[0;32;49mok[0m]: Mix SUBSCRIBE and PSUBSCRIBE
[[0;32;49mok[0m]: PUNSUBSCRIBE and UNSUBSCRIBE should always reply
[[0;32;49mok[0m]: Keyspace notifications: we receive keyspace notifications
[[0;32;49mok[0m]: Keyspace notifications: we receive keyevent notifications
[[0;32;49mok[0m]: Keyspace notifications: we can receive both kind of events
[[0;32;49mok[0m]: Keyspace notifications: we are able to mask events
[[0;32;49mok[0m]: Keyspace notifications: general events test
[[0;32;49mok[0m]: Keyspace notifications: list events test
[[0;32;49mok[0m]: Keyspace notifications: set events test
[[0;32;49mok[0m]: Keyspace notifications: zset events test
[[0;32;49mok[0m]: Keyspace notifications: hash events test
[[0;32;49mok[0m]: Keyspace notifications: expired events (triggered expire)
[[0;32;49mok[0m]: Keyspace notifications: expired events (background expire)
[[0;32;49mok[0m]: Keyspace notifications: evicted events
[[0;32;49mok[0m]: Keyspace notifications: test CONFIG GET/SET of event flags
[35/50 [0;33;49mdone[0m]: unit/pubsub (0 seconds)
[1;37;49mTesting unit/slowlog[0m
[[0;32;49mok[0m]: SLOWLOG - check that it starts with an empty log
[[0;32;49mok[0m]: SLOWLOG - only logs commands taking more time than specified
[[0;32;49mok[0m]: SLOWLOG - max entries is correctly handled
[[0;32;49mok[0m]: SLOWLOG - GET optional argument to limit output len works
[[0;32;49mok[0m]: SLOWLOG - RESET subcommand works
[[0;32;49mok[0m]: SLOWLOG - logged entry sanity check
[[0;32;49mok[0m]: SLOWLOG - commands with too many arguments are trimmed
[[0;32;49mok[0m]: SLOWLOG - too long arguments are trimmed
[[0;32;49mok[0m]: SLOWLOG - EXEC is not logged, just executed commands
[[0;32;49mok[0m]: SLOWLOG - can clean older entires
[[0;32;49mok[0m]: SLOWLOG - can be disabled
[36/50 [0;33;49mdone[0m]: unit/slowlog (2 seconds)
[1;37;49mTesting unit/scripting[0m
[[0;32;49mok[0m]: EVAL - Does Lua interpreter replies to our requests?
[[0;32;49mok[0m]: EVAL - Lua integer -> Redis protocol type conversion
[[0;32;49mok[0m]: EVAL - Lua string -> Redis protocol type conversion
[[0;32;49mok[0m]: EVAL - Lua true boolean -> Redis protocol type conversion
[[0;32;49mok[0m]: EVAL - Lua false boolean -> Redis protocol type conversion
[[0;32;49mok[0m]: EVAL - Lua status code reply -> Redis protocol type conversion
[[0;32;49mok[0m]: EVAL - Lua error reply -> Redis protocol type conversion
[[0;32;49mok[0m]: EVAL - Lua table -> Redis protocol type conversion
[[0;32;49mok[0m]: EVAL - Are the KEYS and ARGV arrays populated correctly?
[[0;32;49mok[0m]: EVAL - is Lua able to call Redis API?
[[0;32;49mok[0m]: EVALSHA - Can we call a SHA1 if already defined?
[[0;32;49mok[0m]: EVALSHA - Can we call a SHA1 in uppercase?
[[0;32;49mok[0m]: EVALSHA - Do we get an error on invalid SHA1?
[[0;32;49mok[0m]: EVALSHA - Do we get an error on non defined SHA1?
[[0;32;49mok[0m]: EVAL - Redis integer -> Lua type conversion
[[0;32;49mok[0m]: EVAL - Redis bulk -> Lua type conversion
[[0;32;49mok[0m]: EVAL - Redis multi bulk -> Lua type conversion
[[0;32;49mok[0m]: EVAL - Redis status reply -> Lua type conversion
[[0;32;49mok[0m]: EVAL - Redis error reply -> Lua type conversion
[[0;32;49mok[0m]: EVAL - Redis nil bulk reply -> Lua type conversion
[[0;32;49mok[0m]: EVAL - Is the Lua client using the currently selected DB?
[[0;32;49mok[0m]: EVAL - SELECT inside Lua should not affect the caller
[[0;32;49mok[0m]: EVAL - Scripts can't run certain commands
[[0;32;49mok[0m]: EVAL - Scripts can't run certain commands
[[0;32;49mok[0m]: EVAL - No arguments to redis.call/pcall is considered an error
[[0;32;49mok[0m]: EVAL - redis.call variant raises a Lua error on Redis cmd error (1)
[[0;32;49mok[0m]: EVAL - redis.call variant raises a Lua error on Redis cmd error (1)
[[0;32;49mok[0m]: EVAL - redis.call variant raises a Lua error on Redis cmd error (1)
[[0;32;49mok[0m]: EVAL - JSON numeric decoding
[[0;32;49mok[0m]: EVAL - JSON string decoding
[[0;32;49mok[0m]: EVAL - cmsgpack can pack double?
[[0;32;49mok[0m]: EVAL - cmsgpack can pack negative int64?
[[0;32;49mok[0m]: EVAL - cmsgpack can pack and unpack circular references?
[[0;32;49mok[0m]: EVAL - Numerical sanity check from bitop
[[0;32;49mok[0m]: EVAL - Verify minimal bitop functionality
[[0;32;49mok[0m]: EVAL - Able to parse trailing comments
[[0;32;49mok[0m]: SCRIPTING FLUSH - is able to clear the scripts cache?
[[0;32;49mok[0m]: SCRIPT EXISTS - can detect already defined scripts?
[[0;32;49mok[0m]: SCRIPT LOAD - is able to register scripts in the scripting cache
[[0;32;49mok[0m]: In the context of Lua the output of random commands gets ordered
[[0;32;49mok[0m]: SORT is normally not alpha re-ordered for the scripting engine
[[0;32;49mok[0m]: SORT BY <constant> output gets ordered for scripting
[[0;32;49mok[0m]: SORT BY <constant> with GET gets ordered for scripting
[[0;32;49mok[0m]: redis.sha1hex() implementation
[[0;32;49mok[0m]: Globals protection reading an undeclared global variable
[[0;32;49mok[0m]: Globals protection setting an undeclared global*
[[0;32;49mok[0m]: Test an example script DECR_IF_GT
[[0;32;49mok[0m]: Scripting engine resets PRNG at every script execution
[[0;32;49mok[0m]: Scripting engine PRNG can be seeded correctly
[[0;32;49mok[0m]: EVAL does not leak in the Lua stack
[[0;32;49mok[0m]: EVAL processes writes from AOF in read-only slaves
[[0;32;49mok[0m]: We can call scripts rewriting client->argv from Lua
[[0;32;49mok[0m]: Call Redis command with many args from Lua (issue #1764)
[[0;32;49mok[0m]: Number conversion precision test (issue #1118)
[[0;32;49mok[0m]: String containing number precision test (regression of issue #1118)
[[0;32;49mok[0m]: Verify negative arg count is error instead of crash (issue #1842)
[[0;32;49mok[0m]: Correct handling of reused argv (issue #1939)
[[0;32;49mok[0m]: Functions in the Redis namespace are able to report errors
[[0;32;49mok[0m]: Timedout read-only scripts can be killed by SCRIPT KILL
[[0;32;49mok[0m]: Timedout script link is still usable after Lua returns
[[0;32;49mok[0m]: Timedout scripts that modified data can't be killed by SCRIPT KILL
[[0;32;49mok[0m]: SHUTDOWN NOSAVE can kill a timedout script anyway
[[0;32;49mok[0m]: Before the replica connects we issue two EVAL commands (scripts replication)
[[0;32;49mok[0m]: Connect a replica to the master instance (scripts replication)
[[0;32;49mok[0m]: Now use EVALSHA against the master, with both SHAs (scripts replication)
[[0;32;49mok[0m]: If EVALSHA was replicated as EVAL, 'x' should be '4' (scripts replication)
[[0;32;49mok[0m]: Replication of script multiple pushes to list with BLPOP (scripts replication)
[[0;32;49mok[0m]: EVALSHA replication when first call is readonly (scripts replication)
[[0;32;49mok[0m]: Lua scripts using SELECT are replicated correctly (scripts replication)
[[0;32;49mok[0m]: Before the replica connects we issue two EVAL commands (commmands replication)
[[0;32;49mok[0m]: Connect a replica to the master instance (commmands replication)
[[0;32;49mok[0m]: Now use EVALSHA against the master, with both SHAs (commmands replication)
[[0;32;49mok[0m]: If EVALSHA was replicated as EVAL, 'x' should be '4' (commmands replication)
[[0;32;49mok[0m]: Replication of script multiple pushes to list with BLPOP (commmands replication)
[[0;32;49mok[0m]: EVALSHA replication when first call is readonly (commmands replication)
[[0;32;49mok[0m]: Lua scripts using SELECT are replicated correctly (commmands replication)
[[0;32;49mok[0m]: Connect a replica to the master instance
[[0;32;49mok[0m]: Redis.replicate_commands() must be issued before any write
[[0;32;49mok[0m]: Redis.replicate_commands() must be issued before any write (2)
[[0;32;49mok[0m]: Redis.set_repl() must be issued after replicate_commands()
[[0;32;49mok[0m]: Redis.set_repl() don't accept invalid values
[[0;32;49mok[0m]: Test selective replication of certain Redis commands from Lua
[[0;32;49mok[0m]: PRNG is seeded randomly for command replication
[[0;32;49mok[0m]: Using side effects is not a problem with command replication
[37/50 [0;33;49mdone[0m]: unit/scripting (6 seconds)
[1;37;49mTesting unit/maxmemory[0m
[[0;32;49mok[0m]: Without maxmemory small integers are shared
[[0;32;49mok[0m]: With maxmemory and non-LRU policy integers are still shared
[[0;32;49mok[0m]: With maxmemory and LRU policy integers are not shared
[[0;32;49mok[0m]: maxmemory - is the memory limit honoured? (policy allkeys-random)
[[0;32;49mok[0m]: maxmemory - is the memory limit honoured? (policy allkeys-lru)
[[0;32;49mok[0m]: maxmemory - is the memory limit honoured? (policy allkeys-lfu)
[[0;32;49mok[0m]: maxmemory - is the memory limit honoured? (policy volatile-lru)
[[0;32;49mok[0m]: maxmemory - is the memory limit honoured? (policy volatile-lfu)
[[0;32;49mok[0m]: maxmemory - is the memory limit honoured? (policy volatile-random)
[[0;32;49mok[0m]: maxmemory - is the memory limit honoured? (policy volatile-ttl)
[[0;32;49mok[0m]: maxmemory - only allkeys-* should remove non-volatile keys (allkeys-random)
[[0;32;49mok[0m]: maxmemory - only allkeys-* should remove non-volatile keys (allkeys-lru)
[[0;32;49mok[0m]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-lru)
[[0;32;49mok[0m]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-random)
[[0;32;49mok[0m]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-ttl)
[[0;32;49mok[0m]: maxmemory - policy volatile-lru should only remove volatile keys.
[[0;32;49mok[0m]: maxmemory - policy volatile-lfu should only remove volatile keys.
[[0;32;49mok[0m]: maxmemory - policy volatile-random should only remove volatile keys.
[[0;32;49mok[0m]: maxmemory - policy volatile-ttl should only remove volatile keys.
[[0;32;49mok[0m]: slave buffer are counted correctly
[[0;32;49mok[0m]: replica buffer don't induce eviction
[38/50 [0;33;49mdone[0m]: unit/maxmemory (43 seconds)
[1;37;49mTesting unit/introspection[0m
[[0;32;49mok[0m]: CLIENT LIST
[[0;32;49mok[0m]: MONITOR can log executed commands
[[0;32;49mok[0m]: MONITOR can log commands issued by the scripting engine
[[0;32;49mok[0m]: CLIENT GETNAME should return NIL if name is not assigned
[[0;32;49mok[0m]: CLIENT LIST shows empty fields for unassigned names
[[0;32;49mok[0m]: CLIENT SETNAME does not accept spaces
[[0;32;49mok[0m]: CLIENT SETNAME can assign a name to this connection
[[0;32;49mok[0m]: CLIENT SETNAME can change the name of an existing connection
[[0;32;49mok[0m]: After CLIENT SETNAME, connection can still be closed
[39/50 [0;33;49mdone[0m]: unit/introspection (0 seconds)
[1;37;49mTesting unit/introspection-2[0m
[[0;32;49mok[0m]: TTL and TYPYE do not alter the last access time of a key
[[0;32;49mok[0m]: TOUCH alters the last access time of a key
[[0;32;49mok[0m]: TOUCH returns the number of existing keys specified
[[0;32;49mok[0m]: command stats for GEOADD
[[0;32;49mok[0m]: command stats for EXPIRE
[[0;32;49mok[0m]: command stats for BRPOP
[[0;32;49mok[0m]: command stats for MULTI
[[0;32;49mok[0m]: command stats for scripts
[40/50 [0;33;49mdone[0m]: unit/introspection-2 (7 seconds)
[1;37;49mTesting unit/limits[0m
[[0;32;49mok[0m]: Check if maxclients works refusing connections
[41/50 [0;33;49mdone[0m]: unit/limits (1 seconds)
[1;37;49mTesting unit/obuf-limits[0m
[[0;32;49mok[0m]: Client output buffer hard limit is enforced
[[0;32;49mok[0m]: Client output buffer soft limit is not enforced if time is not overreached
[[0;32;49mok[0m]: Client output buffer soft limit is enforced if time is overreached
[42/50 [0;33;49mdone[0m]: unit/obuf-limits (168 seconds)
[1;37;49mTesting unit/bitops[0m
[[0;32;49mok[0m]: BITCOUNT returns 0 against non existing key
[[0;32;49mok[0m]: BITCOUNT returns 0 with out of range indexes
[[0;32;49mok[0m]: BITCOUNT returns 0 with negative indexes where start > end
[[0;32;49mok[0m]: BITCOUNT against test vector #1
[[0;32;49mok[0m]: BITCOUNT against test vector #2
[[0;32;49mok[0m]: BITCOUNT against test vector #3
[[0;32;49mok[0m]: BITCOUNT against test vector #4
[[0;32;49mok[0m]: BITCOUNT against test vector #5
[[0;32;49mok[0m]: BITCOUNT fuzzing without start/end
[[0;32;49mok[0m]: BITCOUNT fuzzing with start/end
[[0;32;49mok[0m]: BITCOUNT with start, end
[[0;32;49mok[0m]: BITCOUNT syntax error #1
[[0;32;49mok[0m]: BITCOUNT regression test for github issue #582
[[0;32;49mok[0m]: BITCOUNT misaligned prefix
[[0;32;49mok[0m]: BITCOUNT misaligned prefix + full words + remainder
[[0;32;49mok[0m]: BITOP NOT (empty string)
[[0;32;49mok[0m]: BITOP NOT (known string)
[[0;32;49mok[0m]: BITOP where dest and target are the same key
[[0;32;49mok[0m]: BITOP AND|OR|XOR don't change the string with single input key
[[0;32;49mok[0m]: BITOP missing key is considered a stream of zero
[[0;32;49mok[0m]: BITOP shorter keys are zero-padded to the key with max length
[[0;32;49mok[0m]: BITOP and fuzzing
[[0;32;49mok[0m]: BITOP or fuzzing
[[0;32;49mok[0m]: BITOP xor fuzzing
[[0;32;49mok[0m]: BITOP NOT fuzzing
[[0;32;49mok[0m]: BITOP with integer encoded source objects
[[0;32;49mok[0m]: BITOP with non string source key
[[0;32;49mok[0m]: BITOP with empty string after non empty string (issue #529)
[[0;32;49mok[0m]: BITPOS bit=0 with empty key returns 0
[[0;32;49mok[0m]: BITPOS bit=1 with empty key returns -1
[[0;32;49mok[0m]: BITPOS bit=0 with string less than 1 word works
[[0;32;49mok[0m]: BITPOS bit=1 with string less than 1 word works
[[0;32;49mok[0m]: BITPOS bit=0 starting at unaligned address
[[0;32;49mok[0m]: BITPOS bit=1 starting at unaligned address
[[0;32;49mok[0m]: BITPOS bit=0 unaligned+full word+reminder
[[0;32;49mok[0m]: BITPOS bit=1 unaligned+full word+reminder
[[0;32;49mok[0m]: BITPOS bit=1 returns -1 if string is all 0 bits
[[0;32;49mok[0m]: BITPOS bit=0 works with intervals
[[0;32;49mok[0m]: BITPOS bit=1 works with intervals
[[0;32;49mok[0m]: BITPOS bit=0 changes behavior if end is given
[[0;32;49mok[0m]: BITPOS bit=1 fuzzy testing using SETBIT
[[0;32;49mok[0m]: BITPOS bit=0 fuzzy testing using SETBIT
[43/50 [0;33;49mdone[0m]: unit/bitops (4 seconds)
[1;37;49mTesting unit/bitfield[0m
[[0;32;49mok[0m]: BITFIELD signed SET and GET basics
[[0;32;49mok[0m]: BITFIELD unsigned SET and GET basics
[[0;32;49mok[0m]: BITFIELD #<idx> form
[[0;32;49mok[0m]: BITFIELD basic INCRBY form
[[0;32;49mok[0m]: BITFIELD chaining of multiple commands
[[0;32;49mok[0m]: BITFIELD unsigned overflow wrap
[[0;32;49mok[0m]: BITFIELD unsigned overflow sat
[[0;32;49mok[0m]: BITFIELD signed overflow wrap
[[0;32;49mok[0m]: BITFIELD signed overflow sat
[[0;32;49mok[0m]: BITFIELD overflow detection fuzzing
[[0;32;49mok[0m]: BITFIELD overflow wrap fuzzing
[[0;32;49mok[0m]: BITFIELD regression for #3221
[[0;32;49mok[0m]: BITFIELD regression for #3564
[44/50 [0;33;49mdone[0m]: unit/bitfield (1 seconds)
[1;37;49mTesting unit/geo[0m
[[0;32;49mok[0m]: GEOADD create
[[0;32;49mok[0m]: GEOADD update
[[0;32;49mok[0m]: GEOADD invalid coordinates
[[0;32;49mok[0m]: GEOADD multi add
[[0;32;49mok[0m]: Check geoset values
[[0;32;49mok[0m]: GEORADIUS simple (sorted)
[[0;32;49mok[0m]: GEORADIUS withdist (sorted)
[[0;32;49mok[0m]: GEORADIUS with COUNT
[[0;32;49mok[0m]: GEORADIUS with COUNT but missing integer argument
[[0;32;49mok[0m]: GEORADIUS with COUNT DESC
[[0;32;49mok[0m]: GEORADIUS HUGE, issue #2767
[[0;32;49mok[0m]: GEORADIUSBYMEMBER simple (sorted)
[[0;32;49mok[0m]: GEORADIUSBYMEMBER withdist (sorted)
[[0;32;49mok[0m]: GEOHASH is able to return geohash strings
[[0;32;49mok[0m]: GEOPOS simple
[[0;32;49mok[0m]: GEOPOS missing element
[[0;32;49mok[0m]: GEODIST simple & unit
[[0;32;49mok[0m]: GEODIST missing elements
[[0;32;49mok[0m]: GEORADIUS STORE option: syntax error
[[0;32;49mok[0m]: GEORANGE STORE option: incompatible options
[[0;32;49mok[0m]: GEORANGE STORE option: plain usage
[[0;32;49mok[0m]: GEORANGE STOREDIST option: plain usage
[[0;32;49mok[0m]: GEORANGE STOREDIST option: COUNT ASC and DESC
[[0;32;49mok[0m]: GEOADD + GEORANGE randomized test
[45/50 [0;33;49mdone[0m]: unit/geo (21 seconds)
[1;37;49mTesting unit/memefficiency[0m
[[0;32;49mok[0m]: Memory efficiency with values in range 32
[[0;32;49mok[0m]: Memory efficiency with values in range 64
[[0;32;49mok[0m]: Memory efficiency with values in range 128
[[0;32;49mok[0m]: Memory efficiency with values in range 1024
[[0;32;49mok[0m]: Memory efficiency with values in range 16384
# Memory
used_memory:104857512
used_memory_human:100.00M
used_memory_rss:167510016
used_memory_rss_human:159.75M
used_memory_peak:160476576
used_memory_peak_human:153.04M
used_memory_peak_perc:65.34%
used_memory_overhead:25811926
used_memory_startup:824224
used_memory_dataset:79045586
used_memory_dataset_perc:75.98%
allocator_allocated:104998960
allocator_active:133365760
allocator_resident:166985728
total_system_memory:10278404096
total_system_memory_human:9.57G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:104857600
maxmemory_human:100.00M
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.27
allocator_frag_bytes:28366800
allocator_rss_ratio:1.25
allocator_rss_bytes:33619968
rss_overhead_ratio:1.00
rss_overhead_bytes:524288
mem_fragmentation_ratio:1.60
mem_fragmentation_bytes:62693528
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:49694
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:67
lazyfree_pending_objects:0
___ Begin jemalloc statistics ___
Version: "5.1.0-0-g0"
Build-time option settings
config.cache_oblivious: true
config.debug: false
config.fill: true
config.lazy_lock: false
config.malloc_conf: ""
config.prof: false
config.prof_libgcc: false
config.prof_libunwind: false
config.stats: true
config.utrace: false
config.xmalloc: false
Run-time option settings
opt.abort: false
opt.abort_conf: false
opt.retain: true
opt.dss: "secondary"
opt.narenas: 128
opt.percpu_arena: "disabled"
opt.metadata_thp: "disabled"
opt.background_thread: false (background_thread: false)
opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
opt.muzzy_decay_ms: 10000 (arenas.muzzy_decay_ms: 10000)
opt.junk: "false"
opt.zero: false
opt.tcache: true
opt.lg_tcache_max: 15
opt.thp: "default"
opt.stats_print: false
opt.stats_print_opts: ""
Arenas: 128
Quantum size: 8
Page size: 65536
Maximum thread-cached size class: 229376
Number of bin size classes: 55
Number of thread-cache bin size classes: 55
Number of large size classes: 180
Allocated: 105254960, active: 133693440, metadata: 5261600 (n_thp 0), resident: 166985728, mapped: 178520064, retained: 56360960
Background threads: 0, num_runs: 0, run_interval: 0 ns
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
background_thread 380 0 0 1 0 0 0
ctl 754 0 0 1 0 0 0
prof 0 0 0 0 0 0 0
arenas[0]:
assigned threads: 1
uptime: 22570024827
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 428 10 46 324
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 96538672 3799144 2132281 7311748
large: 8716288 2 0 2
total: 105254960 3799146 2132281 7311750
active: 133693440
mapped: 178520064
retained: 56360960
base: 5204248
internal: 57352
metadata_thp: 0
tcache_bytes: 340088
resident: 166985728
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 190 0 0 1 0 0 0
extent_avail 1242 0 0 3 0 0 0
extents_dirty 1586 0 0 3 0 0 0
extents_muzzy 1031 0 0 3 0 0 0
extents_retained 1917 0 0 3 0 0 0
decay_dirty 3080 0 0 1 0 0 0
decay_muzzy 3070 0 0 1 0 0 0
base 1367 0 0 3 0 0 0
tcache_list 191 0 0 1 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2208 319 43 445 276 1 8192 1 0.033 8 9 1 0 254 0 0 83 0 0 0
16 1 13405920 1908683 1070813 3264459 837870 219 4096 1 0.934 16618 7219 345 293 4947917 0 0 786235 0 0 0
24 2 9946560 948513 534073 948769 414440 74 8192 3 0.683 8158 3622 86 72 2471570 0 0 390949 0 0 0
32 3 512 112 96 1904246 16 1 2048 1 0.007 3 7 1 0 201 0 0 1 0 0 0
40 4 400 109 99 27 10 1 8192 5 0.001 3 7 1 0 201 0 0 1 0 0 0
48 5 2832 162 103 57 59 1 4096 3 0.014 3 4 1 0 198 0 0 1 0 0 0
56 6 1008 108 90 254614 18 1 8192 7 0.002 4 7 1 0 202 0 0 1 0 0 0
64 7 192 100 97 6 3 1 1024 1 0.002 1 4 1 0 196 0 0 1 0 0 0
80 8 240 100 97 4 3 1 4096 5 0.000 1 4 1 0 196 0 0 1 0 0 0
96 9 9408 200 102 100 98 1 2048 3 0.047 3 4 1 0 198 0 0 1 0 0 0
112 10 112 100 99 3 1 1 4096 7 0.000 1 4 1 0 196 0 0 1 0 0 0
128 11 0 100 100 3 0 0 512 1 1 1 4 1 0 197 0 0 1 0 0 0
160 12 60006400 871006 495966 870857 375040 228 2048 5 0.803 7012 3517 342 230 2227528 0 0 341833 0 0 0
192 13 1152 106 100 1 6 1 1024 3 0.005 2 4 2 0 199 0 0 1 0 0 0
224 14 0 100 100 1 0 0 2048 7 1 1 4 1 0 197 0 0 1 0 0 0
256 15 0 100 100 4 0 0 256 1 1 1 4 1 0 197 0 0 1 0 0 0
320 16 12443520 68242 29356 68121 38886 47 1024 5 0.807 601 220 52 53 225985 0 0 31003 0 0 0
384 17 384 100 99 1 1 1 512 3 0.001 1 4 1 0 196 0 0 1 0 0 0
448 18 0 100 100 1 0 0 1024 7 1 1 4 1 0 197 0 0 1 0 0 0
512 19 512 100 99 4 1 1 128 1 0.007 1 4 1 0 196 0 0 1 0 0 0
640 20 0 100 100 1 0 0 512 5 1 1 4 1 0 197 0 0 1 0 0 0
768 21 0 0 0 0 0 0 256 3 1 0 0 0 0 190 0 0 1 0 0 0
896 22 0 0 0 0 0 0 512 7 1 0 0 0 0 190 0 0 1 0 0 0
---
1024 23 3072 64 61 4 3 1 64 1 0.046 1 3 1 0 195 0 0 1 0 0 0
1280 24 7680 106 100 1 6 1 256 5 0.023 2 4 2 0 199 0 0 1 0 0 0
1536 25 9216 115 109 4 6 1 128 3 0.046 5 8 2 0 206 0 0 1 0 0 0
1792 26 0 0 0 0 0 0 256 7 1 0 0 0 0 190 0 0 1 0 0 0
---
2048 27 14336 36 29 4 7 1 32 1 0.218 2 3 1 0 196 0 0 1 0 0 0
2560 28 256000 100 0 0 100 1 128 5 0.781 1 0 1 0 192 0 0 1 0 0 0
3072 29 0 0 0 0 0 0 64 3 1 0 0 0 0 190 0 0 1 0 0 0
---
3584 30 21504 106 100 1 6 1 128 7 0.046 2 4 2 0 199 0 0 1 0 0 0
4096 31 0 0 0 0 0 0 16 1 1 0 0 0 0 190 0 0 1 0 0 0
5120 32 0 0 0 0 0 0 64 5 1 0 0 0 0 190 0 0 1 0 0 0
6144 33 0 0 0 0 0 0 32 3 1 0 0 0 0 190 0 0 1 0 0 0
7168 34 0 0 0 0 0 0 64 7 1 0 0 0 0 190 0 0 1 0 0 0
8192 35 0 0 0 0 0 0 8 1 1 0 0 0 0 190 0 0 1 0 0 0
10240 36 0 0 0 0 0 0 32 5 1 0 0 0 0 190 0 0 1 0 0 0
12288 37 0 0 0 0 0 0 16 3 1 0 0 0 0 190 0 0 1 0 0 0
14336 38 0 0 0 0 0 0 32 7 1 0 0 0 0 190 0 0 1 0 0 0
---
16384 39 0 10 10 1 0 0 4 1 1 1 2 3 0 199 0 0 1 0 0 0
20480 40 61440 16 13 4 3 1 16 5 0.187 1 2 1 0 194 0 0 1 0 0 0
24576 41 0 0 0 0 0 0 8 3 1 0 0 0 0 190 0 0 1 0 0 0
28672 42 0 0 0 0 0 0 16 7 1 0 0 0 0 190 0 0 1 0 0 0
32768 43 0 0 0 0 0 0 2 1 1 0 0 0 0 190 0 0 1 0 0 0
---
40960 44 40960 10 9 2 1 1 8 5 0.125 1 2 2 0 196 0 0 1 0 0 0
49152 45 0 0 0 0 0 0 4 3 1 0 0 0 0 190 0 0 1 0 0 0
---
57344 46 57344 1 0 1 1 1 8 7 0.125 0 0 1 0 192 0 0 1 0 0 0
65536 47 0 0 0 0 0 0 1 1 1 0 0 0 0 190 0 0 1 0 0 0
---
81920 48 81920 10 9 1 1 1 4 5 0.250 1 2 3 0 198 0 0 1 0 0 0
98304 49 0 0 0 0 0 0 2 3 1 0 0 0 0 190 0 0 1 0 0 0
114688 50 0 0 0 0 0 0 4 7 1 0 0 0 0 190 0 0 1 0 0 0
131072 51 0 0 0 0 0 0 1 2 1 0 0 0 0 190 0 0 1 0 0 0
---
163840 52 163840 10 9 1 1 1 2 5 0.500 1 2 5 0 202 0 0 1 0 0 0
196608 53 0 0 0 0 0 0 1 3 1 0 0 0 0 190 0 0 1 0 0 0
229376 54 0 0 0 0 0 0 2 7 1 0 0 0 0 190 0 0 1 0 0 0
---
large: size ind allocated nmalloc ndalloc nrequests curlextents
---
327680 56 327680 1 0 1 1
---
8388608 75 8388608 1 0 1 1
---
--- End jemalloc statistics ---
[err]: Active defrag in tests/unit/memefficiency.tcl
defrag didn't stop.
# Memory
used_memory:60252320
used_memory_human:57.46M
used_memory_rss:123863040
used_memory_rss_human:118.12M
used_memory_peak:160476576
used_memory_peak_human:153.04M
used_memory_peak_perc:37.55%
used_memory_overhead:15085544
used_memory_startup:824224
used_memory_dataset:45166776
used_memory_dataset_perc:76.00%
allocator_allocated:60318880
allocator_active:66453504
allocator_resident:122028032
total_system_memory:10278404096
total_system_memory_human:9.57G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.10
allocator_frag_bytes:6134624
allocator_rss_ratio:1.84
allocator_rss_bytes:55574528
rss_overhead_ratio:1.02
rss_overhead_bytes:1835008
mem_fragmentation_ratio:2.06
mem_fragmentation_bytes:63651744
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:66616
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:65
lazyfree_pending_objects:0
___ Begin jemalloc statistics ___
Version: "5.1.0-0-g0"
Build-time option settings
config.cache_oblivious: true
config.debug: false
config.fill: true
config.lazy_lock: false
config.malloc_conf: ""
config.prof: false
config.prof_libgcc: false
config.prof_libunwind: false
config.stats: true
config.utrace: false
config.xmalloc: false
Run-time option settings
opt.abort: false
opt.abort_conf: false
opt.retain: true
opt.dss: "secondary"
opt.narenas: 128
opt.percpu_arena: "disabled"
opt.metadata_thp: "disabled"
opt.background_thread: false (background_thread: false)
opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
opt.muzzy_decay_ms: 10000 (arenas.muzzy_decay_ms: 10000)
opt.junk: "false"
opt.zero: false
opt.tcache: true
opt.lg_tcache_max: 15
opt.thp: "default"
opt.stats_print: false
opt.stats_print_opts: ""
Arenas: 128
Quantum size: 8
Page size: 65536
Maximum thread-cached size class: 229376
Number of bin size classes: 55
Number of thread-cache bin size classes: 55
Number of large size classes: 180
Allocated: 60326560, active: 66453504, metadata: 5261600 (n_thp 0), resident: 122028032, mapped: 133562368, retained: 101318656
Background threads: 0, num_runs: 0, run_interval: 0 ns
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
background_thread 2213 0 0 1 0 0 0
ctl 9784 0 0 1 0 0 0
prof 0 0 0 0 0 0 0
arenas[0]:
assigned threads: 1
uptime: 117400141402
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 768 32 194 1754
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 55804576 6588398 5486841 20565728
large: 4521984 7 5 7
total: 60326560 6588405 5486846 20565735
active: 66453504
mapped: 133562368
retained: 101318656
base: 5204248
internal: 57352
metadata_thp: 0
tcache_bytes: 16880
resident: 122028032
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 1106 0 0 1 0 0 0
extent_avail 3177 0 0 3 0 0 0
extents_dirty 4734 0 0 3 0 0 0
extents_muzzy 2147 0 0 3 0 0 0
extents_retained 3491 0 0 3 0 0 0
decay_dirty 8000 0 0 1 0 0 0
decay_muzzy 7968 0 0 1 0 0 0
base 3199 0 0 3 0 0 0
tcache_list 1107 0 0 1 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2082272 682452 422168 737512 260284 33 8192 1 0.962 5729 2899 63 35 5292332 0 0 342741 0 0 0
16 1 4198176 2585949 2323563 7404066 262386 66 4096 1 0.970 21738 18490 441 390 10099741 0 0 1131829 0 0 0
24 2 7502400 1681985 1369385 3943160 312600 42 8192 3 0.908 13940 10656 109 156 8756245 0 0 734355 0 0 0
32 3 64896 2520 492 6441884 2028 1 2048 1 0.990 133 110 1 0 39521 0 0 153 0 0 0
40 4 300080 9578 2076 59531 7502 1 8192 5 0.915 180 76 1 0 143711 0 0 155 0 0 0
48 5 2448 200 149 2339 51 1 4096 3 0.012 29 33 1 0 1169 0 0 1 0 0 0
56 6 105728 2607 719 350162 1888 1 8192 7 0.230 116 104 1 0 36743 0 0 79 0 0 0
64 7 320 114 109 2039 5 1 1024 1 0.004 8 12 1 0 1146 0 0 39 0 0 0
80 8 40560 1010 503 4518 507 1 4096 5 0.123 89 93 1 0 10865 0 0 41 0 0 0
96 9 191424 2762 768 4218 1994 1 2048 3 0.973 98 100 1 0 37291 0 0 193 0 0 0
112 10 252336 4742 2489 2262 2253 1 4096 7 0.550 99 98 1 0 44092 0 0 117 0 0 0
128 11 384 126 123 17 3 1 512 1 0.005 10 16 2 0 1192 0 0 41 0 0 0
160 12 40000320 1543928 1293926 1543761 250002 123 2048 5 0.992 12149 10159 588 359 7336648 0 0 687995 0 0 0
192 13 192 115 114 168 1 1 1024 3 0.000 9 13 9 0 1145 0 0 1 0 0 0
224 14 0 114 114 8 0 0 2048 7 1 6 11 6 0 1135 0 0 1 0 0 0
256 15 0 116 116 15 0 0 256 1 1 8 13 8 0 1143 0 0 1 0 0 0
320 16 5120 68246 68230 68128 16 1 1024 5 0.015 605 699 52 55 227430 0 0 31003 0 0 0
384 17 384 116 115 9 1 1 512 3 0.001 8 13 1 0 1128 0 0 1 0 0 0
448 18 0 113 113 8 0 0 1024 7 1 7 12 7 0 1139 0 0 1 0 0 0
512 19 512 113 112 15 1 1 128 1 0.007 9 14 1 0 1130 0 0 1 0 0 0
640 20 0 113 113 8 0 0 512 5 1 7 12 7 0 1139 0 0 1 0 0 0
768 21 0 113 113 8 0 0 256 3 1 7 12 7 0 1139 0 0 1 0 0 0
896 22 0 110 110 7 0 0 512 7 1 6 11 6 0 1135 0 0 1 0 0 0
1024 23 3072 81 78 15 3 1 64 1 0.046 9 14 1 0 1130 0 0 1 0 0 0
1280 24 3840 125 122 563 3 1 256 5 0.011 11 15 4 0 1177 0 0 77 0 0 0
1536 25 4608 125 122 189 3 1 128 3 0.023 15 17 3 0 1143 0 0 1 0 0 0
1792 26 0 109 109 6 0 0 256 7 1 5 10 5 0 1131 0 0 1 0 0 0
2048 27 8192 46 42 173 4 1 32 1 0.125 11 14 1 0 1132 0 0 1 0 0 0
2560 28 17920 121 114 7 7 1 128 5 0.054 7 10 3 0 1204 0 0 39 0 0 0
3072 29 0 68 68 3 0 0 64 3 1 2 6 2 0 1118 0 0 1 0 0 0
3584 30 3584 115 114 592 1 1 128 7 0.007 5 9 5 0 1129 0 0 1 0 0 0
4096 31 0 21 21 5 0 0 16 1 1 3 6 2 0 1119 0 0 1 0 0 0
5120 32 0 64 64 1 0 0 64 5 1 1 4 1 0 1113 0 0 1 0 0 0
6144 33 0 36 36 2 0 0 32 3 1 2 5 2 0 1117 0 0 1 0 0 0
7168 34 0 64 64 1 0 0 64 7 1 1 4 1 0 1113 0 0 1 0 0 0
8192 35 0 15 15 5 0 0 8 1 1 4 6 4 0 1124 0 0 1 0 0 0
10240 36 0 32 32 1 0 0 32 5 1 1 3 1 0 1112 0 0 1 0 0 0
12288 37 0 18 18 2 0 0 16 3 1 2 5 2 0 1117 0 0 1 0 0 0
14336 38 0 0 0 0 0 0 32 7 1 0 0 0 0 1106 0 0 1 0 0 0
---
16384 39 0 15 15 5 0 0 4 1 1 4 6 5 0 1126 0 0 1 0 0 0
20480 40 81920 22 18 8 4 1 16 5 0.250 3 5 1 0 1115 0 0 1 0 0 0
24576 41 0 10 10 1 0 0 8 3 1 1 2 2 0 1113 0 0 1 0 0 0
28672 42 0 0 0 0 0 0 16 7 1 0 0 0 0 1106 0 0 1 0 0 0
---
32768 43 0 11 11 4 0 0 2 1 1 2 5 6 0 1125 0 0 1 0 0 0
40960 44 40960 13 12 291 1 1 8 5 0.125 3 5 2 1 1117 0 0 1 0 0 0
49152 45 0 0 0 0 0 0 4 3 1 0 0 0 0 1106 0 0 1 0 0 0
---
57344 46 57344 1 0 1 1 1 8 7 0.125 0 0 1 0 1108 0 0 1 0 0 0
65536 47 196608 12 9 4 3 3 1 1 1 2 4 12 0 1190 0 0 115 0 0 0
81920 48 81920 10 9 1 1 1 4 5 0.250 1 2 3 0 1114 0 0 1 0 0 0
98304 49 0 0 0 0 0 0 2 3 1 0 0 0 0 1106 0 0 1 0 0 0
114688 50 0 0 0 0 0 0 4 7 1 0 0 0 0 1106 0 0 1 0 0 0
---
131072 51 393216 12 9 4 3 3 1 2 1 2 4 12 0 1190 0 0 115 0 0 0
163840 52 163840 10 9 1 1 1 2 5 0.500 1 2 5 0 1118 0 0 1 0 0 0
196608 53 0 0 0 0 0 0 1 3 1 0 0 0 0 1106 0 0 1 0 0 0
229376 54 0 0 0 0 0 0 2 7 1 0 0 0 0 1106 0 0 1 0 0 0
---
large: size ind allocated nmalloc ndalloc nrequests curlextents
262144 55 0 1 1 1 0
327680 56 327680 1 0 1 1
---
524288 59 0 1 1 1 0
---
1048576 63 0 1 1 1 0
---
2097152 67 0 1 1 1 0
---
4194304 71 4194304 1 0 1 1
---
8388608 75 0 1 1 1 0
---
--- End jemalloc statistics ---
[err]: Active defrag big keys in tests/unit/memefficiency.tcl
defrag didn't stop.
[46/50 [0;33;49mdone[0m]: unit/memefficiency (122 seconds)
[1;37;49mTesting unit/hyperloglog[0m
[[0;32;49mok[0m]: HyperLogLog self test passes
[[0;32;49mok[0m]: PFADD without arguments creates an HLL value
[[0;32;49mok[0m]: Approximated cardinality after creation is zero
[[0;32;49mok[0m]: PFADD returns 1 when at least 1 reg was modified
[[0;32;49mok[0m]: PFADD returns 0 when no reg was modified
[[0;32;49mok[0m]: PFADD works with empty string (regression)
[[0;32;49mok[0m]: PFCOUNT returns approximated cardinality of set
[[0;32;49mok[0m]: HyperLogLogs are promote from sparse to dense
[[0;32;49mok[0m]: HyperLogLog sparse encoding stress test
[[0;32;49mok[0m]: Corrupted sparse HyperLogLogs are detected: Additionl at tail
[[0;32;49mok[0m]: Corrupted sparse HyperLogLogs are detected: Broken magic
[[0;32;49mok[0m]: Corrupted sparse HyperLogLogs are detected: Invalid encoding
[[0;32;49mok[0m]: Corrupted dense HyperLogLogs are detected: Wrong length
[[0;32;49mok[0m]: Fuzzing dense/sparse encoding: Redis should always detect errors
[[0;32;49mok[0m]: PFADD, PFCOUNT, PFMERGE type checking works
[[0;32;49mok[0m]: PFMERGE results on the cardinality of union of sets
[[0;32;49mok[0m]: PFCOUNT multiple-keys merge returns cardinality of union #1
[[0;32;49mok[0m]: PFCOUNT multiple-keys merge returns cardinality of union #2
[[0;32;49mok[0m]: PFDEBUG GETREG returns the HyperLogLog raw registers
[[0;32;49mok[0m]: PFADD / PFCOUNT cache invalidation works
[47/50 [0;33;49mdone[0m]: unit/hyperloglog (53 seconds)
[1;37;49mTesting unit/lazyfree[0m
[[0;32;49mok[0m]: UNLINK can reclaim memory in background
[[0;32;49mok[0m]: FLUSHDB ASYNC can reclaim memory in background
[48/50 [0;33;49mdone[0m]: unit/lazyfree (1 seconds)
[1;37;49mTesting unit/wait[0m
[[0;32;49mok[0m]: Setup slave
[[0;32;49mok[0m]: WAIT should acknowledge 1 additional copy of the data
[[0;32;49mok[0m]: WAIT should not acknowledge 2 additional copies of the data
[[0;32;49mok[0m]: WAIT should not acknowledge 1 additional copy if slave is blocked
[49/50 [0;33;49mdone[0m]: unit/wait (7 seconds)
[1;37;49mTesting unit/pendingquerybuf[0m
[[0;32;49mok[0m]: pending querybuf: check size of pending_querybuf after set a big value
[50/50 [0;33;49mdone[0m]: unit/pendingquerybuf (7 seconds)
The End
Execution time of different units:
1 seconds - unit/printver
27 seconds - unit/dump
1 seconds - unit/auth
0 seconds - unit/protocol
2 seconds - unit/keyspace
8 seconds - unit/scan
12 seconds - unit/type/string
0 seconds - unit/type/incr
13 seconds - unit/type/list
17 seconds - unit/type/list-2
103 seconds - unit/type/list-3
7 seconds - unit/type/set
13 seconds - unit/type/zset
5 seconds - unit/type/hash
29 seconds - unit/type/stream
3 seconds - unit/type/stream-cgroups
9 seconds - unit/sort
15 seconds - unit/expire
9 seconds - unit/other
2 seconds - unit/multi
0 seconds - unit/quit
97 seconds - unit/aofrw
27 seconds - integration/block-repl
148 seconds - integration/replication
16 seconds - integration/replication-2
32 seconds - integration/replication-3
34 seconds - integration/replication-4
100 seconds - integration/replication-psync
3 seconds - integration/aof
2 seconds - integration/rdb
0 seconds - integration/convert-zipmap-hash-on-load
1 seconds - integration/logging
28 seconds - integration/psync2
23 seconds - integration/psync2-reg
0 seconds - unit/pubsub
2 seconds - unit/slowlog
6 seconds - unit/scripting
43 seconds - unit/maxmemory
0 seconds - unit/introspection
7 seconds - unit/introspection-2
1 seconds - unit/limits
168 seconds - unit/obuf-limits
4 seconds - unit/bitops
1 seconds - unit/bitfield
21 seconds - unit/geo
122 seconds - unit/memefficiency
53 seconds - unit/hyperloglog
1 seconds - unit/lazyfree
7 seconds - unit/wait
7 seconds - unit/pendingquerybuf
!!! WARNING The following tests failed:
*** [err]: Active defrag in tests/unit/memefficiency.tcl
defrag didn't stop.
*** [err]: Active defrag big keys in tests/unit/memefficiency.tcl
defrag didn't stop.
Cleanup: may take some time... OK
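For reference, the two failures above are the active-defrag cases from unit/memefficiency. A minimal way to watch the same behaviour by hand, assuming a redis-server built from this tree is running locally on the default port, is something like:

    redis-cli config set activedefrag yes
    # Poll until active_defrag_running drops back to 0; in the failing runs above it stays non-zero.
    while true; do
        redis-cli info memory | grep -E 'active_defrag_running|mem_fragmentation_ratio'
        sleep 1
    done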
Comment From: oranagra
@moria7757 Is it failing consistently? Any chance you can check Redis 6.0 or 6.2? (This test was improved in those versions.)
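(If it helps to narrow this down, the failing unit can also be re-run in isolation with the test runner's --single option, e.g.:

    ./runtest --single unit/memefficiency --clients 1

which should reproduce the two defrag failures without waiting for the full suite.)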
Comment From: moria7757
> @moria7757 Is it failing consistently? Any chance you can check Redis 6.0 or 6.2? (This test was improved in those versions.)
Yes, it fails consistently; I have run the test many times.
Redis Version: 6.2-rc1
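The steps used to build and test it were, roughly (the /opt/redis-6.2-rc1 path is the one that appears in the log below):

    cd /opt/redis-6.2-rc1
    make
    make test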
make test output:
cd src && make test
make[1]: Entering directory `/opt/redis-6.2-rc1/src'
CC Makefile.dep
make[1]: Leaving directory `/opt/redis-6.2-rc1/src'
make[1]: Entering directory `/opt/redis-6.2-rc1/src'
Cleanup: may take some time... OK
Starting test server at port 21079
[ready]: 10326
Testing unit/printver
[ready]: 10321
Testing unit/dump
[ready]: 10323
Testing unit/auth
[ready]: 10319
Testing unit/protocol
[ready]: 10332
Testing unit/keyspace
[ready]: 10335
Testing unit/scan
[ready]: 10329
Testing unit/type/string
[ready]: 10338
Testing unit/type/incr
[ready]: 10341
Testing unit/type/list
[ready]: 10345
Testing unit/type/list-2
[ready]: 10347
Testing unit/type/list-3
[ready]: 10353
Testing unit/type/set
[ready]: 10350
Testing unit/type/zset
[ready]: 10357
Testing unit/type/hash
[ready]: 10360
Testing unit/type/stream
[ready]: 10362
Testing unit/type/stream-cgroups
[ok]: DEL against a single item
[ok]: INCR against non existing key
[ok]: INCR against key created by incr itself
[ok]: Vararg DEL
[ok]: INCR against key originally set with SET
[ok]: KEYS with pattern
[ok]: INCR over 32bit value
[ok]: KEYS to get all keys
[ok]: DBSIZE
[ok]: INCRBY over 32bit value with over 32bit increment
[ok]: DEL all keys
[ok]: INCR fails against key with spaces (left)
[ok]: INCR fails against key with spaces (right)
[ok]: INCR fails against key with spaces (both)
[ok]: INCR fails against a key holding a list
[ok]: DECRBY over 32bit value with over 32bit increment, negative res
[ok]: INCR uses shared objects in the 0-9999 range
[ok]: INCR can modify objects in-place
[ok]: INCRBYFLOAT against non existing key
[ok]: INCRBYFLOAT against key originally set with SET
[ok]: INCRBYFLOAT over 32bit value
[ok]: INCRBYFLOAT over 32bit value with over 32bit increment
[ok]: INCRBYFLOAT fails against key with spaces (left)
[ok]: INCRBYFLOAT fails against key with spaces (right)
[ok]: INCRBYFLOAT fails against key with spaces (both)
[ok]: INCRBYFLOAT fails against a key holding a list
[ok]: INCRBYFLOAT does not allow NaN or Infinity
[ok]: INCRBYFLOAT decrement
[ok]: string to double with null terminator
[ok]: No negative zero
[ok]: SET and GET an item
[ok]: SET and GET an empty item
[ok]: DUMP / RESTORE are able to serialize / unserialize a simple key
[ok]: RESTORE can set an arbitrary expire to the materialized key
[ok]: RESTORE can set an expire that overflows a 32 bit integer
Testing Redis version 6.1.240 (00000000)
[ok]: RESTORE can set an absolute expire
[ok]: AUTH fails if there is no password configured server side
[ok]: RESTORE with ABSTTL in the past
[ok]: LPOS basic usage
[ok]: RESTORE can set LRU
[ok]: RESTORE can set LFU
[ok]: LPOS RANK (positive and negative rank) option
[ok]: RESTORE returns an error of the key already exists
[ok]: LPOS COUNT option
[ok]: LPOS COUNT + RANK option
[ok]: Explicit regression for a list bug
[ok]: LPOS non existing key
[ok]: LPOS no match
[ok]: RESTORE can overwrite an existing key with REPLACE
[ok]: RESTORE can detect a syntax error for unrecongized options
[ok]: DUMP of non existing key returns nil
[ok]: LPOS MAXLEN
[ok]: LPOS when RANK is greater than matches
[ok]: HSET/HLEN - Small hash creation
[ok]: Is the small hash encoded with a ziplist?
[ok]: XADD can add entries into a stream that XRANGE can fetch
[ok]: LPUSH, RPUSH, LLENGTH, LINDEX, LPOP - ziplist
[ok]: SADD, SCARD, SISMEMBER, SMISMEMBER, SMEMBERS basics - regular set
[ok]: XADD IDs are incremental
[ok]: XADD IDs are incremental when ms is the same as well
[ok]: XADD IDs correctly report an error when overflowing
[ok]: SADD, SCARD, SISMEMBER, SMISMEMBER, SMEMBERS basics - intset
[ok]: LPUSH, RPUSH, LLENGTH, LINDEX, LPOP - regular list
[ok]: R/LPOP against empty list
[ok]: SMISMEMBER against non set
[ok]: XGROUP CREATE: creation and duplicate group name detection
[ok]: SMISMEMBER non existing key
[ok]: Check encoding - ziplist
[ok]: SMISMEMBER requires one or more members
[ok]: ZSET basic ZADD and score update - ziplist
[ok]: XGROUP CREATE: automatic stream creation fails without MKSTREAM
[ok]: Variadic RPUSH/LPUSH
[ok]: SADD against non set
[ok]: ZSET element can't be set to NaN with ZADD - ziplist
[ok]: XGROUP CREATE: automatic stream creation works with MKSTREAM
[ok]: SADD a non-integer against an intset
[ok]: DEL a list
[ok]: ZSET element can't be set to NaN with ZINCRBY
[ok]: SADD an integer larger than 64 bits
[ok]: Handle an empty query
[ok]: ZADD with options syntax error with incomplete pair
[ok]: ZADD XX option without key - ziplist
[ok]: XREADGROUP will return only new elements
[ok]: ZADD XX existing key - ziplist
[ok]: ZADD XX returns the number of elements actually added
[ok]: ZADD XX updates existing elements score
[ok]: ZADD GT updates existing elements when new scores are greater
[ok]: ZADD LT updates existing elements when new scores are lower
[ok]: BLPOP, BRPOP: single existing list - linkedlist
[ok]: XREADGROUP can read the history of the elements we own
[ok]: ZADD GT XX updates existing elements when new scores are greater and skips new elements
[ok]: XPENDING is able to return pending items
[ok]: ZADD LT XX updates existing elements when new scores are lower and skips new elements
[ok]: XPENDING can return single consumer items
[ok]: ZADD XX and NX are not compatible
[ok]: XPENDING only group
[ok]: ZADD NX with non existing key
[ok]: ZADD NX only add new elements without updating old ones
[ok]: BLPOP, BRPOP: multiple existing lists - linkedlist
[ok]: ZADD GT and NX are not compatible
[ok]: ZADD LT and NX are not compatible
[ok]: ZADD LT and GT are not compatible
[ok]: ZADD INCR works like ZINCRBY
[ok]: ZADD INCR works with a single score-elemenet pair
[ok]: BLPOP, BRPOP: second list has an entry - linkedlist
[ok]: ZADD CH option changes return value to all changed elements
[ok]: ZINCRBY calls leading to NaN result in error
[ok]: ZADD - Variadic version base case
[ok]: ZADD - Return value is the number of actually added items
[ok]: ZADD - Variadic version does not add nothing on single parsing err
[ok]: BRPOPLPUSH - linkedlist
[ok]: ZADD - Variadic version will raise error on missing arg
[ok]: ZINCRBY does not work variadic even if shares ZADD implementation
[ok]: ZCARD basics - ziplist
[ok]: SCAN basic
[ok]: ZREM removes key after last element is removed
[ok]: BLMOVE left left - linkedlist
[ok]: ZREM variadic version
[ok]: ZREM variadic version -- remove elements after key deletion
[ok]: BLMOVE left right - linkedlist
[ok]: ZRANGE basics - ziplist
[ok]: ZREVRANGE basics - ziplist
[ok]: ZRANK/ZREVRANK basics - ziplist
[ok]: ZRANK - after deletion - ziplist
[ok]: ZINCRBY - can create a new sorted set - ziplist
[ok]: ZINCRBY - increment and decrement - ziplist
[ok]: BLMOVE right left - linkedlist
[ok]: ZINCRBY return value
[ok]: XPENDING with IDLE
[ok]: XPENDING with exclusive range intervals works as expected
[ok]: BLMOVE right right - linkedlist
[ok]: XACK is able to remove items from the client/group PEL
[ok]: XACK can't remove the same item multiple times
[ok]: XACK is able to accept multiple arguments
[ok]: XACK should fail if got at least one invalid ID
[ok]: BLPOP, BRPOP: single existing list - ziplist
[ok]: PEL NACK reassignment after XGROUP SETID event
[ok]: Negative multibulk length
[ok]: ZRANGEBYSCORE/ZREVRANGEBYSCORE/ZCOUNT basics
[ok]: Out of range multibulk length
[ok]: XREADGROUP will not report data on empty history. Bug #5577
[ok]: ZRANGEBYSCORE with WITHSCORES
[ok]: Wrong multibulk payload header
[ok]: BLPOP, BRPOP: multiple existing lists - ziplist
[ok]: Negative multibulk payload length
[ok]: XREADGROUP history reporting of deleted entries. Bug #5570
[ok]: Out of range multibulk payload length
[ok]: Non-number multibulk payload length
[ok]: BLPOP, BRPOP: second list has an entry - ziplist
[ok]: ZRANGEBYSCORE with LIMIT
[ok]: Multi bulk request not followed by bulk arguments
[ok]: ZRANGEBYSCORE with LIMIT and WITHSCORES
[ok]: Generic wrong number of args
[ok]: ZRANGEBYSCORE with non-value min or max
[ok]: BRPOPLPUSH - ziplist
[ok]: Unbalanced number of quotes
[ok]: BLMOVE left left - ziplist
[ok]: BLMOVE left right - ziplist
[ok]: BLMOVE right left - ziplist
[ok]: ZRANGEBYLEX/ZREVRANGEBYLEX/ZLEXCOUNT basics
[ok]: Regression for quicklist #3343 bug
[ok]: BLMOVE right right - ziplist
[ok]: ZLEXCOUNT advanced
[ok]: BLPOP, LPUSH + DEL should not awake blocked client
[ok]: Blocking XREADGROUP will not reply with an empty array
[ok]: XGROUP DESTROY should unblock XREADGROUP with -NOGROUP
[ok]: ZRANGEBYSLEX with LIMIT
[ok]: ZRANGEBYLEX with invalid lex range specifiers
[ok]: RENAME can unblock XREADGROUP with data
[ok]: RENAME can unblock XREADGROUP with -NOGROUP
[ok]: ZREMRANGEBYSCORE basics
[ok]: ZREMRANGEBYSCORE with non-value min or max
[ok]: ZREMRANGEBYRANK basics
[ok]: ZUNIONSTORE against non-existing key doesn't set destination - ziplist
[ok]: ZUNION/ZINTER/ZDIFF against non-existing key - ziplist
[1/61 done]: unit/type/incr (0 seconds)
Testing unit/sort
[ok]: ZUNIONSTORE with empty set - ziplist
[2/61 done]: unit/printver (0 seconds)
Testing unit/expire
[ok]: ZUNION/ZINTER/ZDIFF with empty set - ziplist
[ok]: ZUNIONSTORE basics - ziplist
[ok]: ZUNION/ZINTER/ZDIFF with integer members - ziplist
[ok]: ZUNIONSTORE with weights - ziplist
[ok]: ZUNION with weights - ziplist
[ok]: ZUNIONSTORE with a regular set and weights - ziplist
[ok]: ZUNIONSTORE with AGGREGATE MIN - ziplist
[ok]: SADD overflows the maximum allowed integers in an intset
[ok]: ZUNION/ZINTER with AGGREGATE MIN - ziplist
[ok]: ZUNIONSTORE with AGGREGATE MAX - ziplist
[ok]: Variadic SADD
[ok]: ZUNION/ZINTER with AGGREGATE MAX - ziplist
[ok]: ZINTERSTORE basics - ziplist
[ok]: SCAN COUNT
[ok]: ZINTER basics - ziplist
[ok]: ZINTERSTORE with weights - ziplist
[ok]: ZINTER with weights - ziplist
[ok]: ZINTERSTORE with a regular set and weights - ziplist
[ok]: ZINTERSTORE with AGGREGATE MIN - ziplist
[ok]: ZINTERSTORE with AGGREGATE MAX - ziplist
[ok]: ZUNIONSTORE with +inf/-inf scores - ziplist
[ok]: ZUNIONSTORE with NaN weights ziplist
[ok]: ZINTERSTORE with +inf/-inf scores - ziplist
[ok]: ZINTERSTORE with NaN weights ziplist
[ok]: ZDIFFSTORE basics - ziplist
[ok]: ZDIFF basics - ziplist
[ok]: ZDIFFSTORE with a regular set - ziplist
[ok]: ZDIFF subtracting set from itself - ziplist
[ok]: ZDIFF algorithm 1 - ziplist
[ok]: ZDIFF algorithm 2 - ziplist
[ok]: Very big payload in GET/SET
[ok]: SCAN MATCH
[ok]: BLPOP, LPUSH + DEL + SET should not awake blocked client
[ok]: BLPOP with same key multiple times should work (issue #801)
[ok]: MULTI/EXEC is isolated from the point of view of BLPOP
[ok]: BLPOP with variadic LPUSH
[ok]: BRPOPLPUSH with zero timeout should block indefinitely
[ok]: BLMOVE left left with zero timeout should block indefinitely
[ok]: BLMOVE left right with zero timeout should block indefinitely
[ok]: BLMOVE right left with zero timeout should block indefinitely
[ok]: BLMOVE right right with zero timeout should block indefinitely
[ok]: BLMOVE (left, left) with a client BLPOPing the target list
[ok]: BLMOVE (left, right) with a client BLPOPing the target list
[ok]: BLMOVE (right, left) with a client BLPOPing the target list
[ok]: BLMOVE (right, right) with a client BLPOPing the target list
[ok]: BRPOPLPUSH with wrong source type
[ok]: BRPOPLPUSH with wrong destination type
[ok]: BRPOPLPUSH maintains order of elements after failure
[ok]: BRPOPLPUSH with multiple blocked clients
[ok]: Linked LMOVEs
[ok]: Circular BRPOPLPUSH
[ok]: Self-referential BRPOPLPUSH
[ok]: BRPOPLPUSH inside a transaction
[ok]: PUSH resulting from BRPOPLPUSH affect WATCH
[ok]: BRPOPLPUSH does not affect WATCH while still blocked
[ok]: Protocol desync regression test #1
[ok]: AUTH fails when a wrong password is given
[ok]: Arbitrary command gives an error when AUTH is required
[ok]: AUTH succeeds when the right password is given
[ok]: MIGRATE is caching connections
[ok]: EXPIRE - set timeouts multiple times
[ok]: EXPIRE - It should be still possible to read 'x'
[ok]: Once AUTH succeeded we can actually send commands to the server
[ok]: Old Ziplist: SORT BY key
[ok]: Old Ziplist: SORT BY key with limit
[ok]: Old Ziplist: SORT BY hash field
[ok]: SCAN TYPE
[ok]: SSCAN with encoding intset
[ok]: SSCAN with encoding hashtable
[ok]: HSCAN with encoding ziplist
[ok]: XADD with MAXLEN option
[ok]: HSET/HLEN - Big hash creation
[ok]: Is the big hash encoded with an hash table?
[ok]: HGET against the small hash
[3/61 done]: unit/auth (1 seconds)
Testing unit/other
[ok]: HSCAN with encoding hashtable
[ok]: ZSCAN with encoding ziplist
[ok]: Protocol desync regression test #2
[ok]: Set encoding after DEBUG RELOAD
[ok]: SREM basics - regular set
[ok]: SREM basics - intset
[ok]: SAVE - make sure there are all the types as values
[ok]: SREM with multiple arguments
[ok]: SREM variadic version with more args needed to destroy the key
[ok]: HGET against the big hash
[ok]: HGET against non existing key
[ok]: HSET in update and insert mode
[ok]: HSETNX target key missing - small hash
[ok]: HSETNX target key exists - small hash
[ok]: HSETNX target key missing - big hash
[ok]: HSETNX target key exists - big hash
[ok]: HMSET wrong number of args
[ok]: HMSET - small hash
[ok]: ZSCAN with encoding skiplist
[ok]: XADD with MAXLEN option and the '=' argument
[ok]: SCAN guarantees check under write load
[ok]: SSCAN with integer encoded object (issue #1345)
[ok]: SSCAN with PATTERN
[ok]: HSCAN with PATTERN
[ok]: ZSCAN with PATTERN
[ok]: HMSET - big hash
[ok]: HMGET against non existing key and fields
[ok]: HMGET against wrong type
[ok]: HMGET - small hash
[ok]: HMGET - big hash
[ok]: HKEYS - small hash
[ok]: Protocol desync regression test #3
[ok]: XCLAIM can claim PEL items from another consumer
[ok]: Generated sets must be encoded as hashtable
[ok]: SINTER with two sets - hashtable
[ok]: SINTERSTORE with two sets - hashtable
[ok]: ZSCAN scores: regression test for issue #2175
[ok]: HKEYS - big hash
[ok]: HVALS - small hash
[ok]: SINTERSTORE with two sets, after a DEBUG RELOAD - hashtable
[ok]: HVALS - big hash
[ok]: HGETALL - small hash
[ok]: SUNION with two sets - hashtable
[ok]: SUNIONSTORE with two sets - hashtable
[ok]: SINTER against three sets - hashtable
[ok]: SINTERSTORE with three sets - hashtable
[ok]: XADD with MAXLEN option and the '~' argument
[ok]: XADD with NOMKSTREAM option
[ok]: HGETALL - big hash
[ok]: HDEL and return value
[ok]: HDEL - more than a single value
[ok]: HDEL - hash becomes empty before deleting all specified fields
[ok]: HEXISTS
[ok]: Is a ziplist encoded Hash promoted on big payload?
[ok]: HINCRBY against non existing database key
[ok]: HINCRBY against non existing hash key
[ok]: HINCRBY against hash key created by hincrby itself
[ok]: HINCRBY against hash key originally set with HSET
[ok]: SUNION with non existing keys - hashtable
[ok]: HINCRBY over 32bit value
[ok]: SDIFF with two sets - hashtable
[ok]: HINCRBY over 32bit value with over 32bit increment
[ok]: SDIFF with three sets - hashtable
[ok]: SDIFFSTORE with three sets - hashtable
[ok]: HINCRBY fails against hash value with spaces (left)
[ok]: HINCRBY fails against hash value with spaces (right)
[ok]: HINCRBY can detect overflows
[ok]: HINCRBYFLOAT against non existing database key
[ok]: HINCRBYFLOAT against non existing hash key
[ok]: HINCRBYFLOAT against hash key created by hincrby itself
[ok]: HINCRBYFLOAT against hash key originally set with HSET
[ok]: HINCRBYFLOAT over 32bit value
[ok]: HINCRBYFLOAT over 32bit value with over 32bit increment
[ok]: HINCRBYFLOAT fails against hash value with spaces (left)
[ok]: HINCRBYFLOAT fails against hash value with spaces (right)
[ok]: HINCRBYFLOAT fails against hash value that contains a null-terminator in the middle
[ok]: HSTRLEN against the small hash
[ok]: Regression for a crash with blocking ops and pipelining
[ok]: Generated sets must be encoded as intset
[ok]: SINTER with two sets - intset
[ok]: SINTERSTORE with two sets - intset
[ok]: SINTERSTORE with two sets, after a DEBUG RELOAD - intset
[ok]: Old Linked list: SORT BY key
[ok]: Old Linked list: SORT BY key with limit
[ok]: SUNION with two sets - intset
[ok]: HSTRLEN against the big hash
[ok]: HSTRLEN against non existing field
[ok]: HSTRLEN corner cases
[ok]: Hash ziplist regression test for large keys
[ok]: SUNIONSTORE with two sets - intset
[ok]: SINTER against three sets - intset
[ok]: SINTERSTORE with three sets - intset
[ok]: Old Linked list: SORT BY hash field
[ok]: DEL against expired key
[ok]: EXISTS
[ok]: Zero length value in key. SET/GET/EXISTS
[ok]: Commands pipelining
[ok]: Non existing command
[4/61 done]: unit/protocol (1 seconds)
Testing unit/multi
[ok]: RENAME basic usage
[ok]: RENAME source key should no longer exist
[ok]: RENAME against already existing key
[ok]: RENAMENX basic usage
[ok]: RENAMENX against already existing key
[ok]: RENAMENX against already existing key (2)
[ok]: RENAME against non existing source key
[ok]: RENAME where source and dest key are the same (existing)
[ok]: RENAMENX where source and dest key are the same (existing)
[ok]: RENAME where source and dest key are the same (non existing)
[ok]: RENAME with volatile key, should move the TTL as well
[ok]: RENAME with volatile key, should not inherit TTL of target key
[ok]: DEL all keys again (DB 0)
[ok]: SUNION with non existing keys - intset
[ok]: SDIFF with two sets - intset
[ok]: DEL all keys again (DB 1)
[ok]: SDIFF with three sets - intset
[ok]: SDIFFSTORE with three sets - intset
[ok]: COPY basic usage for string
[ok]: SDIFF with first set empty
[ok]: COPY for string does not replace an existing key without REPLACE option
[ok]: SDIFF with same set two times
[ok]: COPY for string can replace an existing key with REPLACE option
[ok]: COPY for string ensures that copied data is independent of copying data
[ok]: COPY for string does not copy data to no-integer DB
[ok]: COPY can copy key expire metadata as well
[ok]: COPY does not create an expire if it does not exist
[ok]: COPY basic usage for list
[ok]: XCLAIM without JUSTID increments delivery count
[ok]: Hash fuzzing #1 - 10 fields
[ok]: COPY basic usage for intset set
[ok]: COPY basic usage for hashtable set
[ok]: COPY basic usage for ziplist sorted set
[ok]: COPY basic usage for skiplist sorted set
[ok]: COPY basic usage for ziplist hash
[ok]: Hash fuzzing #2 - 10 fields
[ok]: MUTLI / EXEC basics
[ok]: DISCARD
[ok]: Nested MULTI are not allowed
[ok]: MULTI where commands alter argc/argv
[ok]: WATCH inside MULTI is not allowed
[ok]: EXEC fails if there are errors while queueing commands #1
[ok]: COPY basic usage for hashtable hash
[ok]: EXEC fails if there are errors while queueing commands #2
[ok]: If EXEC aborts, the client MULTI state is cleared
[ok]: EXEC works on WATCHed key not modified
[ok]: EXEC fail on WATCHed key modified (1 key of 1 watched)
[ok]: EXEC fail on WATCHed key modified (1 key of 5 watched)
[ok]: EXEC fail on WATCHed key modified by SORT with STORE even if the result is empty
[ok]: After successful EXEC key is no longer watched
[ok]: After failed EXEC key is no longer watched
[ok]: It is possible to UNWATCH
[ok]: UNWATCH when there is nothing watched works as expected
[ok]: FLUSHALL is able to touch the watched keys
[ok]: FLUSHALL does not touch non affected keys
[ok]: FLUSHDB is able to touch the watched keys
[ok]: FLUSHDB does not touch non affected keys
[ok]: WATCH is able to remember the DB a key belongs to
[ok]: WATCH will consider touched keys target of EXPIRE
[ok]: BRPOPLPUSH timeout
[ok]: BLPOP when new key is moved into place
[ok]: BLPOP when result key is created by SORT..STORE
[ok]: BLPOP: with single empty list argument
[ok]: BLPOP: with negative timeout
[ok]: XCLAIM same consumer
[ok]: XINFO FULL output
[ok]: XGROUP CREATECONSUMER: create consumer if does not exist
[ok]: XGROUP CREATECONSUMER: group must exist
[ok]: BLPOP: with non-integer timeout
[ok]: XREADGROUP with NOACK creates consumer
[ok]: COPY basic usage for stream
[ok]: COPY basic usage for stream-cgroups
[ok]: MOVE basic usage
[ok]: MOVE against key existing in the target DB
[ok]: MOVE against non-integer DB (#1428)
[ok]: MOVE can move key expire metadata as well
[ok]: MOVE does not create an expire if it does not exist
[ok]: SET/GET keys in different DBs
[ok]: RANDOMKEY
[ok]: RANDOMKEY against empty DB
[ok]: RANDOMKEY regression 1
[ok]: KEYS * two times with long key, Github issue #1208
[5/61 done]: unit/keyspace (2 seconds)
Testing unit/quit
[ok]: QUIT returns OK
[ok]: Pipelined commands after QUIT must not be executed
[ok]: Pipelined commands after QUIT that exceed read buffer size
[6/61 done]: unit/quit (0 seconds)
Testing unit/aofrw
[ok]: FUZZ stresser with data model binary
[ok]: BLPOP: with zero timeout should block indefinitely
[ok]: BLPOP: second argument is not a list
[ok]: WATCH will consider touched expired keys
[ok]: DISCARD should clear the WATCH dirty flag on the client
[ok]: DISCARD should UNWATCH all the keys
[ok]: EXPIRE - After 2.1 seconds the key should no longer be here
[ok]: EXPIRE - write on expire should work
[ok]: EXPIREAT - Check for EXPIRE alike behavior
[ok]: SETEX - Set + Expire combo operation. Check for TTL
[ok]: SETEX - Check value
[ok]: SETEX - Overwrite old key
[ok]: MULTI / EXEC is propagated correctly (single write command)
[ok]: MULTI / EXEC is propagated correctly (empty transaction)
[ok]: Consumer without PEL is present in AOF after AOFRW
[ok]: MULTI / EXEC is propagated correctly (read-only commands)
[ok]: MULTI / EXEC is propagated correctly (write command, no effect)
[ok]: DISCARD should not fail during OOM
[ok]: Consumer group last ID propagation to slave (NOACK=0)
[ok]: Consumer group last ID propagation to slave (NOACK=1)
[ok]: MULTI and script timeout
[ok]: BLPOP: timeout
[ok]: BLPOP: arguments are empty
[ok]: BRPOP: with single empty list argument
[ok]: BRPOP: with negative timeout
[ok]: BRPOP: with non-integer timeout
[ok]: FUZZ stresser with data model alpha
[ok]: SETEX - Wait for the key to expire
[ok]: SETEX - Wrong time parameter
[ok]: PERSIST can undo an EXPIRE
[ok]: PERSIST returns 0 against non existing or non volatile keys
[ok]: EXEC and script timeout
[ok]: MULTI-EXEC body and script timeout
[ok]: Empty stream with no lastid can be rewrite into AOF correctly
[ok]: just EXEC and script timeout
[ok]: exec with write commands and state change
[ok]: exec with read commands and stale replica state change
[ok]: EXEC with only read commands should not be rejected when OOM
[ok]: EXEC with at least one use-memory command should fail
[ok]: Blocking commands ignores the timeout
[ok]: BRPOP: with zero timeout should block indefinitely
[ok]: BRPOP: second argument is not a list
[7/61 done]: unit/multi (4 seconds)
Testing unit/acl
[8/61 done]: unit/type/stream-cgroups (5 seconds)
Testing unit/latency-monitor
[ok]: Connections start with the default user
[ok]: It is possible to create new users
[ok]: New users start disabled
[ok]: Enabling the user allows the login
[ok]: Only the set of correct passwords work
[ok]: It is possible to remove passwords from the set of valid ones
[ok]: Test password hashes can be added
[ok]: Test password hashes validate input
[ok]: ACL GETUSER returns the password hash instead of the actual password
[ok]: Test hashed passwords removal
[ok]: By default users are not able to access any command
[ok]: By default users are not able to access any key
[ok]: It's possible to allow the access of a subset of keys
[ok]: By default users are able to publish to any channel
[ok]: By default users are able to subscribe to any channel
[ok]: By default users are able to subscribe to any pattern
[ok]: It's possible to allow publishing to a subset of channels
[ok]: It's possible to allow subscribing to a subset of channels
[ok]: It's possible to allow subscribing to a subset of channel patterns
[ok]: Subscribers are killed when revoked of channel permission
[ok]: Subscribers are killed when revoked of pattern permission
[ok]: Subscribers are pardoned if literal permissions are retained and/or gaining allchannels
[ok]: Users can be configured to authenticate with any password
[ok]: ACLs can exclude single commands
[ok]: ACLs can include or exclude whole classes of commands
[ok]: ACLs can include single subcommands
[ok]: ACL GETUSER is able to translate back command permissions
[ok]: ACL GETUSER provides reasonable results
[ok]: ACL #5998 regression: memory leaks adding / removing subcommands
[ok]: ACL LOG shows failed command executions at toplevel
[ok]: ACL LOG is able to test similar events
[ok]: ACL LOG is able to log keys access violations and key name
[ok]: ACL LOG is able to log channel access violations and channel name
[ok]: ACL LOG RESET is able to flush the entries in the log
[ok]: ACL LOG can distinguish the transaction context (1)
[ok]: ACL LOG can distinguish the transaction context (2)
[ok]: ACL can log errors in the context of Lua scripting
[ok]: ACL LOG can accept a numerical argument to show less entries
[ok]: ACL LOG can log failed auth attempts
[ok]: ACL LOG entries are limited to a maximum amount
[ok]: When default user is off, new connections are not authenticated
[ok]: ACL HELP should not have unexpected options
[ok]: Delete a user that the client doesn't use
[ok]: Delete a user that the client is using
[ok]: Alice: can excute all command
[ok]: Bob: just excute @set and acl command
[ok]: ACL load and save
[9/61 done]: unit/acl (1 seconds)
Testing integration/block-repl
[ok]: BRPOP: timeout
[ok]: EXPIRE precision is now the millisecond
[ok]: BRPOP: arguments are empty
[ok]: BLPOP inside a transaction
[ok]: LPUSHX, RPUSHX - generic
[ok]: LPUSHX, RPUSHX - linkedlist
[ok]: LINSERT - linkedlist
[ok]: LPUSHX, RPUSHX - ziplist
[ok]: LINSERT - ziplist
[ok]: LINSERT raise error on bad syntax
[ok]: Hash fuzzing #1 - 512 fields
[ok]: LINDEX consistency test - quicklist
[ok]: FUZZ stresser with data model compr
[ok]: LINDEX random access - quicklist
[ok]: PEXPIRE/PSETEX/PEXPIREAT can set sub-second expires
[ok]: TTL returns time to live in seconds
[ok]: PTTL returns time to live in milliseconds
[ok]: TTL / PTTL return -1 if key has no expire
[ok]: TTL / PTTL return -2 if key does not exit
[ok]: Check if list is still ok after a DEBUG RELOAD - quicklist
[ok]: XADD mass insertion and XLEN
[ok]: XADD with ID 0-0
[ok]: XRANGE COUNT works as expected
[ok]: XREVRANGE COUNT works as expected
[ok]: First server should have role slave after SLAVEOF
[ok]: LINDEX consistency test - quicklist
[ok]: LINDEX random access - quicklist
[ok]: BGSAVE
[ok]: SELECT an out of range DB
[ok]: Redis should actively expire keys incrementally
[ok]: Very big payload random access
[ok]: Check if list is still ok after a DEBUG RELOAD - quicklist
[ok]: LLEN against non-list value error
[ok]: LLEN against non existing key
[ok]: LINDEX against non-list value error
[ok]: LINDEX against non existing key
[ok]: LPUSH against non-list value error
[ok]: RPUSH against non-list value error
[ok]: RPOPLPUSH base case - linkedlist
[ok]: LMOVE left left base case - linkedlist
[ok]: LMOVE left right base case - linkedlist
[ok]: LMOVE right left base case - linkedlist
[ok]: LMOVE right right base case - linkedlist
[ok]: RPOPLPUSH with the same list as src and dst - linkedlist
[ok]: LMOVE left left with the same list as src and dst - linkedlist
[ok]: LMOVE left right with the same list as src and dst - linkedlist
[ok]: LMOVE right left with the same list as src and dst - linkedlist
[ok]: LMOVE right right with the same list as src and dst - linkedlist
[ok]: RPOPLPUSH with linkedlist source and existing target linkedlist
[ok]: LMOVE left left with linkedlist source and existing target linkedlist
[ok]: LMOVE left right with linkedlist source and existing target linkedlist
[ok]: LMOVE right left with linkedlist source and existing target linkedlist
[ok]: LMOVE right right with linkedlist source and existing target linkedlist
[ok]: RPOPLPUSH with linkedlist source and existing target ziplist
[ok]: LMOVE left left with linkedlist source and existing target ziplist
[ok]: LMOVE left right with linkedlist source and existing target ziplist
[ok]: LMOVE right left with linkedlist source and existing target ziplist
[ok]: LMOVE right right with linkedlist source and existing target ziplist
[ok]: RPOPLPUSH base case - ziplist
[ok]: LMOVE left left base case - ziplist
[ok]: LMOVE left right base case - ziplist
[ok]: LMOVE right left base case - ziplist
[ok]: LMOVE right right base case - ziplist
[ok]: RPOPLPUSH with the same list as src and dst - ziplist
[ok]: LMOVE left left with the same list as src and dst - ziplist
[ok]: LMOVE left right with the same list as src and dst - ziplist
[ok]: LMOVE right left with the same list as src and dst - ziplist
[ok]: LMOVE right right with the same list as src and dst - ziplist
[ok]: RPOPLPUSH with ziplist source and existing target linkedlist
[ok]: LMOVE left left with ziplist source and existing target linkedlist
[ok]: LMOVE left right with ziplist source and existing target linkedlist
[ok]: LMOVE right left with ziplist source and existing target linkedlist
[ok]: LMOVE right right with ziplist source and existing target linkedlist
[ok]: RPOPLPUSH with ziplist source and existing target ziplist
[ok]: LMOVE left left with ziplist source and existing target ziplist
[ok]: LMOVE left right with ziplist source and existing target ziplist
[ok]: LMOVE right left with ziplist source and existing target ziplist
[ok]: LMOVE right right with ziplist source and existing target ziplist
[ok]: RPOPLPUSH against non existing key
[ok]: RPOPLPUSH against non list src key
[ok]: RPOPLPUSH against non list dst key
[ok]: RPOPLPUSH against non existing src key
[ok]: Basic LPOP/RPOP - linkedlist
[ok]: Basic LPOP/RPOP - ziplist
[ok]: LPOP/RPOP against non list value
[ok]: Mass RPOP/LPOP - quicklist
[ok]: EXPIRES after a reload (snapshot + append only file rewrite)
[ok]: Test latency events logging
[ok]: LATENCY HISTORY output is ok
[ok]: LATENCY LATEST output is ok
[ok]: LATENCY HISTORY / RESET with wrong event name is fine
[ok]: LATENCY DOCTOR produces some output
[ok]: LATENCY RESET is able to reset events
[ok]: Mass RPOP/LPOP - quicklist
[ok]: LRANGE basics - linkedlist
[ok]: LRANGE inverted indexes - linkedlist
[ok]: LRANGE out of range indexes including the full list - linkedlist
[ok]: LRANGE out of range negative end index - linkedlist
[ok]: LRANGE basics - ziplist
[ok]: Redis should lazy expire keys
[ok]: LRANGE inverted indexes - ziplist
[ok]: LRANGE out of range indexes including the full list - ziplist
[ok]: LRANGE out of range negative end index - ziplist
[ok]: LRANGE against non existing key
[ok]: LTRIM basics - linkedlist
[ok]: LTRIM out of range negative end index - linkedlist
[ok]: LTRIM basics - ziplist
[ok]: LTRIM out of range negative end index - ziplist
[ok]: LSET - linkedlist
[ok]: LSET out of range index - linkedlist
[ok]: LSET - ziplist
[ok]: LSET out of range index - ziplist
[ok]: LSET against non existing key
[ok]: LSET against non list value
[ok]: LREM remove all the occurrences - linkedlist
[ok]: LREM remove the first occurrence - linkedlist
[ok]: LREM remove non existing element - linkedlist
[ok]: LREM starting from tail with negative count - linkedlist
[ok]: LREM starting from tail with negative count (2) - linkedlist
[ok]: LREM deleting objects that may be int encoded - linkedlist
[ok]: LREM remove all the occurrences - ziplist
[ok]: LREM remove the first occurrence - ziplist
[ok]: LREM remove non existing element - ziplist
[ok]: LREM starting from tail with negative count - ziplist
[ok]: LREM starting from tail with negative count (2) - ziplist
[ok]: LREM deleting objects that may be int encoded - ziplist
[ok]: XRANGE can be used to iterate the whole stream
[ok]: Hash fuzzing #2 - 512 fields
[ok]: EXPIRE should not resurrect keys (issue #1026)
[ok]: 5 keys in, 5 keys out
[ok]: EXPIRE with empty string as TTL should report an error
[ok]: Regression for bug 593 - chaining BRPOPLPUSH with other blocking cmds
[ok]: client unblock tests
[ok]: List ziplist of various encodings
[ok]: List ziplist of various encodings - sanitize dump
[10/61 done]: unit/type/list (10 seconds)
Testing integration/replication
[ok]: Slave enters handshake
[ok]: Old Big Linked list: SORT BY key
[ok]: Old Big Linked list: SORT BY key with limit
[ok]: EXPIRES after AOF reload (without rewrite)
[ok]: Old Big Linked list: SORT BY hash field
[ok]: Intset: SORT BY key
[ok]: Intset: SORT BY key with limit
[ok]: Intset: SORT BY hash field
[ok]: SET 10000 numeric keys and access all them in reverse order
[ok]: DBSIZE should be 10000 now
[ok]: SETNX target key missing
[ok]: SETNX target key exists
[ok]: SETNX against not-expired volatile key
[ok]: Hash table: SORT BY key
[ok]: Hash table: SORT BY key with limit
[ok]: Hash table: SORT BY hash field
[ok]: ZDIFF fuzzing
[ok]: Basic ZPOP with a single key - ziplist
[ok]: ZPOP with count - ziplist
[ok]: BZPOP with a single existing sorted set - ziplist
[ok]: BZPOP with multiple existing sorted sets - ziplist
[ok]: BZPOP second sorted set has members - ziplist
[ok]: Check encoding - skiplist
[ok]: ZSET basic ZADD and score update - skiplist
[ok]: ZSET element can't be set to NaN with ZADD - skiplist
[ok]: ZSET element can't be set to NaN with ZINCRBY
[ok]: ZADD with options syntax error with incomplete pair
[ok]: ZADD XX option without key - skiplist
[ok]: ZADD XX existing key - skiplist
[ok]: ZADD XX returns the number of elements actually added
[ok]: ZADD XX updates existing elements score
[ok]: ZADD GT updates existing elements when new scores are greater
[ok]: ZADD LT updates existing elements when new scores are lower
[ok]: ZADD GT XX updates existing elements when new scores are greater and skips new elements
[ok]: ZADD LT XX updates existing elements when new scores are lower and skips new elements
[ok]: ZADD XX and NX are not compatible
[ok]: ZADD NX with non existing key
[ok]: ZADD NX only add new elements without updating old ones
[ok]: ZADD GT and NX are not compatible
[ok]: ZADD LT and NX are not compatible
[ok]: ZADD LT and GT are not compatible
[ok]: ZADD INCR works like ZINCRBY
[ok]: ZADD INCR works with a single score-elemenet pair
[ok]: ZADD CH option changes return value to all changed elements
[ok]: ZINCRBY calls leading to NaN result in error
[ok]: ZADD - Variadic version base case
[ok]: ZADD - Return value is the number of actually added items
[ok]: ZADD - Variadic version does not add nothing on single parsing err
[ok]: ZADD - Variadic version will raise error on missing arg
[ok]: ZINCRBY does not work variadic even if shares ZADD implementation
[ok]: ZCARD basics - skiplist
[ok]: ZREM removes key after last element is removed
[ok]: ZREM variadic version
[ok]: ZREM variadic version -- remove elements after key deletion
[ok]: ZRANGE basics - skiplist
[ok]: ZREVRANGE basics - skiplist
[ok]: ZRANK/ZREVRANK basics - skiplist
[ok]: ZRANK - after deletion - skiplist
[ok]: ZINCRBY - can create a new sorted set - skiplist
[ok]: ZINCRBY - increment and decrement - skiplist
[ok]: ZINCRBY return value
[ok]: ZRANGEBYSCORE/ZREVRANGEBYSCORE/ZCOUNT basics
[ok]: ZRANGEBYSCORE with WITHSCORES
[ok]: ZRANGEBYSCORE with LIMIT
[ok]: ZRANGEBYSCORE with LIMIT and WITHSCORES
[ok]: ZRANGEBYSCORE with non-value min or max
[ok]: ZRANGEBYLEX/ZREVRANGEBYLEX/ZLEXCOUNT basics
[ok]: ZLEXCOUNT advanced
[ok]: ZRANGEBYSLEX with LIMIT
[ok]: ZRANGEBYLEX with invalid lex range specifiers
[ok]: ZREMRANGEBYSCORE basics
[ok]: ZREMRANGEBYSCORE with non-value min or max
[ok]: ZREMRANGEBYRANK basics
[ok]: ZUNIONSTORE against non-existing key doesn't set destination - skiplist
[ok]: ZUNION/ZINTER/ZDIFF against non-existing key - skiplist
[ok]: ZUNIONSTORE with empty set - skiplist
[ok]: ZUNION/ZINTER/ZDIFF with empty set - skiplist
[ok]: ZUNIONSTORE basics - skiplist
[ok]: ZUNION/ZINTER/ZDIFF with integer members - skiplist
[ok]: ZUNIONSTORE with weights - skiplist
[ok]: ZUNION with weights - skiplist
[ok]: ZUNIONSTORE with a regular set and weights - skiplist
[ok]: ZUNIONSTORE with AGGREGATE MIN - skiplist
[ok]: ZUNION/ZINTER with AGGREGATE MIN - skiplist
[ok]: ZUNIONSTORE with AGGREGATE MAX - skiplist
[ok]: ZUNION/ZINTER with AGGREGATE MAX - skiplist
[ok]: ZINTERSTORE basics - skiplist
[ok]: ZINTER basics - skiplist
[ok]: ZINTERSTORE with weights - skiplist
[ok]: ZINTER with weights - skiplist
[ok]: ZINTERSTORE with a regular set and weights - skiplist
[ok]: ZINTERSTORE with AGGREGATE MIN - skiplist
[ok]: ZINTERSTORE with AGGREGATE MAX - skiplist
[ok]: ZUNIONSTORE with +inf/-inf scores - skiplist
[ok]: ZUNIONSTORE with NaN weights skiplist
[ok]: ZINTERSTORE with +inf/-inf scores - skiplist
[ok]: ZINTERSTORE with NaN weights skiplist
[ok]: ZDIFFSTORE basics - skiplist
[ok]: ZDIFF basics - skiplist
[ok]: ZDIFFSTORE with a regular set - skiplist
[ok]: ZDIFF subtracting set from itself - skiplist
[ok]: ZDIFF algorithm 1 - skiplist
[ok]: ZDIFF algorithm 2 - skiplist
[ok]: SET - use EX/PX option, TTL should not be reseted after loadaof
[ok]: SET command will remove expire
[ok]: SET - use KEEPTTL option, TTL should not be removed
[ok]: SDIFF fuzzing
[ok]: SINTER against non-set should throw error
[ok]: SUNION against non-set should throw error
[ok]: SINTER should handle non existing key as empty
[ok]: SINTER with same integer elements but different encoding
[ok]: SINTERSTORE against non existing keys should delete dstkey
[ok]: SUNIONSTORE against non existing keys should delete dstkey
[ok]: SPOP basics - hashtable
[ok]: SPOP with <count>=1 - hashtable
[ok]: SRANDMEMBER - hashtable
[ok]: SPOP basics - intset
[ok]: SPOP with <count>=1 - intset
[ok]: SRANDMEMBER - intset
[ok]: SPOP with <count>
[ok]: SPOP with <count>
[ok]: SPOP using integers, testing Knuth's and Floyd's algorithm
[ok]: SPOP using integers with Knuth's algorithm
[ok]: SPOP new implementation: code path #1
[ok]: SPOP new implementation: code path #2
[ok]: SPOP new implementation: code path #3
[ok]: SRANDMEMBER with <count> against non existing key
[ok]: SRANDMEMBER with <count> - hashtable
[ok]: SRANDMEMBER with <count> - intset
[ok]: SMOVE basics - from regular set to intset
[ok]: SMOVE basics - from intset to regular set
[ok]: SMOVE non existing key
[ok]: SMOVE non existing src set
[ok]: SMOVE from regular set to non existing destination set
[ok]: SMOVE from intset to non existing destination set
[ok]: SMOVE wrong src key type
[ok]: SMOVE wrong dst key type
[ok]: SMOVE with identical source and destination
[ok]: Stress test the hash ziplist -> hashtable encoding conversion
[ok]: XREVRANGE returns the reverse of XRANGE
[ok]: XRANGE exclusive ranges
[ok]: XREAD with non empty stream
[ok]: Non blocking XREAD with empty streams
[ok]: XREAD with non empty second stream
[ok]: Blocking XREAD waiting new data
[ok]: Blocking XREAD waiting old data
[ok]: Hash ziplist of various encodings
[ok]: Hash ziplist of various encodings - sanitize dump
[ok]: Blocking XREAD will not reply with an empty array
[ok]: XREAD: XADD + DEL should not awake client
[ok]: XREAD: XADD + DEL + LPUSH should not awake client
[ok]: XREAD with same stream name multiple times should work
[ok]: XREAD + multiple XADD inside transaction
[ok]: XDEL basic test
[11/61 done]: unit/type/hash (15 seconds)
Testing integration/replication-2
[ok]: First server should have role slave after SLAVEOF
[ok]: If min-slaves-to-write is honored, write is accepted
[ok]: No write if min-slaves-to-write is < attached slaves
[ok]: If min-slaves-to-write is honored, write is accepted (again)
[ok]: SET - use KEEPTTL option, TTL should not be removed after loadaof
[12/61 done]: unit/expire (16 seconds)
Testing integration/replication-3
[ok]: MIGRATE cached connections are released after some time
[ok]: MIGRATE is able to migrate a key between two instances
[ok]: First server should have role slave after SLAVEOF
[ok]: PIPELINING stresser (also a regression for the old epoll bug)
[ok]: APPEND basics
[ok]: APPEND basics, integer encoded values
[ok]: MIGRATE is able to copy a key between two instances
[ok]: SETNX against expired volatile key
[ok]: MGET
[ok]: MGET against non existing key
[ok]: MGET against non-string key
[ok]: GETSET (set new value)
[ok]: GETSET (replace old value)
[ok]: MSET base case
[ok]: MSET wrong number of args
[ok]: MSETNX with already existent key
[ok]: MSETNX with not existing keys
[ok]: STRLEN against non-existing key
[ok]: STRLEN against integer-encoded value
[ok]: STRLEN against plain string
[ok]: SETBIT against non-existing key
[ok]: SETBIT against string-encoded key
[ok]: SETBIT against integer-encoded key
[ok]: SETBIT against key with wrong type
[ok]: SETBIT with out of range bit offset
[ok]: SETBIT with non-bit argument
[ok]: MIGRATE will not overwrite existing keys, unless REPLACE is used
[ok]: MIGRATE propagates TTL correctly
[ok]: APPEND fuzzing
[ok]: SCAN regression test for issue #4906
[13/61 done]: unit/scan (17 seconds)
Testing integration/replication-4
[ok]: FLUSHDB
[ok]: Perform a final SAVE to leave a clean DB on disk
[ok]: RESET clears client state
[ok]: RESET clears MONITOR state
[ok]: RESET clears and discards MULTI state
[ok]: RESET clears Pub/Sub state
[ok]: RESET clears authenticated state
[ok]: SETBIT fuzzing
[ok]: GETBIT against non-existing key
[ok]: GETBIT against string-encoded key
[ok]: GETBIT against integer-encoded key
[ok]: SETRANGE against non-existing key
[ok]: SETRANGE against string-encoded key
[ok]: SETRANGE against integer-encoded key
[ok]: SETRANGE against key with wrong type
[ok]: SETRANGE with out of range offset
[ok]: GETRANGE against non-existing key
[ok]: GETRANGE against string value
[ok]: GETRANGE against integer-encoded value
[ok]: intsets implementation stress testing
[14/61 done]: unit/type/set (18 seconds)
Testing integration/replication-psync
[ok]: Slave should be able to synchronize with the master
[ok]: Don't rehash if redis has child proecess
[ok]: First server should have role slave after SLAVEOF
[15/61 done]: unit/other (18 seconds)
Testing integration/aof
[ok]: Unfinished MULTI: Server should start if load-truncated is yes
[ok]: Short read: Server should start if load-truncated is yes
[ok]: Truncated AOF loaded: we expect foo to be equal to 5
[ok]: Append a new command after loading an incomplete AOF
[ok]: Short read + command: Server should start
[ok]: Truncated AOF loaded: we expect foo to be equal to 6 now
[ok]: Detect write load to master
[ok]: Bad format: Server should have logged an error
[ok]: Test replication partial resync: no reconnection, just sync (diskless: no, disabled, reconnect: 0)
[ok]: Unfinished MULTI: Server should have logged an error
[ok]: Short read: Server should have logged an error
[ok]: Short read: Utility should confirm the AOF is not valid
[ok]: Short read: Utility should be able to fix the AOF
[ok]: Fixed AOF: Server should have been started
[ok]: Fixed AOF: Keyspace should contain values that were parseable
[ok]: Slave is able to detect timeout during handshake
[ok]: AOF+SPOP: Server should have been started
[ok]: AOF+SPOP: Set should have 1 member
[ok]: Slave should be able to synchronize with the master
[ok]: XDEL fuzz test
[ok]: AOF+SPOP: Server should have been started
[ok]: AOF+SPOP: Set should have 1 member
[ok]: Set instance A as slave of B
[ok]: AOF+EXPIRE: Server should have been started
[ok]: AOF+EXPIRE: List should be empty
[ok]: Redis should not try to convert DEL into EXPIREAT for EXPIRE -1
[ok]: No write if min-slaves-max-lag is > of the slave lag
[ok]: min-slaves-to-write is ignored by slaves
[ok]: Detect write load to master
[ok]: GETRANGE fuzzing
[ok]: Extended SET can detect syntax errors
[ok]: Extended SET NX option
[ok]: Extended SET XX option
[ok]: Extended SET GET option
[ok]: Big Hash table: SORT BY key
[ok]: Extended SET GET option with no previous value
[ok]: Extended SET GET with NX option should result in syntax err
[ok]: Extended SET GET with incorrect type should result in wrong type error
[ok]: Extended SET EX option
[ok]: Extended SET PX option
[ok]: Extended SET using multiple options at once
[ok]: GETRANGE with huge ranges, Github issue #1844
[ok]: STRALGO LCS string output with STRINGS option
[ok]: STRALGO LCS len
[ok]: LCS with KEYS option
[ok]: LCS indexes
[ok]: LCS indexes with match len
[ok]: LCS indexes with match len and minimum match len
[ok]: Big Hash table: SORT BY key with limit
[16/61 done]: unit/type/string (22 seconds)
Testing integration/rdb
[ok]: LTRIM stress testing - linkedlist
[ok]: RDB encoding loading test
[ok]: INCRBYFLOAT replication, should not remove expire
[ok]: GETSET replication
[ok]: BRPOPLPUSH replication, when blocking against empty list
[ok]: Server started empty with non-existing RDB file
[ok]: Server started empty with empty RDB file
[ok]: BRPOPLPUSH replication, list exists
[ok]: BLMOVE (left, left) replication, when blocking against empty list
[ok]: Test RDB stream encoding
[ok]: Test RDB stream encoding - sanitize dump
[ok]: LATENCY of expire events are correctly collected
[ok]: LATENCY HELP should not have unexpected options
[17/61 done]: unit/latency-monitor (18 seconds)
Testing integration/corrupt-dump
[ok]: Server should not start if RDB is corrupted
[ok]: Big Hash table: SORT BY hash field
Logged warnings (pid 12861):
[ok]: SORT GET #
[ok]: SORT GET <const>
(none)
[err]: corrupt payload: #7445 - with sanitize in tests/integration/corrupt-dump.tcl
Expected 'ERR DUMP payload version or checksum are wrong' to match '*Bad data format*' (context: type eval line 6 cmd {assert_match "*Bad data format*" $err} proc ::start_server)
[ok]: SORT GET (key and hash) with sanity check
[ok]: SORT BY key STORE
[ok]: SORT BY hash field STORE
[ok]: SORT extracts STORE correctly
[ok]: SORT extracts multiple STORE correctly
[ok]: SORT DESC
[ok]: SORT ALPHA against integer encoded strings
[ok]: SORT sorted set
[ok]: SORT sorted set BY nosort should retain ordering
[ok]: SORT sorted set BY nosort + LIMIT
[ok]: SORT sorted set BY nosort works as expected from scripts
[ok]: SORT sorted set: +inf and -inf handling
[ok]: SORT regression for issue #19, sorting floats
[ok]: Test FLUSHALL aborts bgsave
[ok]: SORT with STORE returns zero if result is empty (github issue 224)
[ok]: SORT with STORE does not create empty lists (github issue 224)
[ok]: SORT with STORE removes key if result is empty (github issue 227)
[ok]: SORT with BY <constant> and STORE should still order output
[ok]: SORT will complain with numerical sorting and bad doubles (1)
[ok]: SORT will complain with numerical sorting and bad doubles (2)
[ok]: SORT BY sub-sorts lexicographically if score is the same
[ok]: SORT GET with pattern ending with just -> does not get hash field
[ok]: SORT by nosort retains native order for lists
[ok]: SORT by nosort plus store retains native order for lists
[ok]: SORT by nosort with limit returns based on original list order
[ok]: bgsave resets the change counter
Logged warnings (pid 12933):
(none)
[exception]: Executing test client: ERR DUMP payload version or checksum are wrong.
ERR DUMP payload version or checksum are wrong
while executing
"[srv $level "client"] {*}$args"
(procedure "r" line 7)
invoked from within
"r restore key 0 $corrupt_payload_7445"
("uplevel" body line 3)
invoked from within
"uplevel 1 $code "
(procedure "start_server" line 3)
invoked from within
"start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {
r config set sanitize-dump-payload no..."
("uplevel" body line 2)
invoked from within
"uplevel 1 $code"
(procedure "test" line 51)
invoked from within
"test {corrupt payload: #7445 - without sanitize - 1} {
start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-en..."
("uplevel" body line 16)
invoked from within
"uplevel 1 $code"
(procedure "tags" line 15)
invoked from within
"tags {"dump" "corruption"} {
set corrupt_payload_7445 "\x0E\x01\x1D\x1D\x00\x00\x00\x16\x00\x00\x00\x03\x00\x00\x04\x43\x43\x43\x43\x06\x04\x42\x42\x..."
(file "tests/integration/corrupt-dump.tcl" line 9)
invoked from within
"source $path"
(procedure "execute_test_file" line 4)
invoked from within
"execute_test_file $data"
(procedure "test_client_main" line 10)
invoked from within
"test_client_main $::test_server_port "
Killing still running Redis server 10398
Killing still running Redis server 10435
Killing still running Redis server 10433
Killing still running Redis server 10442
Killing still running Redis server 10441
Killing still running Redis server 10793
Killing still running Redis server 11103
Killing still running Redis server 11417
Killing still running Redis server 11437
I/O error reading reply
while executing
"$r blpop $k 2"
("uplevel" body line 2)
invoked from within
"uplevel 1 [lindex $args $path]"
(procedure "randpath" line 3)
invoked from within
"randpath {
randpath {
$r rpush $k $v
} {
$r lpush $k $v
}
} {
..."
(procedure "bg_block_op" line 12)
invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
(file "tests/helpers/bg_block_op.tcl" line 54)I/O error reading reply
while executing
"$r blpop $k 2"
("uplevel" body line 2)
invoked from within
"uplevel 1 [lindex $args $path]"
(procedure "randpath" line 3)
invoked from within
"randpath {
randpath {
$r rpush $k $v
} {
$r lpush $k $v
}
} {
..."
(procedure "bg_block_op" line 12)
invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
(file "tests/helpers/bg_block_op.tcl" line 54)
I/O error reading reply
while executing
"$r blpop $k $k2 2"
("uplevel" body line 2)
invoked from within
"uplevel 1 [lindex $args $path]"
(procedure "randpath" line 3)
invoked from within
"randpath {
randpath {
$r rpush $k $v
} {
$r lpush $k $v
}
} {
..."
(procedure "bg_block_op" line 12)
invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
(file "tests/helpers/bg_block_op.tcl" line 54)
Killing still running Redis server 11570
Killing still running Redis server 11590
Killing still running Redis server 11627
Killing still running Redis server 11659
Killing still running Redis server 11853
Killing still running Redis server 11899
Killing still running Redis server 12171
Killing still running Redis server 12195
Killing still running Redis server 12229
Killing still running Redis server 12299
Killing still running Redis server 12339
I/O error reading reply
while executing
"{*}$r type $k"
(procedure "createComplexDataset" line 27)
invoked from within
"createComplexDataset $r $ops"
(procedure "bg_complex_data" line 4)
invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
(file "tests/helpers/bg_complex_data.tcl" line 12)I/O error reading reply
while executing
"{*}$r type $k"
(procedure "createComplexDataset" line 27)
invoked from within
"createComplexDataset $r $ops"
(procedure "bg_complex_data" line 4)
invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
(file "tests/helpers/bg_complex_data.tcl" line 12)
I/O error reading reply
while executing
"{*}$r type $k"
(procedure "createComplexDataset" line 27)
invoked from within
"createComplexDataset $r $ops"
(procedure "bg_complex_data" line 4)
invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
(file "tests/helpers/bg_complex_data.tcl" line 12)
Killing still running Redis server 12448
Killing still running Redis server 12490
Killing still running Redis server 12625
Killing still running Redis server 12839
Killing still running Redis server 13002
make[1]: *** [test] Error 1
make[1]: Leaving directory `/opt/redis-6.2-rc1/src'
make: *** [test] Error 2
Comment From: oranagra
@moria7757 Thanks for testing this.
The error you reported now is a different one (a new test that was added in 6.2).
While I'm looking at it, can you please run the following to test the original problem on 6.2:
./runtest --single unit/memefficiency
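If it helps to stay closer to the original single-client setup, the two runtest options should also compose (just a suggestion, assuming --single and --clients can be combined):
./runtest --single unit/memefficiency --clients 1   # assuming --single and --clients compose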
Comment From: moria7757
> ./runtest --single unit/memefficiency
It seems to be the same error again.
./runtest --single unit/memefficiency output:
Cleanup: may take some time... OK
Starting test server at port 21079
[ready]: 26608
Testing unit/memefficiency
[ready]: 26604
[ready]: 26606
[ready]: 26614
[ready]: 26611
[ready]: 26620
[ready]: 26617
[ready]: 26632
[ready]: 26623
[ready]: 26627
[ready]: 26629
[ready]: 26635
[ready]: 26638
[ready]: 26642
[ready]: 26647
[ready]: 26645
[ok]: Memory efficiency with values in range 32
[ok]: Memory efficiency with values in range 64
[ok]: Memory efficiency with values in range 128
[ok]: Memory efficiency with values in range 1024
[ok]: Memory efficiency with values in range 16384
[1/1 done]: unit/memefficiency (4 seconds)
Testing solo test
# Memory
used_memory:104836456
used_memory_human:99.98M
used_memory_rss:129499136
used_memory_rss_human:123.50M
used_memory_peak:107479088
used_memory_peak_human:102.50M
used_memory_peak_perc:97.54%
used_memory_overhead:18486776
used_memory_startup:836576
used_memory_dataset:86349680
used_memory_dataset_perc:83.03%
allocator_allocated:105002760
allocator_active:112590848
allocator_resident:125370368
total_system_memory:10278404096
total_system_memory_human:9.57G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:115343360
maxmemory_human:110.00M
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.07
allocator_frag_bytes:7588088
allocator_rss_ratio:1.11
allocator_rss_bytes:12779520
rss_overhead_ratio:1.03
rss_overhead_bytes:4128768
mem_fragmentation_ratio:1.24
mem_fragmentation_bytes:24703704
mem_not_counted_for_evict:2554
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:20496
mem_aof_buffer:2560
mem_allocator:jemalloc-5.1.0
active_defrag_running:65
lazyfree_pending_objects:0
lazyfreed_objects:0
___ Begin jemalloc statistics ___
Version: "5.1.0-0-g0"
Build-time option settings
config.cache_oblivious: true
config.debug: false
config.fill: true
config.lazy_lock: false
config.malloc_conf: ""
config.prof: false
config.prof_libgcc: false
config.prof_libunwind: false
config.stats: true
config.utrace: false
config.xmalloc: false
Run-time option settings
opt.abort: false
opt.abort_conf: false
opt.retain: true
opt.dss: "secondary"
opt.narenas: 128
opt.percpu_arena: "disabled"
opt.metadata_thp: "disabled"
opt.background_thread: false (background_thread: true)
opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
opt.muzzy_decay_ms: 10000 (arenas.muzzy_decay_ms: 10000)
opt.junk: "false"
opt.zero: false
opt.tcache: true
opt.lg_tcache_max: 15
opt.thp: "default"
opt.stats_print: false
opt.stats_print_opts: ""
Arenas: 128
Quantum size: 8
Page size: 65536
Maximum thread-cached size class: 229376
Number of bin size classes: 55
Number of thread-cache bin size classes: 55
Number of large size classes: 180
Allocated: 105002760, active: 112590848, metadata: 5235720 (n_thp 0), resident: 125370368, mapped: 153747456, retained: 97910784
Background threads: 2, num_runs: 24, run_interval: 2117401708 ns
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
background_thread 957 0 0 3 0 0 0
ctl 1903 0 0 1 0 0 0
prof 0 0 0 0 0 0 0
Merged arenas stats:
assigned threads: 2
uptime: 54190038681
dss allocation precedence: "N/A"
decaying: time npages sweeps madvises purged
dirty: N/A 116 20 122 913
muzzy: N/A 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 100480776 2419330 1063886 25458774
large: 4521984 42 40 42
total: 105002760 2419372 1063926 25458816
active: 112590848
mapped: 153747456
retained: 97910784
base: 5120000
internal: 115720
metadata_thp: 0
tcache_bytes: 50576
resident: 125370368
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 945 0 0 2 0 0 0
extent_avail 2100 0 0 16 0 0 0
extents_dirty 3341 0 0 45 0 0 0
extents_muzzy 1584 0 0 4 0 0 0
extents_retained 2331 0 0 20 0 0 0
decay_dirty 2654 0 0 49 0 0 0
decay_muzzy 2614 0 0 19 0 0 0
base 2776 0 0 5 0 0 0
tcache_list 947 0 0 3 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2584 500 177 1408304 323 1 8192 1 0.039 103 104 1 0 1235 0 0 106 0 0 0
16 1 10917232 1117097 434770 7551150 682327 167 4096 1 0.997 9462 3221 229 230 5873982 0 0 365514 0 0 0
24 2 8079480 542267 205622 5338655 336645 46 8192 3 0.893 4700 1847 57 81 2914135 0 0 159352 0 0 0
32 3 2080 196 131 5082751 65 1 2048 1 0.031 26 29 1 0 1001 0 0 2 0 0 0
40 4 560 100 86 49 14 1 8192 5 0.001 1 4 1 0 951 0 0 2 0 0 0
48 5 3168 159 93 2808644 66 1 4096 3 0.016 6 6 1 0 958 0 0 2 0 0 0
56 6 1064 617 598 29856 19 1 8192 7 0.002 516 520 1 0 1982 0 0 2 0 0 0
64 7 448 128 121 1374385 7 1 1024 1 0.006 10 14 1 0 970 0 0 2 0 0 0
80 8 320 154 150 112 4 1 4096 5 0.000 36 41 1 0 1023 0 0 2 0 0 0
96 9 9888 200 97 111 103 1 2048 3 0.050 5 5 1 0 956 0 0 2 0 0 0
112 10 112 100 99 6 1 1 4096 7 0.000 1 4 1 0 951 0 0 2 0 0 0
128 11 0 112 112 9 0 0 512 1 1 4 8 4 0 965 0 0 2 0 0 0
160 12 26531520 584861 419039 822919 165822 81 2048 5 0.999 4666 3673 226 123 1612366 0 0 246014 0 0 0
192 13 768 109 105 4 4 1 1024 3 0.003 3 6 1 0 955 0 0 2 0 0 0
224 14 224 100 99 2 1 1 2048 7 0.000 1 4 1 0 951 0 0 2 0 0 0
256 15 0 100 100 5 0 0 256 1 1 1 4 1 0 952 0 0 2 0 0 0
320 16 54405120 170326 310 170017 170016 167 1024 5 0.994 1785 72 167 0 1393499 0 0 192 0 0 0
384 17 384 173 172 700076 1 1 512 3 0.001 47 53 1 0 1046 0 0 2 0 0 0
448 18 0 121 121 10 0 0 1024 7 1 7 11 7 0 977 0 0 2 0 0 0
512 19 512 100 99 6 1 1 128 1 0.007 1 4 1 0 951 0 0 2 0 0 0
640 20 0 100 100 1 0 0 512 5 1 1 4 1 0 952 0 0 2 0 0 0
768 21 0 131 131 170031 0 0 256 3 1 19 25 1 0 991 0 0 2 0 0 0
896 22 0 144 144 68 0 0 512 7 1 32 37 32 0 1078 0 0 2 0 0 0
1024 23 1024 79 78 13 1 1 64 1 0.015 7 11 2 0 967 0 0 4 0 0 0
1280 24 1280 151 150 78 1 1 256 5 0.003 46 51 1 0 1043 0 0 2 0 0 0
1536 25 3072 117 115 7 2 1 128 3 0.015 7 11 2 0 966 0 0 2 0 0 0
1792 26 1792 128 127 31 1 1 256 7 0.003 20 24 20 0 1028 0 0 2 0 0 0
2048 27 10240 36 31 6 5 1 32 1 0.156 3 5 1 0 954 0 0 2 0 0 0
2560 28 10240 124 120 78 4 1 128 5 0.031 16 19 14 0 1007 0 0 2 0 0 0
3072 29 0 80 80 77 0 0 64 3 1 12 17 12 0 998 0 0 2 0 0 0
3584 30 0 103 103 2 0 0 128 7 1 2 7 2 0 958 0 0 2 0 0 0
4096 31 4096 39 38 32 1 1 16 1 0.062 20 22 20 0 1026 0 0 2 0 0 0
5120 32 0 0 0 0 0 0 64 5 1 0 0 0 0 945 0 0 2 0 0 0
---
6144 33 0 96 96 76 0 0 32 3 1 44 48 44 0 1125 0 0 2 0 0 0
7168 34 0 114 114 76 0 0 64 7 1 46 51 46 0 1134 0 0 2 0 0 0
8192 35 0 12 12 3 0 0 8 1 1 3 5 4 0 961 0 0 2 0 0 0
10240 36 10240 42 41 10 1 1 32 5 0.031 7 10 7 0 975 0 0 2 0 0 0
12288 37 0 0 0 0 0 0 16 3 1 0 0 0 0 945 0 0 2 0 0 0
---
14336 38 0 51 51 25 0 0 32 7 1 14 18 14 0 1005 0 0 2 0 0 0
16384 39 0 54 54 47 0 0 4 1 1 44 46 46 0 1127 0 0 2 0 0 0
20480 40 81920 21 17 7 4 1 16 5 0.250 3 5 1 0 954 0 0 2 0 0 0
24576 41 0 14 14 4 0 0 8 3 1 3 5 4 0 961 0 0 2 0 0 0
28672 42 0 0 0 0 0 0 16 7 1 0 0 0 0 945 0 0 2 0 0 0
---
32768 43 0 22 22 19 0 0 2 1 1 13 15 17 0 1007 0 0 2 0 0 0
40960 44 40960 20 19 963 1 1 8 5 0.125 11 13 2 1 972 0 0 2 0 0 0
49152 45 0 10 10 1 0 0 4 3 1 1 2 3 0 954 0 0 2 0 0 0
57344 46 114688 2 0 2 2 1 8 7 0.250 0 0 1 0 948 0 0 4 0 0 0
65536 47 0 11 11 2 0 0 1 1 1 2 4 11 0 973 0 0 2 0 0 0
81920 48 81920 21 20 11 1 1 4 5 0.250 11 13 3 1 974 0 0 2 0 0 0
98304 49 0 19 19 10 0 0 2 3 1 10 12 14 0 995 0 0 2 0 0 0
114688 50 0 10 10 1 0 0 4 7 1 1 3 3 0 955 0 0 2 0 0 0
131072 51 0 10 10 1 0 0 1 2 1 1 3 10 0 969 0 0 2 0 0 0
163840 52 163840 13 12 3 1 1 2 5 0.500 3 5 6 2 964 0 0 2 0 0 0
196608 53 0 17 17 8 0 0 1 3 1 8 10 17 0 997 0 0 2 0 0 0
229376 54 0 19 19 10 0 0 2 7 1 10 12 14 0 995 0 0 2 0 0 0
large: size ind allocated nmalloc ndalloc nrequests curlextents
262144 55 0 2 2 2 0
327680 56 327680 1 0 1 1
393216 57 0 2 2 2 0
458752 58 0 7 7 7 0
524288 59 0 11 11 11 0
655360 60 0 1 1 1 0
---
917504 62 0 2 2 2 0
1048576 63 0 8 8 8 0
1310720 64 0 2 2 2 0
1572864 65 0 1 1 1 0
---
2097152 67 0 1 1 1 0
2621440 68 0 3 3 3 0
---
4194304 71 4194304 1 0 1 1
---
arenas[0]:
assigned threads: 1
uptime: 54190038681
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 116 20 122 913
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 100480776 2419330 1063886 25458774
large: 4521984 42 40 42
total: 105002760 2419372 1063926 25458816
active: 112590848
mapped: 136970240
retained: 97910784
base: 5086744
internal: 115720
metadata_thp: 0
tcache_bytes: 49232
resident: 125304832
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 477 0 0 1 0 0 0
extent_avail 1632 0 0 15 0 0 0
extents_dirty 2873 0 0 44 0 0 0
extents_muzzy 1116 0 0 3 0 0 0
extents_retained 1863 0 0 19 0 0 0
decay_dirty 2184 0 0 47 0 0 0
decay_muzzy 2144 0 0 17 0 0 0
base 1839 0 0 3 0 0 0
tcache_list 478 0 0 1 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2584 500 177 1408304 323 1 8192 1 0.039 103 104 1 0 767 0 0 105 0 0 0
16 1 10917232 1117097 434770 7551150 682327 167 4096 1 0.997 9462 3221 229 230 5873514 0 0 365513 0 0 0
24 2 8079480 542267 205622 5338655 336645 46 8192 3 0.893 4700 1847 57 81 2913667 0 0 159351 0 0 0
32 3 2080 196 131 5082751 65 1 2048 1 0.031 26 29 1 0 533 0 0 1 0 0 0
40 4 560 100 86 49 14 1 8192 5 0.001 1 4 1 0 483 0 0 1 0 0 0
48 5 3168 159 93 2808644 66 1 4096 3 0.016 6 6 1 0 490 0 0 1 0 0 0
56 6 1064 617 598 29856 19 1 8192 7 0.002 516 520 1 0 1514 0 0 1 0 0 0
64 7 448 128 121 1374385 7 1 1024 1 0.006 10 14 1 0 502 0 0 1 0 0 0
80 8 320 154 150 112 4 1 4096 5 0.000 36 41 1 0 555 0 0 1 0 0 0
96 9 9888 200 97 111 103 1 2048 3 0.050 5 5 1 0 488 0 0 1 0 0 0
112 10 112 100 99 6 1 1 4096 7 0.000 1 4 1 0 483 0 0 1 0 0 0
128 11 0 112 112 9 0 0 512 1 1 4 8 4 0 497 0 0 1 0 0 0
160 12 26531520 584861 419039 822919 165822 81 2048 5 0.999 4666 3673 226 123 1611898 0 0 246013 0 0 0
192 13 768 109 105 4 4 1 1024 3 0.003 3 6 1 0 487 0 0 1 0 0 0
224 14 224 100 99 2 1 1 2048 7 0.000 1 4 1 0 483 0 0 1 0 0 0
256 15 0 100 100 5 0 0 256 1 1 1 4 1 0 484 0 0 1 0 0 0
320 16 54405120 170326 310 170017 170016 167 1024 5 0.994 1785 72 167 0 1393031 0 0 191 0 0 0
384 17 384 173 172 700076 1 1 512 3 0.001 47 53 1 0 578 0 0 1 0 0 0
448 18 0 121 121 10 0 0 1024 7 1 7 11 7 0 509 0 0 1 0 0 0
512 19 512 100 99 6 1 1 128 1 0.007 1 4 1 0 483 0 0 1 0 0 0
640 20 0 100 100 1 0 0 512 5 1 1 4 1 0 484 0 0 1 0 0 0
768 21 0 131 131 170031 0 0 256 3 1 19 25 1 0 523 0 0 1 0 0 0
896 22 0 144 144 68 0 0 512 7 1 32 37 32 0 610 0 0 1 0 0 0
1024 23 1024 79 78 13 1 1 64 1 0.015 7 11 2 0 499 0 0 3 0 0 0
1280 24 1280 151 150 78 1 1 256 5 0.003 46 51 1 0 575 0 0 1 0 0 0
1536 25 3072 117 115 7 2 1 128 3 0.015 7 11 2 0 498 0 0 1 0 0 0
1792 26 1792 128 127 31 1 1 256 7 0.003 20 24 20 0 560 0 0 1 0 0 0
2048 27 10240 36 31 6 5 1 32 1 0.156 3 5 1 0 486 0 0 1 0 0 0
2560 28 10240 124 120 78 4 1 128 5 0.031 16 19 14 0 539 0 0 1 0 0 0
3072 29 0 80 80 77 0 0 64 3 1 12 17 12 0 530 0 0 1 0 0 0
3584 30 0 103 103 2 0 0 128 7 1 2 7 2 0 490 0 0 1 0 0 0
4096 31 4096 39 38 32 1 1 16 1 0.062 20 22 20 0 558 0 0 1 0 0 0
5120 32 0 0 0 0 0 0 64 5 1 0 0 0 0 477 0 0 1 0 0 0
---
6144 33 0 96 96 76 0 0 32 3 1 44 48 44 0 657 0 0 1 0 0 0
7168 34 0 114 114 76 0 0 64 7 1 46 51 46 0 666 0 0 1 0 0 0
8192 35 0 12 12 3 0 0 8 1 1 3 5 4 0 493 0 0 1 0 0 0
10240 36 10240 42 41 10 1 1 32 5 0.031 7 10 7 0 507 0 0 1 0 0 0
12288 37 0 0 0 0 0 0 16 3 1 0 0 0 0 477 0 0 1 0 0 0
---
14336 38 0 51 51 25 0 0 32 7 1 14 18 14 0 537 0 0 1 0 0 0
16384 39 0 54 54 47 0 0 4 1 1 44 46 46 0 659 0 0 1 0 0 0
20480 40 81920 21 17 7 4 1 16 5 0.250 3 5 1 0 486 0 0 1 0 0 0
24576 41 0 14 14 4 0 0 8 3 1 3 5 4 0 493 0 0 1 0 0 0
28672 42 0 0 0 0 0 0 16 7 1 0 0 0 0 477 0 0 1 0 0 0
---
32768 43 0 22 22 19 0 0 2 1 1 13 15 17 0 539 0 0 1 0 0 0
40960 44 40960 20 19 963 1 1 8 5 0.125 11 13 2 1 504 0 0 1 0 0 0
49152 45 0 10 10 1 0 0 4 3 1 1 2 3 0 486 0 0 1 0 0 0
57344 46 114688 2 0 2 2 1 8 7 0.250 0 0 1 0 480 0 0 3 0 0 0
65536 47 0 11 11 2 0 0 1 1 1 2 4 11 0 505 0 0 1 0 0 0
81920 48 81920 21 20 11 1 1 4 5 0.250 11 13 3 1 506 0 0 1 0 0 0
98304 49 0 19 19 10 0 0 2 3 1 10 12 14 0 527 0 0 1 0 0 0
114688 50 0 10 10 1 0 0 4 7 1 1 3 3 0 487 0 0 1 0 0 0
131072 51 0 10 10 1 0 0 1 2 1 1 3 10 0 501 0 0 1 0 0 0
163840 52 163840 13 12 3 1 1 2 5 0.500 3 5 6 2 496 0 0 1 0 0 0
196608 53 0 17 17 8 0 0 1 3 1 8 10 17 0 529 0 0 1 0 0 0
229376 54 0 19 19 10 0 0 2 7 1 10 12 14 0 527 0 0 1 0 0 0
large: size ind allocated nmalloc ndalloc nrequests curlextents
262144 55 0 2 2 2 0
327680 56 327680 1 0 1 1
393216 57 0 2 2 2 0
458752 58 0 7 7 7 0
524288 59 0 11 11 11 0
655360 60 0 1 1 1 0
---
917504 62 0 2 2 2 0
1048576 63 0 8 8 8 0
1310720 64 0 2 2 2 0
1572864 65 0 1 1 1 0
---
2097152 67 0 1 1 1 0
2621440 68 0 3 3 3 0
---
4194304 71 4194304 1 0 1 1
---
arenas[1]:
assigned threads: 1
uptime: 53370038271
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 0 0 0 0
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 0 0 0 0
large: 0 0 0 0
total: 0 0 0 0
active: 0
mapped: 16777216
retained: 0
base: 33256
internal: 0
metadata_thp: 0
tcache_bytes: 1344
resident: 65536
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 468 0 0 1 0 0 0
extent_avail 468 0 0 1 0 0 0
extents_dirty 468 0 0 1 0 0 0
extents_muzzy 468 0 0 1 0 0 0
extents_retained 468 0 0 1 0 0 0
decay_dirty 470 0 0 2 0 0 0
decay_muzzy 470 0 0 2 0 0 0
base 937 0 0 2 0 0 0
tcache_list 469 0 0 2 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 0 0 0 0 0 0 8192 1 1 0 0 0 0 468 0 0 1 0 0 0
16 1 0 0 0 0 0 0 4096 1 1 0 0 0 0 468 0 0 1 0 0 0
24 2 0 0 0 0 0 0 8192 3 1 0 0 0 0 468 0 0 1 0 0 0
32 3 0 0 0 0 0 0 2048 1 1 0 0 0 0 468 0 0 1 0 0 0
40 4 0 0 0 0 0 0 8192 5 1 0 0 0 0 468 0 0 1 0 0 0
48 5 0 0 0 0 0 0 4096 3 1 0 0 0 0 468 0 0 1 0 0 0
56 6 0 0 0 0 0 0 8192 7 1 0 0 0 0 468 0 0 1 0 0 0
64 7 0 0 0 0 0 0 1024 1 1 0 0 0 0 468 0 0 1 0 0 0
80 8 0 0 0 0 0 0 4096 5 1 0 0 0 0 468 0 0 1 0 0 0
96 9 0 0 0 0 0 0 2048 3 1 0 0 0 0 468 0 0 1 0 0 0
112 10 0 0 0 0 0 0 4096 7 1 0 0 0 0 468 0 0 1 0 0 0
128 11 0 0 0 0 0 0 512 1 1 0 0 0 0 468 0 0 1 0 0 0
160 12 0 0 0 0 0 0 2048 5 1 0 0 0 0 468 0 0 1 0 0 0
192 13 0 0 0 0 0 0 1024 3 1 0 0 0 0 468 0 0 1 0 0 0
224 14 0 0 0 0 0 0 2048 7 1 0 0 0 0 468 0 0 1 0 0 0
256 15 0 0 0 0 0 0 256 1 1 0 0 0 0 468 0 0 1 0 0 0
320 16 0 0 0 0 0 0 1024 5 1 0 0 0 0 468 0 0 1 0 0 0
384 17 0 0 0 0 0 0 512 3 1 0 0 0 0 468 0 0 1 0 0 0
448 18 0 0 0 0 0 0 1024 7 1 0 0 0 0 468 0 0 1 0 0 0
512 19 0 0 0 0 0 0 128 1 1 0 0 0 0 468 0 0 1 0 0 0
640 20 0 0 0 0 0 0 512 5 1 0 0 0 0 468 0 0 1 0 0 0
768 21 0 0 0 0 0 0 256 3 1 0 0 0 0 468 0 0 1 0 0 0
896 22 0 0 0 0 0 0 512 7 1 0 0 0 0 468 0 0 1 0 0 0
1024 23 0 0 0 0 0 0 64 1 1 0 0 0 0 468 0 0 1 0 0 0
1280 24 0 0 0 0 0 0 256 5 1 0 0 0 0 468 0 0 1 0 0 0
1536 25 0 0 0 0 0 0 128 3 1 0 0 0 0 468 0 0 1 0 0 0
1792 26 0 0 0 0 0 0 256 7 1 0 0 0 0 468 0 0 1 0 0 0
2048 27 0 0 0 0 0 0 32 1 1 0 0 0 0 468 0 0 1 0 0 0
2560 28 0 0 0 0 0 0 128 5 1 0 0 0 0 468 0 0 1 0 0 0
3072 29 0 0 0 0 0 0 64 3 1 0 0 0 0 468 0 0 1 0 0 0
3584 30 0 0 0 0 0 0 128 7 1 0 0 0 0 468 0 0 1 0 0 0
4096 31 0 0 0 0 0 0 16 1 1 0 0 0 0 468 0 0 1 0 0 0
5120 32 0 0 0 0 0 0 64 5 1 0 0 0 0 468 0 0 1 0 0 0
6144 33 0 0 0 0 0 0 32 3 1 0 0 0 0 468 0 0 1 0 0 0
7168 34 0 0 0 0 0 0 64 7 1 0 0 0 0 468 0 0 1 0 0 0
8192 35 0 0 0 0 0 0 8 1 1 0 0 0 0 468 0 0 1 0 0 0
10240 36 0 0 0 0 0 0 32 5 1 0 0 0 0 468 0 0 1 0 0 0
12288 37 0 0 0 0 0 0 16 3 1 0 0 0 0 468 0 0 1 0 0 0
14336 38 0 0 0 0 0 0 32 7 1 0 0 0 0 468 0 0 1 0 0 0
16384 39 0 0 0 0 0 0 4 1 1 0 0 0 0 468 0 0 1 0 0 0
20480 40 0 0 0 0 0 0 16 5 1 0 0 0 0 468 0 0 1 0 0 0
24576 41 0 0 0 0 0 0 8 3 1 0 0 0 0 468 0 0 1 0 0 0
28672 42 0 0 0 0 0 0 16 7 1 0 0 0 0 468 0 0 1 0 0 0
32768 43 0 0 0 0 0 0 2 1 1 0 0 0 0 468 0 0 1 0 0 0
40960 44 0 0 0 0 0 0 8 5 1 0 0 0 0 468 0 0 1 0 0 0
49152 45 0 0 0 0 0 0 4 3 1 0 0 0 0 468 0 0 1 0 0 0
57344 46 0 0 0 0 0 0 8 7 1 0 0 0 0 468 0 0 1 0 0 0
65536 47 0 0 0 0 0 0 1 1 1 0 0 0 0 468 0 0 1 0 0 0
81920 48 0 0 0 0 0 0 4 5 1 0 0 0 0 468 0 0 1 0 0 0
98304 49 0 0 0 0 0 0 2 3 1 0 0 0 0 468 0 0 1 0 0 0
114688 50 0 0 0 0 0 0 4 7 1 0 0 0 0 468 0 0 1 0 0 0
131072 51 0 0 0 0 0 0 1 2 1 0 0 0 0 468 0 0 1 0 0 0
163840 52 0 0 0 0 0 0 2 5 1 0 0 0 0 468 0 0 1 0 0 0
196608 53 0 0 0 0 0 0 1 3 1 0 0 0 0 468 0 0 1 0 0 0
229376 54 0 0 0 0 0 0 2 7 1 0 0 0 0 468 0 0 1 0 0 0
---
large: size ind allocated nmalloc ndalloc nrequests curlextents
---
--- End jemalloc statistics ---
[err]: Active defrag in tests/unit/memefficiency.tcl
defrag didn't stop.
# Memory
used_memory:60265528
used_memory_human:57.47M
used_memory_rss:79101952
used_memory_rss_human:75.44M
used_memory_peak:112304240
used_memory_peak_human:107.10M
used_memory_peak_perc:53.66%
used_memory_overhead:15072280
used_memory_startup:836576
used_memory_dataset:45193248
used_memory_dataset_perc:76.05%
allocator_allocated:60398712
allocator_active:68354048
allocator_resident:73531392
total_system_memory:10278404096
total_system_memory_human:9.57G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.13
allocator_frag_bytes:7955336
allocator_rss_ratio:1.08
allocator_rss_bytes:5177344
rss_overhead_ratio:1.08
rss_overhead_bytes:5570560
mem_fragmentation_ratio:1.31
mem_fragmentation_bytes:18877448
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:41000
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:65
lazyfree_pending_objects:0
lazyfreed_objects:0
___ Begin jemalloc statistics ___
Version: "5.1.0-0-g0"
Build-time option settings
config.cache_oblivious: true
config.debug: false
config.fill: true
config.lazy_lock: false
config.malloc_conf: ""
config.prof: false
config.prof_libgcc: false
config.prof_libunwind: false
config.stats: true
config.utrace: false
config.xmalloc: false
Run-time option settings
opt.abort: false
opt.abort_conf: false
opt.retain: true
opt.dss: "secondary"
opt.narenas: 128
opt.percpu_arena: "disabled"
opt.metadata_thp: "disabled"
opt.background_thread: false (background_thread: true)
opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
opt.muzzy_decay_ms: 10000 (arenas.muzzy_decay_ms: 10000)
opt.junk: "false"
opt.zero: false
opt.tcache: true
opt.lg_tcache_max: 15
opt.thp: "default"
opt.stats_print: false
opt.stats_print_opts: ""
Arenas: 128
Quantum size: 8
Page size: 65536
Maximum thread-cached size class: 229376
Number of bin size classes: 55
Number of thread-cache bin size classes: 55
Number of large size classes: 180
Allocated: 60398808, active: 68354048, metadata: 5235720 (n_thp 0), resident: 73531392, mapped: 101908480, retained: 149749760
Background threads: 2, num_runs: 29, run_interval: 3682335482 ns
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
background_thread 2480 0 0 3 0 0 0
ctl 20468 0 0 1 0 0 0
prof 0 0 0 0 0 0 0
Merged arenas stats:
assigned threads: 2
uptime: 140340142061
dss allocation precedence: "N/A"
decaying: time npages sweeps madvises purged
dirty: N/A 0 24 347 2549
muzzy: N/A 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 55876824 5205126 4102815 39918602
large: 4521984 47 45 47
total: 60398808 5205173 4102860 39918649
active: 68354048
mapped: 101908480
retained: 149749760
base: 5120000
internal: 115720
metadata_thp: 0
tcache_bytes: 17552
resident: 73531392
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 2467 0 0 2 0 0 0
extent_avail 4375 0 0 22 0 0 0
extents_dirty 6631 0 0 51 0 0 0
extents_muzzy 3335 0 0 4 0 0 0
extents_retained 4685 0 0 26 0 0 0
decay_dirty 8013 0 0 59 0 0 0
decay_muzzy 7970 0 0 29 0 0 0
base 5820 0 0 5 0 0 0
tcache_list 2469 0 0 3 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2082640 682475 422145 2146924 260330 33 8192 1 0.962 5449 3003 63 33 10233790 0 0 342978 0 0 0
16 1 4203968 1792518 1529770 11690717 262748 66 4096 1 0.971 14594 12929 310 339 15820343 0 0 711292 0 0 0
24 2 7507824 1275625 962799 8357095 312826 47 8192 3 0.812 10540 8087 84 158 15129647 0 0 502674 0 0 0
32 3 66464 2938 861 9268879 2077 2 2048 1 0.507 163 145 2 4 79087 0 0 358 0 0 0
40 4 304080 10436 2834 59650 7602 1 8192 5 0.927 169 88 1 0 291073 0 0 300 0 0 0
48 5 5232 214 105 4411592 109 1 4096 3 0.026 34 14 1 0 2516 0 0 2 0 0 0
56 6 102984 3380 1541 55488 1839 1 8192 7 0.224 636 629 1 0 72775 0 0 210 0 0 0
64 7 448 137 130 1376419 7 1 1024 1 0.006 19 23 1 0 2510 0 0 2 0 0 0
80 8 36640 1034 576 4576 458 1 4096 5 0.111 139 132 1 0 19991 0 0 88 0 0 0
96 9 192864 2936 927 4233 2009 1 2048 3 0.980 117 115 1 0 75052 0 0 354 0 0 0
112 10 250544 2799 562 2249 2237 1 4096 7 0.546 111 114 1 0 87661 0 0 176 0 0 0
128 11 1280 147 137 30 10 1 512 1 0.019 17 22 5 0 2895 0 0 88 0 0 0
160 12 40000480 1257317 1007314 1495331 250003 123 2048 5 0.992 9912 8222 470 252 11470630 0 0 591420 0 0 0
192 13 384 129 127 169 2 1 1024 3 0.001 11 15 1 0 2494 0 0 2 0 0 0
224 14 224 112 111 9 1 1 2048 7 0.000 6 11 1 0 2485 0 0 2 0 0 0
256 15 0 119 119 16 0 0 256 1 1 9 14 9 0 2508 0 0 2 0 0 0
320 16 5120 170330 170314 170024 16 1 1024 5 0.015 1789 1845 167 6 1398690 0 0 194 0 0 0
384 17 384 180 179 700083 1 1 512 3 0.001 54 59 1 0 2581 0 0 2 0 0 0
448 18 0 134 134 18 0 0 1024 7 1 14 19 14 0 2528 0 0 2 0 0 0
512 19 512 113 112 17 1 1 128 1 0.007 9 14 1 0 2491 0 0 2 0 0 0
640 20 640 116 115 9 1 1 512 5 0.001 8 13 3 0 2531 0 0 78 0 0 0
768 21 0 138 138 170038 0 0 256 3 1 26 31 7 0 2538 0 0 2 0 0 0
896 22 0 152 152 76 0 0 512 7 1 40 44 39 0 2629 0 0 2 0 0 0
1024 23 1024 90 89 24 1 1 64 1 0.015 16 21 2 0 2508 0 0 4 0 0 0
1280 24 1280 158 157 85 1 1 256 5 0.003 53 57 1 0 2578 0 0 2 0 0 0
1536 25 4608 131 128 16 3 1 128 3 0.023 15 19 2 0 2542 0 0 78 0 0 0
1792 26 1792 136 135 193 1 1 256 7 0.003 28 31 27 0 2579 0 0 2 0 0 0
2048 27 10240 47 42 173 5 1 32 1 0.156 13 16 1 0 2535 0 0 78 0 0 0
2560 28 17920 139 132 241 7 1 128 5 0.054 23 25 14 0 2656 0 0 86 0 0 0
3072 29 0 82 82 79 0 0 64 3 1 14 18 13 0 2525 0 0 2 0 0 0
3584 30 0 104 104 3 0 0 128 7 1 3 8 3 0 2484 0 0 2 0 0 0
4096 31 4096 47 46 193 1 1 16 1 0.062 26 29 24 0 2569 0 0 2 0 0 0
5120 32 0 68 68 2 0 0 64 5 1 2 6 2 0 2479 0 0 2 0 0 0
6144 33 0 97 97 77 0 0 32 3 1 45 49 45 0 2651 0 0 2 0 0 0
7168 34 0 115 115 77 0 0 64 7 1 47 52 47 0 2660 0 0 2 0 0 0
8192 35 0 17 17 8 0 0 8 1 1 8 9 7 0 2498 0 0 2 0 0 0
10240 36 0 43 43 166 0 0 32 5 1 8 12 8 0 2503 0 0 2 0 0 0
12288 37 0 18 18 2 0 0 16 3 1 2 5 2 0 2478 0 0 2 0 0 0
14336 38 0 51 51 25 0 0 32 7 1 14 18 14 0 2527 0 0 2 0 0 0
16384 39 0 58 58 51 0 0 4 1 1 48 49 48 0 2660 0 0 2 0 0 0
20480 40 81920 24 20 12 4 1 16 5 0.250 5 7 1 0 2480 0 0 2 0 0 0
24576 41 0 14 14 4 0 0 8 3 1 3 5 4 0 2483 0 0 2 0 0 0
28672 42 0 16 16 1 0 0 16 7 1 1 3 1 0 2473 0 0 2 0 0 0
32768 43 0 26 26 23 0 0 2 1 1 17 18 20 1 2542 0 0 2 0 0 0
40960 44 40960 22 21 3446 1 1 8 5 0.125 13 15 2 1 2498 0 0 2 0 0 0
49152 45 0 10 10 1 0 0 4 3 1 1 2 3 0 2476 0 0 2 0 0 0
57344 46 114688 2 0 2 2 1 8 7 0.250 0 0 1 0 2470 0 0 4 0 0 0
65536 47 196608 18 15 8 3 3 1 1 1 8 7 18 0 2629 0 0 182 0 0 0
81920 48 81920 21 20 11 1 1 4 5 0.250 11 13 3 1 2496 0 0 2 0 0 0
98304 49 0 19 19 10 0 0 2 3 1 10 12 14 0 2517 0 0 2 0 0 0
114688 50 0 10 10 1 0 0 4 7 1 1 3 3 0 2477 0 0 2 0 0 0
131072 51 393216 15 12 5 3 3 1 2 1 5 5 15 0 2618 0 0 182 0 0 0
163840 52 163840 13 12 3 1 1 2 5 0.500 3 5 6 2 2486 0 0 2 0 0 0
196608 53 0 17 17 8 0 0 1 3 1 8 10 17 0 2519 0 0 2 0 0 0
229376 54 0 19 19 10 0 0 2 7 1 10 12 14 0 2517 0 0 2 0 0 0
large: size ind allocated nmalloc ndalloc nrequests curlextents
262144 55 0 3 3 3 0
327680 56 327680 1 0 1 1
393216 57 0 2 2 2 0
458752 58 0 7 7 7 0
524288 59 0 12 12 12 0
655360 60 0 1 1 1 0
---
917504 62 0 2 2 2 0
1048576 63 0 9 9 9 0
1310720 64 0 2 2 2 0
1572864 65 0 1 1 1 0
---
2097152 67 0 2 2 2 0
2621440 68 0 3 3 3 0
---
4194304 71 4194304 2 1 2 1
---
arenas[0]:
assigned threads: 1
uptime: 140340142061
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 0 24 347 2549
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 55876824 5205126 4102815 39918602
large: 4521984 47 45 47
total: 60398808 5205173 4102860 39918649
active: 68354048
mapped: 85131264
retained: 149749760
base: 5086744
internal: 115720
metadata_thp: 0
tcache_bytes: 16208
resident: 73465856
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 1238 0 0 1 0 0 0
extent_avail 3146 0 0 21 0 0 0
extents_dirty 5402 0 0 50 0 0 0
extents_muzzy 2106 0 0 3 0 0 0
extents_retained 3456 0 0 25 0 0 0
decay_dirty 6780 0 0 57 0 0 0
decay_muzzy 6737 0 0 27 0 0 0
base 3361 0 0 3 0 0 0
tcache_list 1239 0 0 1 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2082640 682475 422145 2146924 260330 33 8192 1 0.962 5449 3003 63 33 10232561 0 0 342977 0 0 0
16 1 4203968 1792518 1529770 11690717 262748 66 4096 1 0.971 14594 12929 310 339 15819114 0 0 711291 0 0 0
24 2 7507824 1275625 962799 8357095 312826 47 8192 3 0.812 10540 8087 84 158 15128418 0 0 502673 0 0 0
32 3 66464 2938 861 9268879 2077 2 2048 1 0.507 163 145 2 4 77858 0 0 357 0 0 0
40 4 304080 10436 2834 59650 7602 1 8192 5 0.927 169 88 1 0 289844 0 0 299 0 0 0
48 5 5232 214 105 4411592 109 1 4096 3 0.026 34 14 1 0 1287 0 0 1 0 0 0
56 6 102984 3380 1541 55488 1839 1 8192 7 0.224 636 629 1 0 71546 0 0 209 0 0 0
64 7 448 137 130 1376419 7 1 1024 1 0.006 19 23 1 0 1281 0 0 1 0 0 0
80 8 36640 1034 576 4576 458 1 4096 5 0.111 139 132 1 0 18762 0 0 87 0 0 0
96 9 192864 2936 927 4233 2009 1 2048 3 0.980 117 115 1 0 73823 0 0 353 0 0 0
112 10 250544 2799 562 2249 2237 1 4096 7 0.546 111 114 1 0 86432 0 0 175 0 0 0
128 11 1280 147 137 30 10 1 512 1 0.019 17 22 5 0 1666 0 0 87 0 0 0
160 12 40000480 1257317 1007314 1495331 250003 123 2048 5 0.992 9912 8222 470 252 11469401 0 0 591419 0 0 0
192 13 384 129 127 169 2 1 1024 3 0.001 11 15 1 0 1265 0 0 1 0 0 0
224 14 224 112 111 9 1 1 2048 7 0.000 6 11 1 0 1256 0 0 1 0 0 0
256 15 0 119 119 16 0 0 256 1 1 9 14 9 0 1279 0 0 1 0 0 0
320 16 5120 170330 170314 170024 16 1 1024 5 0.015 1789 1845 167 6 1397461 0 0 193 0 0 0
384 17 384 180 179 700083 1 1 512 3 0.001 54 59 1 0 1352 0 0 1 0 0 0
448 18 0 134 134 18 0 0 1024 7 1 14 19 14 0 1299 0 0 1 0 0 0
512 19 512 113 112 17 1 1 128 1 0.007 9 14 1 0 1262 0 0 1 0 0 0
640 20 640 116 115 9 1 1 512 5 0.001 8 13 3 0 1302 0 0 77 0 0 0
768 21 0 138 138 170038 0 0 256 3 1 26 31 7 0 1309 0 0 1 0 0 0
896 22 0 152 152 76 0 0 512 7 1 40 44 39 0 1400 0 0 1 0 0 0
1024 23 1024 90 89 24 1 1 64 1 0.015 16 21 2 0 1279 0 0 3 0 0 0
1280 24 1280 158 157 85 1 1 256 5 0.003 53 57 1 0 1349 0 0 1 0 0 0
1536 25 4608 131 128 16 3 1 128 3 0.023 15 19 2 0 1313 0 0 77 0 0 0
1792 26 1792 136 135 193 1 1 256 7 0.003 28 31 27 0 1350 0 0 1 0 0 0
2048 27 10240 47 42 173 5 1 32 1 0.156 13 16 1 0 1306 0 0 77 0 0 0
2560 28 17920 139 132 241 7 1 128 5 0.054 23 25 14 0 1427 0 0 85 0 0 0
3072 29 0 82 82 79 0 0 64 3 1 14 18 13 0 1296 0 0 1 0 0 0
3584 30 0 104 104 3 0 0 128 7 1 3 8 3 0 1255 0 0 1 0 0 0
4096 31 4096 47 46 193 1 1 16 1 0.062 26 29 24 0 1340 0 0 1 0 0 0
5120 32 0 68 68 2 0 0 64 5 1 2 6 2 0 1250 0 0 1 0 0 0
6144 33 0 97 97 77 0 0 32 3 1 45 49 45 0 1422 0 0 1 0 0 0
7168 34 0 115 115 77 0 0 64 7 1 47 52 47 0 1431 0 0 1 0 0 0
8192 35 0 17 17 8 0 0 8 1 1 8 9 7 0 1269 0 0 1 0 0 0
10240 36 0 43 43 166 0 0 32 5 1 8 12 8 0 1274 0 0 1 0 0 0
12288 37 0 18 18 2 0 0 16 3 1 2 5 2 0 1249 0 0 1 0 0 0
14336 38 0 51 51 25 0 0 32 7 1 14 18 14 0 1298 0 0 1 0 0 0
16384 39 0 58 58 51 0 0 4 1 1 48 49 48 0 1431 0 0 1 0 0 0
20480 40 81920 24 20 12 4 1 16 5 0.250 5 7 1 0 1251 0 0 1 0 0 0
24576 41 0 14 14 4 0 0 8 3 1 3 5 4 0 1254 0 0 1 0 0 0
28672 42 0 16 16 1 0 0 16 7 1 1 3 1 0 1244 0 0 1 0 0 0
32768 43 0 26 26 23 0 0 2 1 1 17 18 20 1 1313 0 0 1 0 0 0
40960 44 40960 22 21 3446 1 1 8 5 0.125 13 15 2 1 1269 0 0 1 0 0 0
49152 45 0 10 10 1 0 0 4 3 1 1 2 3 0 1247 0 0 1 0 0 0
57344 46 114688 2 0 2 2 1 8 7 0.250 0 0 1 0 1241 0 0 3 0 0 0
65536 47 196608 18 15 8 3 3 1 1 1 8 7 18 0 1400 0 0 181 0 0 0
81920 48 81920 21 20 11 1 1 4 5 0.250 11 13 3 1 1267 0 0 1 0 0 0
98304 49 0 19 19 10 0 0 2 3 1 10 12 14 0 1288 0 0 1 0 0 0
114688 50 0 10 10 1 0 0 4 7 1 1 3 3 0 1248 0 0 1 0 0 0
131072 51 393216 15 12 5 3 3 1 2 1 5 5 15 0 1389 0 0 181 0 0 0
163840 52 163840 13 12 3 1 1 2 5 0.500 3 5 6 2 1257 0 0 1 0 0 0
196608 53 0 17 17 8 0 0 1 3 1 8 10 17 0 1290 0 0 1 0 0 0
229376 54 0 19 19 10 0 0 2 7 1 10 12 14 0 1288 0 0 1 0 0 0
large: size ind allocated nmalloc ndalloc nrequests curlextents
262144 55 0 3 3 3 0
327680 56 327680 1 0 1 1
393216 57 0 2 2 2 0
458752 58 0 7 7 7 0
524288 59 0 12 12 12 0
655360 60 0 1 1 1 0
---
917504 62 0 2 2 2 0
1048576 63 0 9 9 9 0
1310720 64 0 2 2 2 0
1572864 65 0 1 1 1 0
---
2097152 67 0 2 2 2 0
2621440 68 0 3 3 3 0
---
4194304 71 4194304 2 1 2 1
---
arenas[1]:
assigned threads: 1
uptime: 139520141651
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 0 0 0 0
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 0 0 0 0
large: 0 0 0 0
total: 0 0 0 0
active: 0
mapped: 16777216
retained: 0
base: 33256
internal: 0
metadata_thp: 0
tcache_bytes: 1344
resident: 65536
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 1229 0 0 1 0 0 0
extent_avail 1229 0 0 1 0 0 0
extents_dirty 1229 0 0 1 0 0 0
extents_muzzy 1229 0 0 1 0 0 0
extents_retained 1229 0 0 1 0 0 0
decay_dirty 1233 0 0 2 0 0 0
decay_muzzy 1233 0 0 2 0 0 0
base 2459 0 0 2 0 0 0
tcache_list 1230 0 0 2 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 0 0 0 0 0 0 8192 1 1 0 0 0 0 1229 0 0 1 0 0 0
16 1 0 0 0 0 0 0 4096 1 1 0 0 0 0 1229 0 0 1 0 0 0
24 2 0 0 0 0 0 0 8192 3 1 0 0 0 0 1229 0 0 1 0 0 0
32 3 0 0 0 0 0 0 2048 1 1 0 0 0 0 1229 0 0 1 0 0 0
40 4 0 0 0 0 0 0 8192 5 1 0 0 0 0 1229 0 0 1 0 0 0
48 5 0 0 0 0 0 0 4096 3 1 0 0 0 0 1229 0 0 1 0 0 0
56 6 0 0 0 0 0 0 8192 7 1 0 0 0 0 1229 0 0 1 0 0 0
64 7 0 0 0 0 0 0 1024 1 1 0 0 0 0 1229 0 0 1 0 0 0
80 8 0 0 0 0 0 0 4096 5 1 0 0 0 0 1229 0 0 1 0 0 0
96 9 0 0 0 0 0 0 2048 3 1 0 0 0 0 1229 0 0 1 0 0 0
112 10 0 0 0 0 0 0 4096 7 1 0 0 0 0 1229 0 0 1 0 0 0
128 11 0 0 0 0 0 0 512 1 1 0 0 0 0 1229 0 0 1 0 0 0
160 12 0 0 0 0 0 0 2048 5 1 0 0 0 0 1229 0 0 1 0 0 0
192 13 0 0 0 0 0 0 1024 3 1 0 0 0 0 1229 0 0 1 0 0 0
224 14 0 0 0 0 0 0 2048 7 1 0 0 0 0 1229 0 0 1 0 0 0
256 15 0 0 0 0 0 0 256 1 1 0 0 0 0 1229 0 0 1 0 0 0
320 16 0 0 0 0 0 0 1024 5 1 0 0 0 0 1229 0 0 1 0 0 0
384 17 0 0 0 0 0 0 512 3 1 0 0 0 0 1229 0 0 1 0 0 0
448 18 0 0 0 0 0 0 1024 7 1 0 0 0 0 1229 0 0 1 0 0 0
512 19 0 0 0 0 0 0 128 1 1 0 0 0 0 1229 0 0 1 0 0 0
640 20 0 0 0 0 0 0 512 5 1 0 0 0 0 1229 0 0 1 0 0 0
768 21 0 0 0 0 0 0 256 3 1 0 0 0 0 1229 0 0 1 0 0 0
896 22 0 0 0 0 0 0 512 7 1 0 0 0 0 1229 0 0 1 0 0 0
1024 23 0 0 0 0 0 0 64 1 1 0 0 0 0 1229 0 0 1 0 0 0
1280 24 0 0 0 0 0 0 256 5 1 0 0 0 0 1229 0 0 1 0 0 0
1536 25 0 0 0 0 0 0 128 3 1 0 0 0 0 1229 0 0 1 0 0 0
1792 26 0 0 0 0 0 0 256 7 1 0 0 0 0 1229 0 0 1 0 0 0
2048 27 0 0 0 0 0 0 32 1 1 0 0 0 0 1229 0 0 1 0 0 0
2560 28 0 0 0 0 0 0 128 5 1 0 0 0 0 1229 0 0 1 0 0 0
3072 29 0 0 0 0 0 0 64 3 1 0 0 0 0 1229 0 0 1 0 0 0
3584 30 0 0 0 0 0 0 128 7 1 0 0 0 0 1229 0 0 1 0 0 0
4096 31 0 0 0 0 0 0 16 1 1 0 0 0 0 1229 0 0 1 0 0 0
5120 32 0 0 0 0 0 0 64 5 1 0 0 0 0 1229 0 0 1 0 0 0
6144 33 0 0 0 0 0 0 32 3 1 0 0 0 0 1229 0 0 1 0 0 0
7168 34 0 0 0 0 0 0 64 7 1 0 0 0 0 1229 0 0 1 0 0 0
8192 35 0 0 0 0 0 0 8 1 1 0 0 0 0 1229 0 0 1 0 0 0
10240 36 0 0 0 0 0 0 32 5 1 0 0 0 0 1229 0 0 1 0 0 0
12288 37 0 0 0 0 0 0 16 3 1 0 0 0 0 1229 0 0 1 0 0 0
14336 38 0 0 0 0 0 0 32 7 1 0 0 0 0 1229 0 0 1 0 0 0
16384 39 0 0 0 0 0 0 4 1 1 0 0 0 0 1229 0 0 1 0 0 0
20480 40 0 0 0 0 0 0 16 5 1 0 0 0 0 1229 0 0 1 0 0 0
24576 41 0 0 0 0 0 0 8 3 1 0 0 0 0 1229 0 0 1 0 0 0
28672 42 0 0 0 0 0 0 16 7 1 0 0 0 0 1229 0 0 1 0 0 0
32768 43 0 0 0 0 0 0 2 1 1 0 0 0 0 1229 0 0 1 0 0 0
40960 44 0 0 0 0 0 0 8 5 1 0 0 0 0 1229 0 0 1 0 0 0
49152 45 0 0 0 0 0 0 4 3 1 0 0 0 0 1229 0 0 1 0 0 0
57344 46 0 0 0 0 0 0 8 7 1 0 0 0 0 1229 0 0 1 0 0 0
65536 47 0 0 0 0 0 0 1 1 1 0 0 0 0 1229 0 0 1 0 0 0
81920 48 0 0 0 0 0 0 4 5 1 0 0 0 0 1229 0 0 1 0 0 0
98304 49 0 0 0 0 0 0 2 3 1 0 0 0 0 1229 0 0 1 0 0 0
114688 50 0 0 0 0 0 0 4 7 1 0 0 0 0 1229 0 0 1 0 0 0
131072 51 0 0 0 0 0 0 1 2 1 0 0 0 0 1229 0 0 1 0 0 0
163840 52 0 0 0 0 0 0 2 5 1 0 0 0 0 1229 0 0 1 0 0 0
196608 53 0 0 0 0 0 0 1 3 1 0 0 0 0 1229 0 0 1 0 0 0
229376 54 0 0 0 0 0 0 2 7 1 0 0 0 0 1229 0 0 1 0 0 0
---
large: size ind allocated nmalloc ndalloc nrequests curlextents
---
--- End jemalloc statistics ---
[[0;31;49merr[0m]: Active defrag big keys in tests/unit/memefficiency.tcl
defrag didn't stop.
# Memory
used_memory:68157408
used_memory_human:65.00M
used_memory_rss:87883776
used_memory_rss_human:83.81M
used_memory_peak:137789504
used_memory_peak_human:131.41M
used_memory_peak_perc:49.46%
used_memory_overhead:898160
used_memory_startup:836576
used_memory_dataset:67259248
used_memory_dataset_perc:99.91%
allocator_allocated:68285144
allocator_active:80347136
allocator_resident:85524480
total_system_memory:10278404096
total_system_memory_human:9.57G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.18
allocator_frag_bytes:12061992
allocator_rss_ratio:1.06
allocator_rss_bytes:5177344
rss_overhead_ratio:1.03
rss_overhead_bytes:2359296
mem_fragmentation_ratio:1.29
mem_fragmentation_bytes:19767440
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:61512
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:66
lazyfree_pending_objects:0
lazyfreed_objects:0
# Stats
total_connections_received:1
total_commands_processed:1000531
instantaneous_ops_per_sec:8
total_net_input_bytes:137008032
total_net_output_bytes:10842787
instantaneous_input_kbps:0.13
instantaneous_output_kbps:36.59
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:3
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
total_forks:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:99786
active_defrag_misses:31964068
active_defrag_key_hits:14
active_defrag_key_misses:1754
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
dump_payload_sanitizations:0
total_reads_processed:302622
total_writes_processed:300383
io_threaded_reads_processed:0
io_threaded_writes_processed:0
___ Begin jemalloc statistics ___
Version: "5.1.0-0-g0"
Build-time option settings
config.cache_oblivious: true
config.debug: false
config.fill: true
config.lazy_lock: false
config.malloc_conf: ""
config.prof: false
config.prof_libgcc: false
config.prof_libunwind: false
config.stats: true
config.utrace: false
config.xmalloc: false
Run-time option settings
opt.abort: false
opt.abort_conf: false
opt.retain: true
opt.dss: "secondary"
opt.narenas: 128
opt.percpu_arena: "disabled"
opt.metadata_thp: "disabled"
opt.background_thread: false (background_thread: true)
opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
opt.muzzy_decay_ms: 10000 (arenas.muzzy_decay_ms: 10000)
opt.junk: "false"
opt.zero: false
opt.tcache: true
opt.lg_tcache_max: 15
opt.thp: "default"
opt.stats_print: false
opt.stats_print_opts: ""
Arenas: 128
Quantum size: 8
Page size: 65536
Maximum thread-cached size class: 229376
Number of bin size classes: 55
Number of thread-cache bin size classes: 55
Number of large size classes: 180
Allocated: 68297032, active: 80347136, metadata: 5235720 (n_thp 0), resident: 85524480, mapped: 113901568, retained: 137756672
Background threads: 2, num_runs: 43, run_interval: 4659109976 ns
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
background_thread 4791 0 0 3 0 0 0
ctl 40609 0 0 1 0 0 0
prof 0 0 0 0 0 0 0
Merged arenas stats:
assigned threads: 2
uptime: 231890217422
dss allocation precedence: "N/A"
decaying: time npages sweeps madvises purged
dirty: N/A 0 38 477 4035
muzzy: N/A 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 67969352 5762787 5550368 56729847
large: 327680 47 46 47
total: 68297032 5762834 5550414 56729894
active: 80347136
mapped: 113901568
retained: 137756672
base: 5120000
internal: 115720
metadata_thp: 0
tcache_bytes: 23896
resident: 85524480
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 4777 0 0 2 0 0 0
extent_avail 7417 0 0 28 0 0 0
extents_dirty 10577 0 0 77 0 0 0
extents_muzzy 6017 0 0 4 0 0 0
extents_retained 7879 0 0 38 0 0 0
decay_dirty 11902 0 0 89 0 0 0
decay_muzzy 11839 0 0 49 0 0 0
base 10440 0 0 5 0 0 0
tcache_list 4779 0 0 3 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2600 684052 683727 2150469 325 5 8192 1 0.007 7026 7229 63 36 10241931 0 0 342978 0 0 0
16 1 171296 1792524 1781818 17391053 10706 9 4096 1 0.290 14596 15507 310 344 15825616 0 0 711616 0 0 0
24 2 20136 1275992 1275153 9658803 839 24 8192 3 0.004 10671 11385 84 160 15135570 0 0 502996 0 0 0
32 3 3202336 280502 180429 14518792 100073 49 2048 1 0.997 3198 2268 108 69 16218051 0 0 100726 0 0 0
40 4 560 10438 10424 59653 14 1 8192 5 0.001 171 171 1 0 293469 0 0 302 0 0 0
48 5 12864 380 112 6412636 268 1 4096 3 0.065 66 18 1 0 4862 0 0 2 0 0 0
56 6 1400 3422 3397 55530 25 1 8192 7 0.003 676 692 1 0 75348 0 0 530 0 0 0
64 7 512 141 133 1376421 8 1 1024 1 0.007 21 24 1 0 4823 0 0 2 0 0 0
80 8 320 1077 1073 4619 4 1 4096 5 0.000 180 181 1 0 22391 0 0 88 0 0 0
96 9 10272 2938 2831 4234 107 1 2048 3 0.052 119 146 1 0 77395 0 0 354 0 0 0
112 10 112 2801 2800 1502249 1 1 4096 7 0.000 113 148 1 0 90007 0 0 176 0 0 0
128 11 0 149 149 200030 0 0 512 1 1 19 27 6 0 5215 0 0 88 0 0 0
160 12 160 1257317 1257316 1495331 1 1 2048 5 0.000 9912 10777 470 257 11475617 0 0 591420 0 0 0
192 13 384 131 129 674 2 1 1024 3 0.001 13 17 1 0 4808 0 0 2 0 0 0
224 14 224 114 113 200009 1 1 2048 7 0.000 8 12 1 0 4798 0 0 2 0 0 0
256 15 0 119 119 16 0 0 256 1 1 9 14 9 0 4818 0 0 2 0 0 0
320 16 5120 170332 170316 370024 16 1 1024 5 0.015 1791 1846 167 6 1401003 0 0 194 0 0 0
384 17 384 180 179 700083 1 1 512 3 0.001 54 59 1 0 4891 0 0 2 0 0 0
448 18 0 137 137 200019 0 0 1024 7 1 17 21 16 0 4847 0 0 2 0 0 0
512 19 512 113 112 17 1 1 128 1 0.007 9 14 1 0 4801 0 0 2 0 0 0
640 20 64000000 277716 177716 249930 100000 196 512 5 0.996 3046 2107 448 470 16142115 0 0 100562 0 0 0
768 21 0 138 138 170038 0 0 256 3 1 26 31 7 0 4848 0 0 2 0 0 0
896 22 0 153 153 77 0 0 512 7 1 41 45 40 0 4943 0 0 2 0 0 0
1024 23 1024 90 89 24 1 1 64 1 0.015 16 21 2 0 4818 0 0 4 0 0 0
1280 24 1280 158 157 85 1 1 256 5 0.003 53 57 1 0 4888 0 0 2 0 0 0
1536 25 3072 133 131 18 2 1 128 3 0.015 17 21 3 0 4858 0 0 78 0 0 0
1792 26 1792 138 137 696 1 1 256 7 0.003 30 33 29 0 4897 0 0 2 0 0 0
2048 27 12288 51 45 1190 6 1 32 1 0.187 16 19 1 0 4851 0 0 78 0 0 0
2560 28 12800 144 139 1259 5 1 128 5 0.039 26 29 14 0 4973 0 0 86 0 0 0
3072 29 0 82 82 79 0 0 64 3 1 14 18 13 0 4835 0 0 2 0 0 0
3584 30 0 104 104 3 0 0 128 7 1 3 8 3 0 4794 0 0 2 0 0 0
4096 31 4096 49 48 696 1 1 16 1 0.062 28 31 26 0 4887 0 0 2 0 0 0
5120 32 0 70 70 3 0 0 64 5 1 3 7 3 0 4793 0 0 2 0 0 0
6144 33 0 97 97 77 0 0 32 3 1 45 49 45 0 4961 0 0 2 0 0 0
7168 34 0 115 115 77 0 0 64 7 1 47 52 47 0 4970 0 0 2 0 0 0
8192 35 0 17 17 8 0 0 8 1 1 8 9 7 0 4808 0 0 2 0 0 0
10240 36 0 43 43 166 0 0 32 5 1 8 12 8 0 4813 0 0 2 0 0 0
12288 37 0 19 19 3 0 0 16 3 1 3 6 3 0 4792 0 0 2 0 0 0
14336 38 0 51 51 25 0 0 32 7 1 14 18 14 0 4837 0 0 2 0 0 0
16384 39 0 58 58 51 0 0 4 1 1 48 49 48 0 4970 0 0 2 0 0 0
20480 40 102400 265 260 133 5 1 16 5 0.312 126 137 15 1 5069 0 0 2 0 0 0
24576 41 0 14 14 4 0 0 8 3 1 3 5 4 0 4793 0 0 2 0 0 0
28672 42 0 18 18 2 0 0 16 7 1 2 4 2 0 4787 0 0 2 0 0 0
32768 43 0 26 26 23 0 0 2 1 1 17 18 20 1 4852 0 0 2 0 0 0
40960 44 40960 23 22 4456 1 1 8 5 0.125 14 16 2 1 4810 0 0 2 0 0 0
49152 45 0 10 10 1 0 0 4 3 1 1 2 3 0 4786 0 0 2 0 0 0
57344 46 114688 12 10 3 2 1 8 7 0.250 1 2 2 0 4785 0 0 4 0 0 0
65536 47 0 19 19 9 0 0 1 1 1 9 11 19 0 4949 0 0 182 0 0 0
81920 48 81920 21 20 11 1 1 4 5 0.250 11 13 3 1 4806 0 0 2 0 0 0
98304 49 0 19 19 10 0 0 2 3 1 10 12 14 0 4827 0 0 2 0 0 0
114688 50 0 10 10 1 0 0 4 7 1 1 3 3 0 4787 0 0 2 0 0 0
131072 51 0 16 16 6 0 0 1 2 1 6 9 16 0 4938 0 0 182 0 0 0
163840 52 163840 13 12 3 1 1 2 5 0.500 3 5 6 2 4796 0 0 2 0 0 0
196608 53 0 17 17 8 0 0 1 3 1 8 10 17 0 4829 0 0 2 0 0 0
229376 54 0 19 19 10 0 0 2 7 1 10 12 14 0 4827 0 0 2 0 0 0
large: size ind allocated nmalloc ndalloc nrequests curlextents
262144 55 0 3 3 3 0
327680 56 327680 1 0 1 1
393216 57 0 2 2 2 0
458752 58 0 7 7 7 0
524288 59 0 12 12 12 0
655360 60 0 1 1 1 0
---
917504 62 0 2 2 2 0
1048576 63 0 9 9 9 0
1310720 64 0 2 2 2 0
1572864 65 0 1 1 1 0
---
2097152 67 0 2 2 2 0
2621440 68 0 3 3 3 0
---
4194304 71 0 2 2 2 0
---
arenas[0]:
assigned threads: 1
uptime: 231890217422
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 0 38 477 4035
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 67969352 5762787 5550368 56729847
large: 327680 47 46 47
total: 68297032 5762834 5550414 56729894
active: 80347136
mapped: 97124352
retained: 137756672
base: 5086744
internal: 115720
metadata_thp: 0
tcache_bytes: 22552
resident: 85458944
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 2393 0 0 1 0 0 0
extent_avail 5033 0 0 27 0 0 0
extents_dirty 8193 0 0 76 0 0 0
extents_muzzy 3633 0 0 3 0 0 0
extents_retained 5495 0 0 37 0 0 0
decay_dirty 9512 0 0 87 0 0 0
decay_muzzy 9449 0 0 47 0 0 0
base 5671 0 0 3 0 0 0
tcache_list 2394 0 0 1 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2600 684052 683727 2150469 325 5 8192 1 0.007 7026 7229 63 36 10239547 0 0 342977 0 0 0
16 1 171296 1792524 1781818 17391053 10706 9 4096 1 0.290 14596 15507 310 344 15823232 0 0 711615 0 0 0
24 2 20136 1275992 1275153 9658803 839 24 8192 3 0.004 10671 11385 84 160 15133186 0 0 502995 0 0 0
32 3 3202336 280502 180429 14518792 100073 49 2048 1 0.997 3198 2268 108 69 16215667 0 0 100725 0 0 0
40 4 560 10438 10424 59653 14 1 8192 5 0.001 171 171 1 0 291085 0 0 301 0 0 0
48 5 12864 380 112 6412636 268 1 4096 3 0.065 66 18 1 0 2478 0 0 1 0 0 0
56 6 1400 3422 3397 55530 25 1 8192 7 0.003 676 692 1 0 72964 0 0 529 0 0 0
64 7 512 141 133 1376421 8 1 1024 1 0.007 21 24 1 0 2439 0 0 1 0 0 0
80 8 320 1077 1073 4619 4 1 4096 5 0.000 180 181 1 0 20007 0 0 87 0 0 0
96 9 10272 2938 2831 4234 107 1 2048 3 0.052 119 146 1 0 75011 0 0 353 0 0 0
112 10 112 2801 2800 1502249 1 1 4096 7 0.000 113 148 1 0 87623 0 0 175 0 0 0
128 11 0 149 149 200030 0 0 512 1 1 19 27 6 0 2831 0 0 87 0 0 0
160 12 160 1257317 1257316 1495331 1 1 2048 5 0.000 9912 10777 470 257 11473233 0 0 591419 0 0 0
192 13 384 131 129 674 2 1 1024 3 0.001 13 17 1 0 2424 0 0 1 0 0 0
224 14 224 114 113 200009 1 1 2048 7 0.000 8 12 1 0 2414 0 0 1 0 0 0
256 15 0 119 119 16 0 0 256 1 1 9 14 9 0 2434 0 0 1 0 0 0
320 16 5120 170332 170316 370024 16 1 1024 5 0.015 1791 1846 167 6 1398619 0 0 193 0 0 0
384 17 384 180 179 700083 1 1 512 3 0.001 54 59 1 0 2507 0 0 1 0 0 0
448 18 0 137 137 200019 0 0 1024 7 1 17 21 16 0 2463 0 0 1 0 0 0
512 19 512 113 112 17 1 1 128 1 0.007 9 14 1 0 2417 0 0 1 0 0 0
640 20 64000000 277716 177716 249930 100000 196 512 5 0.996 3046 2107 448 470 16139731 0 0 100561 0 0 0
768 21 0 138 138 170038 0 0 256 3 1 26 31 7 0 2464 0 0 1 0 0 0
896 22 0 153 153 77 0 0 512 7 1 41 45 40 0 2559 0 0 1 0 0 0
1024 23 1024 90 89 24 1 1 64 1 0.015 16 21 2 0 2434 0 0 3 0 0 0
1280 24 1280 158 157 85 1 1 256 5 0.003 53 57 1 0 2504 0 0 1 0 0 0
1536 25 3072 133 131 18 2 1 128 3 0.015 17 21 3 0 2474 0 0 77 0 0 0
1792 26 1792 138 137 696 1 1 256 7 0.003 30 33 29 0 2513 0 0 1 0 0 0
2048 27 12288 51 45 1190 6 1 32 1 0.187 16 19 1 0 2467 0 0 77 0 0 0
2560 28 12800 144 139 1259 5 1 128 5 0.039 26 29 14 0 2589 0 0 85 0 0 0
3072 29 0 82 82 79 0 0 64 3 1 14 18 13 0 2451 0 0 1 0 0 0
3584 30 0 104 104 3 0 0 128 7 1 3 8 3 0 2410 0 0 1 0 0 0
4096 31 4096 49 48 696 1 1 16 1 0.062 28 31 26 0 2503 0 0 1 0 0 0
5120 32 0 70 70 3 0 0 64 5 1 3 7 3 0 2409 0 0 1 0 0 0
6144 33 0 97 97 77 0 0 32 3 1 45 49 45 0 2577 0 0 1 0 0 0
7168 34 0 115 115 77 0 0 64 7 1 47 52 47 0 2586 0 0 1 0 0 0
8192 35 0 17 17 8 0 0 8 1 1 8 9 7 0 2424 0 0 1 0 0 0
10240 36 0 43 43 166 0 0 32 5 1 8 12 8 0 2429 0 0 1 0 0 0
12288 37 0 19 19 3 0 0 16 3 1 3 6 3 0 2408 0 0 1 0 0 0
14336 38 0 51 51 25 0 0 32 7 1 14 18 14 0 2453 0 0 1 0 0 0
16384 39 0 58 58 51 0 0 4 1 1 48 49 48 0 2586 0 0 1 0 0 0
20480 40 102400 265 260 133 5 1 16 5 0.312 126 137 15 1 2685 0 0 1 0 0 0
24576 41 0 14 14 4 0 0 8 3 1 3 5 4 0 2409 0 0 1 0 0 0
28672 42 0 18 18 2 0 0 16 7 1 2 4 2 0 2403 0 0 1 0 0 0
32768 43 0 26 26 23 0 0 2 1 1 17 18 20 1 2468 0 0 1 0 0 0
40960 44 40960 23 22 4456 1 1 8 5 0.125 14 16 2 1 2426 0 0 1 0 0 0
49152 45 0 10 10 1 0 0 4 3 1 1 2 3 0 2402 0 0 1 0 0 0
57344 46 114688 12 10 3 2 1 8 7 0.250 1 2 2 0 2401 0 0 3 0 0 0
65536 47 0 19 19 9 0 0 1 1 1 9 11 19 0 2565 0 0 181 0 0 0
81920 48 81920 21 20 11 1 1 4 5 0.250 11 13 3 1 2422 0 0 1 0 0 0
98304 49 0 19 19 10 0 0 2 3 1 10 12 14 0 2443 0 0 1 0 0 0
114688 50 0 10 10 1 0 0 4 7 1 1 3 3 0 2403 0 0 1 0 0 0
131072 51 0 16 16 6 0 0 1 2 1 6 9 16 0 2554 0 0 181 0 0 0
163840 52 163840 13 12 3 1 1 2 5 0.500 3 5 6 2 2412 0 0 1 0 0 0
196608 53 0 17 17 8 0 0 1 3 1 8 10 17 0 2445 0 0 1 0 0 0
229376 54 0 19 19 10 0 0 2 7 1 10 12 14 0 2443 0 0 1 0 0 0
large: size ind allocated nmalloc ndalloc nrequests curlextents
262144 55 0 3 3 3 0
327680 56 327680 1 0 1 1
393216 57 0 2 2 2 0
458752 58 0 7 7 7 0
524288 59 0 12 12 12 0
655360 60 0 1 1 1 0
---
917504 62 0 2 2 2 0
1048576 63 0 9 9 9 0
1310720 64 0 2 2 2 0
1572864 65 0 1 1 1 0
---
2097152 67 0 2 2 2 0
2621440 68 0 3 3 3 0
---
4194304 71 0 2 2 2 0
---
arenas[1]:
assigned threads: 1
uptime: 231070217012
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 0 0 0 0
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 0 0 0 0
large: 0 0 0 0
total: 0 0 0 0
active: 0
mapped: 16777216
retained: 0
base: 33256
internal: 0
metadata_thp: 0
tcache_bytes: 1344
resident: 65536
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 2384 0 0 1 0 0 0
extent_avail 2384 0 0 1 0 0 0
extents_dirty 2384 0 0 1 0 0 0
extents_muzzy 2384 0 0 1 0 0 0
extents_retained 2384 0 0 1 0 0 0
decay_dirty 2390 0 0 2 0 0 0
decay_muzzy 2390 0 0 2 0 0 0
base 4769 0 0 2 0 0 0
tcache_list 2385 0 0 2 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 0 0 0 0 0 0 8192 1 1 0 0 0 0 2384 0 0 1 0 0 0
16 1 0 0 0 0 0 0 4096 1 1 0 0 0 0 2384 0 0 1 0 0 0
24 2 0 0 0 0 0 0 8192 3 1 0 0 0 0 2384 0 0 1 0 0 0
32 3 0 0 0 0 0 0 2048 1 1 0 0 0 0 2384 0 0 1 0 0 0
40 4 0 0 0 0 0 0 8192 5 1 0 0 0 0 2384 0 0 1 0 0 0
48 5 0 0 0 0 0 0 4096 3 1 0 0 0 0 2384 0 0 1 0 0 0
56 6 0 0 0 0 0 0 8192 7 1 0 0 0 0 2384 0 0 1 0 0 0
64 7 0 0 0 0 0 0 1024 1 1 0 0 0 0 2384 0 0 1 0 0 0
80 8 0 0 0 0 0 0 4096 5 1 0 0 0 0 2384 0 0 1 0 0 0
96 9 0 0 0 0 0 0 2048 3 1 0 0 0 0 2384 0 0 1 0 0 0
112 10 0 0 0 0 0 0 4096 7 1 0 0 0 0 2384 0 0 1 0 0 0
128 11 0 0 0 0 0 0 512 1 1 0 0 0 0 2384 0 0 1 0 0 0
160 12 0 0 0 0 0 0 2048 5 1 0 0 0 0 2384 0 0 1 0 0 0
192 13 0 0 0 0 0 0 1024 3 1 0 0 0 0 2384 0 0 1 0 0 0
224 14 0 0 0 0 0 0 2048 7 1 0 0 0 0 2384 0 0 1 0 0 0
256 15 0 0 0 0 0 0 256 1 1 0 0 0 0 2384 0 0 1 0 0 0
320 16 0 0 0 0 0 0 1024 5 1 0 0 0 0 2384 0 0 1 0 0 0
384 17 0 0 0 0 0 0 512 3 1 0 0 0 0 2384 0 0 1 0 0 0
448 18 0 0 0 0 0 0 1024 7 1 0 0 0 0 2384 0 0 1 0 0 0
512 19 0 0 0 0 0 0 128 1 1 0 0 0 0 2384 0 0 1 0 0 0
640 20 0 0 0 0 0 0 512 5 1 0 0 0 0 2384 0 0 1 0 0 0
768 21 0 0 0 0 0 0 256 3 1 0 0 0 0 2384 0 0 1 0 0 0
896 22 0 0 0 0 0 0 512 7 1 0 0 0 0 2384 0 0 1 0 0 0
1024 23 0 0 0 0 0 0 64 1 1 0 0 0 0 2384 0 0 1 0 0 0
1280 24 0 0 0 0 0 0 256 5 1 0 0 0 0 2384 0 0 1 0 0 0
1536 25 0 0 0 0 0 0 128 3 1 0 0 0 0 2384 0 0 1 0 0 0
1792 26 0 0 0 0 0 0 256 7 1 0 0 0 0 2384 0 0 1 0 0 0
2048 27 0 0 0 0 0 0 32 1 1 0 0 0 0 2384 0 0 1 0 0 0
2560 28 0 0 0 0 0 0 128 5 1 0 0 0 0 2384 0 0 1 0 0 0
3072 29 0 0 0 0 0 0 64 3 1 0 0 0 0 2384 0 0 1 0 0 0
3584 30 0 0 0 0 0 0 128 7 1 0 0 0 0 2384 0 0 1 0 0 0
4096 31 0 0 0 0 0 0 16 1 1 0 0 0 0 2384 0 0 1 0 0 0
5120 32 0 0 0 0 0 0 64 5 1 0 0 0 0 2384 0 0 1 0 0 0
6144 33 0 0 0 0 0 0 32 3 1 0 0 0 0 2384 0 0 1 0 0 0
7168 34 0 0 0 0 0 0 64 7 1 0 0 0 0 2384 0 0 1 0 0 0
8192 35 0 0 0 0 0 0 8 1 1 0 0 0 0 2384 0 0 1 0 0 0
10240 36 0 0 0 0 0 0 32 5 1 0 0 0 0 2384 0 0 1 0 0 0
12288 37 0 0 0 0 0 0 16 3 1 0 0 0 0 2384 0 0 1 0 0 0
14336 38 0 0 0 0 0 0 32 7 1 0 0 0 0 2384 0 0 1 0 0 0
16384 39 0 0 0 0 0 0 4 1 1 0 0 0 0 2384 0 0 1 0 0 0
20480 40 0 0 0 0 0 0 16 5 1 0 0 0 0 2384 0 0 1 0 0 0
24576 41 0 0 0 0 0 0 8 3 1 0 0 0 0 2384 0 0 1 0 0 0
28672 42 0 0 0 0 0 0 16 7 1 0 0 0 0 2384 0 0 1 0 0 0
32768 43 0 0 0 0 0 0 2 1 1 0 0 0 0 2384 0 0 1 0 0 0
40960 44 0 0 0 0 0 0 8 5 1 0 0 0 0 2384 0 0 1 0 0 0
49152 45 0 0 0 0 0 0 4 3 1 0 0 0 0 2384 0 0 1 0 0 0
57344 46 0 0 0 0 0 0 8 7 1 0 0 0 0 2384 0 0 1 0 0 0
65536 47 0 0 0 0 0 0 1 1 1 0 0 0 0 2384 0 0 1 0 0 0
81920 48 0 0 0 0 0 0 4 5 1 0 0 0 0 2384 0 0 1 0 0 0
98304 49 0 0 0 0 0 0 2 3 1 0 0 0 0 2384 0 0 1 0 0 0
114688 50 0 0 0 0 0 0 4 7 1 0 0 0 0 2384 0 0 1 0 0 0
131072 51 0 0 0 0 0 0 1 2 1 0 0 0 0 2384 0 0 1 0 0 0
163840 52 0 0 0 0 0 0 2 5 1 0 0 0 0 2384 0 0 1 0 0 0
196608 53 0 0 0 0 0 0 1 3 1 0 0 0 0 2384 0 0 1 0 0 0
229376 54 0 0 0 0 0 0 2 7 1 0 0 0 0 2384 0 0 1 0 0 0
---
large: size ind allocated nmalloc ndalloc nrequests curlextents
---
--- End jemalloc statistics ---
[[0;31;49merr[0m]: Active defrag big list in tests/unit/memefficiency.tcl
defrag didn't stop.
# Memory
used_memory:229479624
used_memory_human:218.85M
used_memory_rss:256507904
used_memory_rss_human:244.62M
used_memory_peak:449650264
used_memory_peak_human:428.82M
used_memory_peak_perc:51.04%
used_memory_overhead:22066584
used_memory_startup:836576
used_memory_dataset:207413040
used_memory_dataset_perc:90.71%
allocator_allocated:229610360
allocator_active:246939648
allocator_resident:253034496
total_system_memory:10278404096
total_system_memory_human:9.57G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.08
allocator_frag_bytes:17329288
allocator_rss_ratio:1.02
allocator_rss_bytes:6094848
rss_overhead_ratio:1.01
rss_overhead_bytes:3473408
mem_fragmentation_ratio:1.12
mem_fragmentation_bytes:27069304
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:41000
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:65
lazyfree_pending_objects:0
lazyfreed_objects:0
# Stats
total_connections_received:1
total_commands_processed:960611
instantaneous_ops_per_sec:9
total_net_input_bytes:37285273
total_net_output_bytes:7182526
instantaneous_input_kbps:0.13
instantaneous_output_kbps:36.95
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:3
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
total_forks:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:442767
active_defrag_misses:31674819
active_defrag_key_hits:441465
active_defrag_key_misses:7588163
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
dump_payload_sanitizations:0
total_reads_processed:650402
total_writes_processed:650402
io_threaded_reads_processed:0
io_threaded_writes_processed:0
___ Begin jemalloc statistics ___
Version: "5.1.0-0-g0"
Build-time option settings
config.cache_oblivious: true
config.debug: false
config.fill: true
config.lazy_lock: false
config.malloc_conf: ""
config.prof: false
config.prof_libgcc: false
config.prof_libunwind: false
config.stats: true
config.utrace: false
config.xmalloc: false
Run-time option settings
opt.abort: false
opt.abort_conf: false
opt.retain: true
opt.dss: "secondary"
opt.narenas: 128
opt.percpu_arena: "disabled"
opt.metadata_thp: "disabled"
opt.background_thread: false (background_thread: true)
opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
opt.muzzy_decay_ms: 10000 (arenas.muzzy_decay_ms: 10000)
opt.junk: "false"
opt.zero: false
opt.tcache: true
opt.lg_tcache_max: 15
opt.thp: "default"
opt.stats_print: false
opt.stats_print_opts: ""
Arenas: 128
Quantum size: 8
Page size: 65536
Maximum thread-cached size class: 229376
Number of bin size classes: 55
Number of thread-cache bin size classes: 55
Number of large size classes: 180
Allocated: 229614456, active: 246939648, metadata: 6122144 (n_thp 0), resident: 253034496, mapped: 263716864, retained: 281542656
Background threads: 1, num_runs: 9, run_interval: 7234889777 ns
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
background_thread 1599 0 0 1 0 0 0
ctl 3194 0 0 1 0 0 0
prof 0 0 0 0 0 0 0
arenas[0]:
assigned threads: 1
uptime: 94880169427
dss allocation precedence: "secondary"
decaying: time npages sweeps madvises purged
dirty: 10000 0 4 88 3204
muzzy: 10000 0 0 0 0
allocated nmalloc ndalloc nrequests
small: 220898168 3030029 1737816 16794717
large: 8716288 7 5 7
total: 229614456 3030036 1737821 16794724
active: 246939648
mapped: 263716864
retained: 281542656
base: 6064792
internal: 57352
metadata_thp: 0
tcache_bytes: 77472
resident: 253034496
n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
large 799 0 0 1 0 0 0
extent_avail 3024 0 0 9 0 0 0
extents_dirty 3972 0 0 11 0 0 0
extents_muzzy 2350 0 0 3 0 0 0
extents_retained 3980 0 0 11 0 0 0
decay_dirty 3160 0 0 19 0 0 0
decay_muzzy 3154 0 0 19 0 0 0
base 3332 0 0 3 0 0 0
tcache_list 800 0 0 1 0 0 0
bins: size ind allocated nmalloc ndalloc nrequests curregs curslabs regs pgs util nfills nflushes nslabs nreslabs n_lock_ops n_waiting n_spin_acq n_owner_switch total_wait_ns max_wait_ns max_n_thds
8 0 2562608 642688 322362 644728 320326 79 8192 1 0.494 6441 3641 79 1 8041957 0 0 1985 0 0 0
16 1 5291312 874697 543990 5032746 330707 82 4096 1 0.984 6541 3642 159 82 8482238 0 0 441991 0 0 0
24 2 7699488 643484 322672 3232128 320812 79 8192 3 0.495 6452 3642 79 1 8042452 0 0 2469 0 0 0
32 3 1312 910 869 5100479 41 1 2048 1 0.020 36 42 1 0 878 0 0 1 0 0 0
40 4 560 100 86 81 14 1 8192 5 0.001 1 36 1 0 837 0 0 1 0 0 0
48 5 4320 1005 915 1920696 90 1 4096 3 0.021 40 38 1 0 878 0 0 1 0 0 0
56 6 1400 125 100 33 25 1 8192 7 0.003 2 36 1 0 838 0 0 1 0 0 0
64 7 512 156 148 23 8 1 1024 1 0.007 4 39 1 0 843 0 0 1 0 0 0
80 8 320 100 96 12 4 1 4096 5 0.000 1 35 1 0 836 0 0 1 0 0 0
96 9 10080 225 120 109 105 1 2048 3 0.051 6 38 1 0 844 0 0 1 0 0 0
112 10 112 100 99 5 1 1 4096 7 0.000 1 35 1 0 836 0 0 1 0 0 0
128 11 0 125 125 6 0 0 512 1 1 2 35 2 0 840 0 0 1 0 0 0
160 12 160 100 99 3 1 1 2048 5 0.000 1 34 1 0 835 0 0 1 0 0 0
192 13 576 206 203 355 3 1 1024 3 0.002 3 38 1 0 841 0 0 1 0 0 0
224 14 224 100 99 2 1 1 2048 7 0.000 1 33 1 0 834 0 0 1 0 0 0
256 15 0 150 150 5 0 0 256 1 1 2 35 2 0 840 0 0 1 0 0 0
320 16 5120 100 84 17 16 1 1024 5 0.015 1 33 1 0 834 0 0 1 0 0 0
384 17 384 100 99 1 1 1 512 3 0.001 1 33 1 0 834 0 0 1 0 0 0
448 18 0 0 0 0 0 0 1024 7 1 0 33 0 0 832 0 0 1 0 0 0
---
512 19 512 150 149 6 1 1 128 1 0.007 2 37 1 0 839 0 0 1 0 0 0
640 20 204806400 864687 544677 861420 320010 626 512 5 0.998 6433 3639 1252 627 8485151 0 0 443375 0 0 0
768 21 0 0 0 0 0 0 256 3 1 0 33 0 0 832 0 0 1 0 0 0
896 22 0 0 0 0 0 0 512 7 1 0 33 0 0 832 0 0 1 0 0 0
---
1024 23 0 96 96 4 0 0 64 1 1 2 37 2 0 842 0 0 1 0 0 0
1280 24 1280 100 99 1 1 1 256 5 0.003 1 33 1 0 834 0 0 1 0 0 0
1536 25 39936 100 74 2 26 1 128 3 0.203 1 34 1 0 835 0 0 1 0 0 0
1792 26 3584 106 104 427 2 1 256 7 0.007 2 38 2 0 842 0 0 1 0 0 0
2048 27 12288 56 50 515 6 1 32 1 0.187 5 40 1 0 845 0 0 1 0 0 0
2560 28 5120 106 104 438 2 1 128 5 0.015 2 38 2 0 842 0 0 1 0 0 0
3072 29 0 0 0 0 0 0 64 3 1 0 33 0 0 832 0 0 1 0 0 0
3584 30 0 0 0 0 0 0 128 7 1 0 33 0 0 832 0 0 1 0 0 0
---
4096 31 4096 20 19 456 1 1 16 1 0.062 3 38 3 0 845 0 0 1 0 0 0
5120 32 0 0 0 0 0 0 64 5 1 0 33 0 0 832 0 0 1 0 0 0
6144 33 0 0 0 0 0 0 32 3 1 0 33 0 0 832 0 0 1 0 0 0
7168 34 0 0 0 0 0 0 64 7 1 0 33 0 0 832 0 0 1 0 0 0
---
8192 35 0 10 10 1 0 0 8 1 1 1 36 2 0 840 0 0 1 0 0 0
10240 36 0 0 0 0 0 0 32 5 1 0 33 0 0 832 0 0 1 0 0 0
12288 37 0 0 0 0 0 0 16 3 1 0 33 0 0 832 0 0 1 0 0 0
14336 38 0 0 0 0 0 0 32 7 1 0 33 0 0 832 0 0 1 0 0 0
---
16384 39 0 20 20 2 0 0 4 1 1 2 36 6 0 849 0 0 1 0 0 0
20480 40 102400 36 31 6 5 1 16 5 0.312 3 36 2 1 841 0 0 1 0 0 0
24576 41 0 0 0 0 0 0 8 3 1 0 33 0 0 832 0 0 1 0 0 0
28672 42 0 0 0 0 0 0 16 7 1 0 33 0 0 832 0 0 1 0 0 0
---
32768 43 0 10 10 1 0 0 2 1 1 1 36 5 0 846 0 0 1 0 0 0
40960 44 40960 20 19 4 1 1 8 5 0.125 2 36 3 1 842 0 0 1 0 0 0
49152 45 0 0 0 0 0 0 4 3 1 0 33 0 0 832 0 0 1 0 0 0
---
57344 46 57344 1 0 1 1 1 8 7 0.125 0 33 1 0 834 0 0 1 0 0 0
65536 47 0 10 10 1 0 0 1 1 1 1 36 10 0 856 0 0 1 0 0 0
81920 48 81920 10 9 1 1 1 4 5 0.250 1 33 3 0 838 0 0 1 0 0 0
98304 49 0 0 0 0 0 0 2 3 1 0 33 0 0 832 0 0 1 0 0 0
114688 50 0 0 0 0 0 0 4 7 1 0 33 0 0 832 0 0 1 0 0 0
---
131072 51 0 10 10 1 0 0 1 2 1 1 36 10 0 856 0 0 1 0 0 0
163840 52 163840 10 9 1 1 1 2 5 0.500 1 33 5 0 842 0 0 1 0 0 0
196608 53 0 0 0 0 0 0 1 3 1 0 33 0 0 832 0 0 1 0 0 0
229376 54 0 0 0 0 0 0 2 7 1 0 33 0 0 832 0 0 1 0 0 0
---
large: size ind allocated nmalloc ndalloc nrequests curlextents
262144 55 0 1 1 1 0
327680 56 327680 1 0 1 1
---
524288 59 0 1 1 1 0
---
1048576 63 0 1 1 1 0
---
2097152 67 0 1 1 1 0
---
4194304 71 0 1 1 1 0
---
8388608 75 8388608 1 0 1 1
---
--- End jemalloc statistics ---
Logged warnings (pid 26752):
(none)
[[0;31;49merr[0m]: Active defrag edge case in tests/unit/memefficiency.tcl
defrag didn't stop.
[1/1 [0;33;49mdone[0m]: defrag (327 seconds)
The End
Execution time of different units:
4 seconds - unit/memefficiency
327 seconds - defrag
[1;31;49m!!! WARNING[0m The following tests failed:
*** [[0;31;49merr[0m]: Active defrag in tests/unit/memefficiency.tcl
defrag didn't stop.
*** [[0;31;49merr[0m]: Active defrag big keys in tests/unit/memefficiency.tcl
defrag didn't stop.
*** [[0;31;49merr[0m]: Active defrag big list in tests/unit/memefficiency.tcl
defrag didn't stop.
*** [[0;31;49merr[0m]: Active defrag edge case in tests/unit/memefficiency.tcl
defrag didn't stop.
Cleanup: may take some time... OK
Comment From: oranagra
@moria7757 can you please try this:
gcc -DREDIS_TEST -DREDIS_TEST_MAIN src/crc64.c src/crcspeed.c -std=c99
./a.out
Comment From: moria7757
@moria7757 can you please try this:
gcc -DREDIS_TEST -DREDIS_TEST_MAIN src/crc64.c src/crcspeed.c -std=c99 ./a.out
do you mean running this: gcc -DREDIS_TEST -DREDIS_TEST_MAIN src/crc64.c src/crcspeed.c -std=c99 > ./a.out
if you mean the above, the content of a.out is not readable.
Comment From: oranagra
i meant to execute a.out, but never mind. i was able to reproduce it with qemu emulating a big-endian arm. found the bug in the crc code, PR soon.
Comment From: oranagra
fix for the CRC error in https://github.com/redis/redis/pull/8270. next, i'll be looking into that active defrag issue.
Comment From: moria7757
i meant to execute a.out, but never mind. i was able to reproduce it with qemu emulating a big-endian arm. found the bug in the crc code, PR soon.
./a.out output:
[calcula]: e9c6d914c4b8d9ca == e9c6d914c4b8d9ca
[64speed]: e9c6d914c4b8d9ca == 0000000000000000
[calcula]: c7794709e69683b3 == c7794709e69683b3
[64speed]: c7794709e69683b3 == 0000000000000000
Comment From: moria7757
fix for the CRC error in #8270. next, i'll be looking into that active defrag issue.
i have changed the crcspeed.c file and the CRC error disappeared, but the defrag issue still exists.
Comment From: oranagra
@moria7757 maybe it's just a timing issue. please try to add more time to the test:
--- a/tests/unit/memefficiency.tcl
+++ b/tests/unit/memefficiency.tcl
of-rewrite-percent
}
# Wait for the active defrag to stop working.
- wait_for_condition 150 100 {
+ wait_for_condition 15000 100 {
[s active_defrag_running] eq 0
} else {
after 120 ;# serverCron only updates the info once in 100ms
and run
./runtest --single unit/memefficiency --verbose --no-latency --only "Active defrag"
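(For reference: the two numbers given to wait_for_condition are the retry count and the delay between retries in milliseconds, so 15000 retries at 100ms allow up to roughly 25 minutes. Below is a minimal sketch of such a polling helper, assuming it behaves roughly like this; it is not the exact code from tests/support/util.tcl.)
```tcl
# Minimal polling sketch (hypothetical helper, for illustration only):
# retry the condition up to maxtries times, sleeping delay milliseconds
# between attempts, and run elsescript if it never becomes true.
proc wait_for_condition {maxtries delay cond _else_ elsescript} {
    for {set i 0} {$i < $maxtries} {incr i} {
        if {[uplevel 1 [list expr $cond]]} {
            return
        }
        after $delay ;# blocking sleep, in milliseconds
    }
    uplevel 1 $elsescript
}
```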
Comment From: moria7757
@moria7757 maybe it's just a timing issue. please try to add more time to the test:
```diff
--- a/tests/unit/memefficiency.tcl
+++ b/tests/unit/memefficiency.tcl
of-rewrite-percent
}
# Wait for the active defrag to stop working.
- wait_for_condition 150 100 {
+ wait_for_condition 15000 100 {
[s active_defrag_running] eq 0
} else {
after 120 ;# serverCron only updates the info once in 100ms
```
and run
./runtest --single unit/memefficiency --verbose --no-latency --only "Active defrag"
./runtest --single unit/memefficiency --verbose --no-latency --only "Active defrag" output:
Cleanup: may take some time... OK
Starting test server at port 21079
Testing unit/memefficiency
=== (memefficiency) Starting server 127.0.0.1:21111 ok
[skip]: Memory efficiency with values in range 32
[skip]: Memory efficiency with values in range 64
[skip]: Memory efficiency with values in range 128
[skip]: Memory efficiency with values in range 1024
[skip]: Memory efficiency with values in range 16384
Testing solo test
=== (defrag) Starting server 127.0.0.1:21112 ok
frag 1.58
[TIMEOUT]: clients state report follows.
sock6 => (IN PROGRESS) Active defrag
Killing still running Redis server 47684
The End
Execution time of different units:
0 seconds - unit/memefficiency
!!! WARNING The following tests failed:
*** [TIMEOUT]: clients state report follows.
Cleanup: may take some time... OK
Comment From: oranagra
i'm still unable to reproduce this. @moria7757 can you please try this:
diff --git a/tests/unit/memefficiency.tcl b/tests/unit/memefficiency.tcl
index 357089c8f..6572e0926 100644
--- a/tests/unit/memefficiency.tcl
+++ b/tests/unit/memefficiency.tcl
@@ -48,6 +48,7 @@ start_server {tags {"defrag"} overrides {appendonly yes auto-aof-rewrite-percent
r config set active-defrag-ignore-bytes 2mb
r config set maxmemory 100mb
r config set maxmemory-policy allkeys-lru
+ r config set loglevel debug
populate 700000 asdf1 150
populate 170000 asdf2 300
@@ -58,6 +59,9 @@ start_server {tags {"defrag"} overrides {appendonly yes auto-aof-rewrite-percent
}
assert {$frag >= 1.4}
+ puts "before"
+ puts [r memory malloc-stats]
+ puts "\n\n\n"
r config set latency-monitor-threshold 5
r latency reset
r config set maxmemory 110mb ;# prevent further eviction (not to fail the digest test)
@@ -77,8 +81,17 @@ start_server {tags {"defrag"} overrides {appendonly yes auto-aof-rewrite-percent
[s active_defrag_running] eq 0
} else {
after 120 ;# serverCron only updates the info once in 100ms
+ puts "didn't stop"
puts [r info memory]
+ puts [r info stats]
+ puts [r memory malloc-stats]
+ puts "waiting another 10 seconds\n\n\n"
+ after 10000
+ puts [r info memory]
+ puts [r info stats]
puts [r memory malloc-stats]
+ set stdout [srv 0 stdout]
+ puts [exec tail -n 100 < $stdout]
fail "defrag didn't stop."
}
./runtest --single unit/memefficiency --only "Active defrag" --verbose
Comment From: moria7757
i'm still unable to reproduce this. @moria7757 can you please try this:
```diff
diff --git a/tests/unit/memefficiency.tcl b/tests/unit/memefficiency.tcl
index 357089c8f..6572e0926 100644
--- a/tests/unit/memefficiency.tcl
+++ b/tests/unit/memefficiency.tcl
@@ -48,6 +48,7 @@ start_server {tags {"defrag"} overrides {appendonly yes auto-aof-rewrite-percent
r config set active-defrag-ignore-bytes 2mb
r config set maxmemory 100mb
r config set maxmemory-policy allkeys-lru
+ r config set loglevel debug
populate 700000 asdf1 150
populate 170000 asdf2 300
@@ -58,6 +59,9 @@ start_server {tags {"defrag"} overrides {appendonly yes auto-aof-rewrite-percent
}
assert {$frag >= 1.4}
+ puts "before"
+ puts [r memory malloc-stats]
+ puts "\n\n\n"
r config set latency-monitor-threshold 5
r latency reset
r config set maxmemory 110mb ;# prevent further eviction (not to fail the digest test)
@@ -77,8 +81,17 @@ start_server {tags {"defrag"} overrides {appendonly yes auto-aof-rewrite-percent
[s active_defrag_running] eq 0
} else {
after 120 ;# serverCron only updates the info once in 100ms
+ puts "didn't stop"
puts [r info memory]
+ puts [r info stats]
+ puts [r memory malloc-stats]
+ puts "waiting another 10 seconds\n\n\n"
+ after 10000
+ puts [r info memory]
+ puts [r info stats]
puts [r memory malloc-stats]
+ set stdout [srv 0 stdout]
+ puts [exec tail -n 100 < $stdout]
fail "defrag didn't stop."
}
```
./runtest --single unit/memefficiency --only "Active defrag" --verbose
i can't find this section: @@ -77,8 +81,17 @@
i am editing this file: tests/unit/memefficiency.tcl, am i right?
this is the content of my file now, based on the previous and current changes:
proc test_memory_efficiency {range} {
r flushall
set rd [redis_deferring_client]
set base_mem [s used_memory]
set written 0
for {set j 0} {$j < 10000} {incr j} {
set key key:$j
set val [string repeat A [expr {int(rand()*$range)}]]
$rd set $key $val
incr written [string length $key]
incr written [string length $val]
incr written 2 ;# A separator is the minimum to store key-value data.
}
for {set j 0} {$j < 10000} {incr j} {
$rd read ; # Discard replies
}
set current_mem [s used_memory]
set used [expr {$current_mem-$base_mem}]
set efficiency [expr {double($written)/$used}]
return $efficiency
}
start_server {tags {"memefficiency"}} { foreach {size_range expected_min_efficiency} { 32 0.15 64 0.25 128 0.35 1024 0.75 16384 0.82 } { test "Memory efficiency with values in range $size_range" { set efficiency [test_memory_efficiency $size_range] assert {$efficiency >= $expected_min_efficiency} } } }
run_solo {defrag} {
start_server {tags {"defrag"} overrides {appendonly yes auto-aof-rewrite-percentage 0 save ""}} {
if {[string match {jemalloc} [s mem_allocator]]} {
test "Active defrag" {
r config set hz 100
r config set activedefrag no
r config set active-defrag-threshold-lower 5
r config set active-defrag-cycle-min 65
r config set active-defrag-cycle-max 75
r config set active-defrag-ignore-bytes 2mb
r config set maxmemory 100mb
r config set maxmemory-policy allkeys-lru
r config set loglevel debug
populate 700000 asdf1 150
populate 170000 asdf2 300
after 120 ;# serverCron only updates the info once in 100ms
set frag [s allocator_frag_ratio]
if {$::verbose} {
puts "frag $frag"
}
assert {$frag >= 1.4}
puts "before"
puts [r memory malloc-stats]
puts "\n\n\n"
r config set latency-monitor-threshold 5
r latency reset
r config set maxmemory 110mb ;# prevent further eviction (not to fail the digest test)
set digest [r debug digest]
catch {r config set activedefrag yes} e
if {[r config get activedefrag] eq "activedefrag yes"} {
# Wait for the active defrag to start working (decision once a
# second).
wait_for_condition 50 100 {
[s active_defrag_running] ne 0
} else {
fail "defrag not started."
}
# Wait for the active defrag to stop working.
# wait_for_condition 150 100 {}
wait_for_condition 15000 100 {
[s active_defrag_running] eq 0
} else {
after 120 ;# serverCron only updates the info once in 100ms
puts [r info memory]
puts [r memory malloc-stats]
fail "defrag didn't stop."
}
# Test the the fragmentation is lower.
after 120 ;# serverCron only updates the info once in 100ms
set frag [s allocator_frag_ratio]
set max_latency 0
foreach event [r latency latest] {
lassign $event eventname time latency max
if {$eventname == "active-defrag-cycle"} {
set max_latency $max
}
}
if {$::verbose} {
puts "frag $frag"
set misses [s active_defrag_misses]
set hits [s active_defrag_hits]
puts "hits: $hits"
puts "misses: $misses"
puts "max latency $max_latency"
puts [r latency latest]
puts [r latency history active-defrag-cycle]
}
assert {$frag < 1.1}
# due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,
# we expect max latency to be not much higher than 7.5ms but due to rare slowness threshold is set higher
if {!$::no_latency} {
assert {$max_latency <= 30}
}
}
# verify the data isn't corrupted or changed
set newdigest [r debug digest]
assert {$digest eq $newdigest}
r save ;# saving an rdb iterates over all the data / pointers
# if defrag is supported, test AOF loading too
if {[r config get activedefrag] eq "activedefrag yes"} {
# reset stats and load the AOF file
r config resetstat
r config set key-load-delay -50 ;# sleep on average 1/50 usec
r debug loadaof
r config set activedefrag no
# measure hits and misses right after aof loading
set misses [s active_defrag_misses]
set hits [s active_defrag_hits]
after 120 ;# serverCron only updates the info once in 100ms
set frag [s allocator_frag_ratio]
set max_latency 0
foreach event [r latency latest] {
lassign $event eventname time latency max
if {$eventname == "loading-cron"} {
set max_latency $max
}
}
if {$::verbose} {
puts "AOF loading:"
puts "frag $frag"
puts "hits: $hits"
puts "misses: $misses"
puts "max latency $max_latency"
puts [r latency latest]
puts [r latency history loading-cron]
}
# make sure we had defrag hits during AOF loading
assert {$hits > 100000}
# make sure the defragger did enough work to keep the fragmentation low during loading.
# we cannot check that it went all the way down, since we don't wait for full defrag cycle to complete.
assert {$frag < 1.4}
# since the AOF contains simple (fast) SET commands (and the cron during loading runs every 1000 commands),
# it'll still not block the loading for long periods of time.
if {!$::no_latency} {
assert {$max_latency <= 30}
}
}
}
r config set appendonly no
r config set key-load-delay 0
test "Active defrag big keys" {
r flushdb
r config resetstat
r config set hz 100
r config set activedefrag no
r config set active-defrag-max-scan-fields 1000
r config set active-defrag-threshold-lower 5
r config set active-defrag-cycle-min 65
r config set active-defrag-cycle-max 75
r config set active-defrag-ignore-bytes 2mb
r config set maxmemory 0
r config set list-max-ziplist-size 5 ;# list of 10k items will have 2000 quicklist nodes
r config set stream-node-max-entries 5
r hmset hash h1 v1 h2 v2 h3 v3
r lpush list a b c d
r zadd zset 0 a 1 b 2 c 3 d
r sadd set a b c d
r xadd stream * item 1 value a
r xadd stream * item 2 value b
r xgroup create stream mygroup 0
r xreadgroup GROUP mygroup Alice COUNT 1 STREAMS stream >
# create big keys with 10k items
set rd [redis_deferring_client]
for {set j 0} {$j < 10000} {incr j} {
$rd hset bighash $j [concat "asdfasdfasdf" $j]
$rd lpush biglist [concat "asdfasdfasdf" $j]
$rd zadd bigzset $j [concat "asdfasdfasdf" $j]
$rd sadd bigset [concat "asdfasdfasdf" $j]
$rd xadd bigstream * item 1 value a
}
for {set j 0} {$j < 50000} {incr j} {
$rd read ; # Discard replies
}
set expected_frag 1.7
if {$::accurate} {
# scale the hash to 1m fields in order to have a measurable the latency
for {set j 10000} {$j < 1000000} {incr j} {
$rd hset bighash $j [concat "asdfasdfasdf" $j]
}
for {set j 10000} {$j < 1000000} {incr j} {
$rd read ; # Discard replies
}
# creating that big hash, increased used_memory, so the relative frag goes down
set expected_frag 1.3
}
# add a mass of string keys
for {set j 0} {$j < 500000} {incr j} {
$rd setrange $j 150 a
}
for {set j 0} {$j < 500000} {incr j} {
$rd read ; # Discard replies
}
assert_equal [r dbsize] 500010
# create some fragmentation
for {set j 0} {$j < 500000} {incr j 2} {
$rd del $j
}
for {set j 0} {$j < 500000} {incr j 2} {
$rd read ; # Discard replies
}
assert_equal [r dbsize] 250010
# start defrag
after 120 ;# serverCron only updates the info once in 100ms
set frag [s allocator_frag_ratio]
if {$::verbose} {
puts "frag $frag"
}
assert {$frag >= $expected_frag}
r config set latency-monitor-threshold 5
r latency reset
set digest [r debug digest]
catch {r config set activedefrag yes} e
if {[r config get activedefrag] eq "activedefrag yes"} {
# wait for the active defrag to start working (decision once a second)
wait_for_condition 50 100 {
[s active_defrag_running] ne 0
} else {
fail "defrag not started."
}
# wait for the active defrag to stop working
wait_for_condition 500 100 {
[s active_defrag_running] eq 0
} else {
after 120 ;# serverCron only updates the info once in 100ms
puts [r info memory]
puts [r memory malloc-stats]
fail "defrag didn't stop."
}
# test the the fragmentation is lower
after 120 ;# serverCron only updates the info once in 100ms
set frag [s allocator_frag_ratio]
set max_latency 0
foreach event [r latency latest] {
lassign $event eventname time latency max
if {$eventname == "active-defrag-cycle"} {
set max_latency $max
}
}
if {$::verbose} {
puts "frag $frag"
set misses [s active_defrag_misses]
set hits [s active_defrag_hits]
puts "hits: $hits"
puts "misses: $misses"
puts "max latency $max_latency"
puts [r latency latest]
puts [r latency history active-defrag-cycle]
}
assert {$frag < 1.1}
# due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,
# we expect max latency to be not much higher than 7.5ms but due to rare slowness threshold is set higher
if {!$::no_latency} {
assert {$max_latency <= 30}
}
}
# verify the data isn't corrupted or changed
set newdigest [r debug digest]
assert {$digest eq $newdigest}
r save ;# saving an rdb iterates over all the data / pointers
} {OK}
test "Active defrag big list" {
r flushdb
r config resetstat
r config set hz 100
r config set activedefrag no
r config set active-defrag-max-scan-fields 1000
r config set active-defrag-threshold-lower 5
r config set active-defrag-cycle-min 65
r config set active-defrag-cycle-max 75
r config set active-defrag-ignore-bytes 2mb
r config set maxmemory 0
r config set list-max-ziplist-size 5 ;# list of 500k items will have 100k quicklist nodes
# create big keys with 10k items
set rd [redis_deferring_client]
set expected_frag 1.7
# add a mass of list nodes to two lists (allocations are interlaced)
set val [string repeat A 100] ;# 5 items of 100 bytes puts us in the 640 bytes bin, which has 32 regs, so high potential for fragmentation
set elements 500000
for {set j 0} {$j < $elements} {incr j} {
$rd lpush biglist1 $val
$rd lpush biglist2 $val
}
for {set j 0} {$j < $elements} {incr j} {
$rd read ; # Discard replies
$rd read ; # Discard replies
}
# create some fragmentation
r del biglist2
# start defrag
after 120 ;# serverCron only updates the info once in 100ms
set frag [s allocator_frag_ratio]
if {$::verbose} {
puts "frag $frag"
}
assert {$frag >= $expected_frag}
r config set latency-monitor-threshold 5
r latency reset
set digest [r debug digest]
catch {r config set activedefrag yes} e
if {[r config get activedefrag] eq "activedefrag yes"} {
# wait for the active defrag to start working (decision once a second)
wait_for_condition 50 100 {
[s active_defrag_running] ne 0
} else {
fail "defrag not started."
}
# wait for the active defrag to stop working
wait_for_condition 500 100 {
[s active_defrag_running] eq 0
} else {
after 120 ;# serverCron only updates the info once in 100ms
puts [r info memory]
puts [r info stats]
puts [r memory malloc-stats]
fail "defrag didn't stop."
}
# test the the fragmentation is lower
after 120 ;# serverCron only updates the info once in 100ms
set misses [s active_defrag_misses]
set hits [s active_defrag_hits]
set frag [s allocator_frag_ratio]
set max_latency 0
foreach event [r latency latest] {
lassign $event eventname time latency max
if {$eventname == "active-defrag-cycle"} {
set max_latency $max
}
}
if {$::verbose} {
puts "frag $frag"
puts "misses: $misses"
puts "hits: $hits"
puts "max latency $max_latency"
puts [r latency latest]
puts [r latency history active-defrag-cycle]
}
assert {$frag < 1.1}
# due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,
# we expect max latency to be not much higher than 7.5ms but due to rare slowness threshold is set higher
if {!$::no_latency} {
assert {$max_latency <= 30}
}
# in extreme cases of stagnation, we see over 20m misses before the tests aborts with "defrag didn't stop",
# in normal cases we only see 100k misses out of 500k elements
assert {$misses < $elements}
}
# verify the data isn't corrupted or changed
set newdigest [r debug digest]
assert {$digest eq $newdigest}
r save ;# saving an rdb iterates over all the data / pointers
r del biglist1 ;# coverage for quicklistBookmarksClear
} {1}
test "Active defrag edge case" {
# there was an edge case in defrag where all the slabs of a certain bin are exact the same
# % utilization, with the exception of the current slab from which new allocations are made
# if the current slab is lower in utilization the defragger would have ended up in stagnation,
# keept running and not move any allocation.
# this test is more consistent on a fresh server with no history
start_server {tags {"defrag"} overrides {save ""}} {
r flushdb
r config resetstat
r config set hz 100
r config set activedefrag no
r config set active-defrag-max-scan-fields 1000
r config set active-defrag-threshold-lower 5
r config set active-defrag-cycle-min 65
r config set active-defrag-cycle-max 75
r config set active-defrag-ignore-bytes 1mb
r config set maxmemory 0
set expected_frag 1.3
r debug mallctl-str thread.tcache.flush VOID
# fill the first slab containin 32 regs of 640 bytes.
for {set j 0} {$j < 32} {incr j} {
r setrange "_$j" 600 x
r debug mallctl-str thread.tcache.flush VOID
}
# add a mass of keys with 600 bytes values, fill the bin of 640 bytes which has 32 regs per slab.
set rd [redis_deferring_client]
set keys 640000
for {set j 0} {$j < $keys} {incr j} {
$rd setrange $j 600 x
}
for {set j 0} {$j < $keys} {incr j} {
$rd read ; # Discard replies
}
# create some fragmentation of 50%
set sent 0
for {set j 0} {$j < $keys} {incr j 1} {
$rd del $j
incr sent
incr j 1
}
for {set j 0} {$j < $sent} {incr j} {
$rd read ; # Discard replies
}
# create higher fragmentation in the first slab
for {set j 10} {$j < 32} {incr j} {
r del "_$j"
}
# start defrag
after 120 ;# serverCron only updates the info once in 100ms
set frag [s allocator_frag_ratio]
if {$::verbose} {
puts "frag $frag"
}
assert {$frag >= $expected_frag}
set digest [r debug digest]
catch {r config set activedefrag yes} e
if {[r config get activedefrag] eq "activedefrag yes"} {
# wait for the active defrag to start working (decision once a second)
wait_for_condition 50 100 {
[s active_defrag_running] ne 0
} else {
fail "defrag not started."
}
# wait for the active defrag to stop working
wait_for_condition 500 100 {
[s active_defrag_running] eq 0
} else {
after 120 ;# serverCron only updates the info once in 100ms
puts [r info memory]
puts [r info stats]
puts [r memory malloc-stats]
fail "defrag didn't stop."
}
# test the the fragmentation is lower
after 120 ;# serverCron only updates the info once in 100ms
set misses [s active_defrag_misses]
set hits [s active_defrag_hits]
set frag [s allocator_frag_ratio]
if {$::verbose} {
puts "frag $frag"
puts "hits: $hits"
puts "misses: $misses"
}
assert {$frag < 1.1}
assert {$misses < 10000000} ;# when defrag doesn't stop, we have some 30m misses, when it does, we have 2m misses
}
# verify the data isn't corrupted or changed
set newdigest [r debug digest]
assert {$digest eq $newdigest}
r save ;# saving an rdb iterates over all the data / pointers
}
}
}
} } ;# run_solo
Comment From: oranagra
@moria7757 patch didn't apply because you still have the previous edit in your file:
# wait_for_condition 150 100 {}
wait_for_condition 15000 100 {
if you revert that and get the original version of that file, the patch will apply.
Comment From: moria7757
@moria7757 patch didn't apply because you still have the previous edit in your file:
# wait_for_condition 150 100 {}
wait_for_condition 15000 100 {
if you revert that and get the original version of that file, the patch will apply.
excuse me, i'm confused. based on your reply, first i added 'wait_for_condition 15000 100 {' and commented out 'wait_for_condition 150 100 {' as '# wait_for_condition 150 100 {}' in the tests/unit/memefficiency.tcl file, then ran the command, but i got the error again. second, i added 'r config set loglevel debug', 'puts "before"', 'puts [r memory malloc-stats]', 'puts "\n\n\n"', but i don't know where i should add the following lines, because i can't find the section you mentioned in tests/unit/memefficiency.tcl:
puts "didn't stop"
puts [r info stats]
puts [r memory malloc-stats]
puts "waiting another 10 seconds\n\n\n"
after 10000
puts [r info memory]
puts [r info stats]
set stdout [srv 0 stdout]
puts [exec tail -n 100 < $stdout]
and i didn't run the test again after the second set of changes. can you please describe what i should do?
Comment From: oranagra
@moria7757 please download and unzip this file into tests/unit
and run ./runtest --single unit/defrag --verbose
defrag.tcl.gz
please upload the result as a file.
Comment From: oranagra
sorry, my bad. it got compressed twice. here's a fixed one: defrag.tcl.gz
Comment From: moria7757
@moria7757 please download and unzip this file into tests/unit and run ./runtest --single unit/defrag --verbose
defrag.tcl.gz
please upload the result as a file.
i uploaded the output: https://github.com/moria7757/moria/blob/main/defrag.log
Comment From: oranagra
ohh, only now do i notice: Page size: 65536 (it is usually 4096).
the defragger works fine; i guess we just need to lower the thresholds.
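(For reference, allocator_frag_ratio in the INFO output above is allocator_active divided by allocator_allocated, e.g. 80347136 / 68285144 ≈ 1.18 in the first dump. A small sketch for recomputing it, assuming the plain "field:value" INFO format shown in the logs:)
```tcl
# Recompute allocator_frag_ratio from an INFO MEMORY string (illustration;
# assumes the plain "field:value" lines shown in the logs above).
proc allocator_frag_ratio {info} {
    regexp {allocator_allocated:(\d+)} $info -> allocated
    regexp {allocator_active:(\d+)} $info -> active
    return [expr {double($active) / $allocated}]
}
```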
Comment From: oranagra
@yossigo what do you think? should we adjust all the thresholds of the defrag test in order to make it compatible with systems with large pages, or maybe just skip the test on these?
Comment From: moria7757
@oranagra does this mean i should wait for a new stable version?
Comment From: oranagra
@moria7757 you can safely ignore this error, it doesn't indicate any problem in redis. it's just that the test expects the defragger to bring fragmentation down to some level, but on a system with pages that big (64k rather than 4k) it is not able to reach the target.
the next version will simply skip that test when the page size is bigger than 8k.
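(A hypothetical sketch of how such a page-size guard could look in the test suite; parsing MEMORY MALLOC-STATS here is an assumption for illustration, not necessarily how the actual fix is implemented.)
```tcl
# Hypothetical guard: read the jemalloc page size out of the
# MEMORY MALLOC-STATS report and skip the defrag tests on large-page systems.
proc defrag_test_supported {} {
    set stats [r memory malloc-stats]
    if {[regexp {Page size: (\d+)} $stats -> pagesize] && $pagesize > 8192} {
        return 0 ;# e.g. 64k pages: defrag cannot reach the test's targets
    }
    return 1
}
```
The defrag tests could then be wrapped in something like if {[defrag_test_supported]} { ... }.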
Comment From: Prikalel
i'm facing this error on the main branch; you said to ignore it, ok.