I am using the exact instructions from the Sentinel documentation page on how to set up the configuration files (but my ports start at 5001, not 5000). I am using the default configuration files, with modifications only to where the log files (/var/log/redis_5001.log etc.) and PID files are. I am using daemonize yes.

Redis Master - x.x.x.230 5001
Redis Slave  - x.x.x.231 5001
Redis Slave  - x.x.x.232 5001

Redis itself is functioning fine, with read-only slaves and writes going to the master, etc.

However, I start the sentinels with this configuration. I am running Sentinel as a service, side by side, on each server that runs Redis.

port 6001
dir "/tmp"
pidfile "/var/run/sentinel_6001.pid"
logfile "/var/log/sentinel_6001.log"
daemonize yes
sentinel monitor mymaster x.x.x.230 5001 2
sentinel down-after-milliseconds mymaster 3100
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000

After all 3 instances are started up:

sentinel ckquorum mymaster returns that only 1 sentinel is available. sentinel sentinels mymaster returns that all other sentinels are sdown and disconnected.

If I take the exact same Sentinel configuration file, downgrade to 3.0.7, and fix the options that 3.2 added to the Redis config (such as protected-mode), everything works just fine.

I do not know why; I just spent all day trying to figure out what was wrong.

Comment From: antirez

Hello, what happens if you try to access Sentinel A from Sentinel B's host using redis-cli PING? Are you able to get a PONG reply?

Comment From: kglee79

Experiencing the same issue with the latest Redis stable as of yesterday. The sentinel configs are rewritten by Sentinel and automatically pick up the IPs of all the other sentinels; however, after discovering each other they all go to the sdown state.

7938:X 10 Aug 19:40:22.263 # Sentinel ID is c68792507332aec0fc22e08b61102b5b9f44d21e
7938:X 10 Aug 19:40:22.263 # +monitor master mymaster 172.20.20.36 6379 quorum 2
7938:X 10 Aug 19:40:22.265 * +slave slave 172.20.21.169:6379 172.20.21.169 6379 @ mymaster 172.20.20.36 6379
7938:X 10 Aug 19:40:22.268 * +slave slave 172.20.22.156:6379 172.20.22.156 6379 @ mymaster 172.20.20.36 6379
7938:X 10 Aug 19:40:24.103 * +sentinel sentinel e0eda0be5ff1aa35268a284fabc8308c582446cb 172.20.20.247 26379 @ mymaster 172.20.20.36 6379
7938:X 10 Aug 19:40:25.699 * +sentinel sentinel a124b6be4c872cba80ebfcc36abcc41db68627c4 172.20.20.246 26379 @ mymaster 172.20.20.36 6379
7938:X 10 Aug 19:40:34.109 # +sdown sentinel e0eda0be5ff1aa35268a284fabc8308c582446cb 172.20.20.247 26379 @ mymaster 172.20.20.36 6379
7938:X 10 Aug 19:40:35.743 # +sdown sentinel a124b6be4c872cba80ebfcc36abcc41db68627c4 172.20.20.246 26379 @ mymaster 172.20.20.36 6379

Tried with Sentinel deployed alongside the Redis instances, and also with Sentinel on completely separate instances that had nothing else on them. Deployed on Ubuntu 16.04.

One of the sentinel configs:

daemonize yes
logfile "/var/log/redis/redis-sentinel.log"
dir "/var/lib/redis/sentinel"

sentinel myid c68792507332aec0fc22e08b61102b5b9f44d21e
sentinel monitor mymaster 172.20.20.36 6379 2
sentinel down-after-milliseconds mymaster 10000
sentinel failover-timeout mymaster 30000
# Generated by CONFIG REWRITE
port 26379
maxclients 4064
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel known-slave mymaster 172.20.22.156 6379
sentinel known-slave mymaster 172.20.21.169 6379
sentinel known-sentinel mymaster 172.20.20.247 26379 e0eda0be5ff1aa35268a284fabc8308c582446cb
sentinel known-sentinel mymaster 172.20.20.246 26379 a124b6be4c872cba80ebfcc36abcc41db68627c4
sentinel current-epoch 0
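The s_down state in the log above also shows up in the flags field of the SENTINEL SENTINELS <master> reply. A minimal sketch of checking for it programmatically (a hypothetical helper, not part of Redis, assuming the reply has already been decoded into one dict per sentinel):

```python
# Hypothetical helper (not from this thread): given SENTINEL SENTINELS output
# decoded into dicts, list the peers this sentinel considers subjectively down.
def sdown_sentinels(sentinels):
    """Return "ip:port" for every sentinel whose flags include s_down."""
    down = []
    for s in sentinels:
        flags = s.get("flags", "").split(",")
        if "s_down" in flags:
            down.append("%s:%s" % (s["ip"], s["port"]))
    return down
```

With the state shown in the log above, both peer sentinels would be reported.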

Comment From: digitalpacman

That is the behavior I was experiencing.

Comment From: digitalpacman

@antirez Sorry to leave you hanging. I actually got tired of 3.2.x so I am now running 3.0.7 and everything's working grand.

Comment From: kglee79

I tried the PING as suggested and got the following error:

root@ip-172-20-20-247:~# redis-cli -h 172.20.20.245 -p 26379
172.20.20.245:26379> ping
(error) DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.

bind was set in redis.conf, but not in sentinel.conf. Once I set the "bind" property in sentinel.conf, it worked. The examples should be updated to indicate that "bind" must be set in sentinel.conf when protected mode is on; however, this seems like a bug, since it worked without setting "bind" in previous versions.

Comment From: wmene

@kglee79 This is due to protected-mode being on, as discussed in issue #3279. Adding a bind directive to sentinel.conf is the workaround for the moment.
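Concretely, the workaround amounts to adding something like the following to each sentinel.conf. The address is illustrative; use each sentinel host's own reachable IP, or disable protected mode as later comments do:

```conf
# Workaround for protected mode (issue #3279): either bind explicitly...
bind 172.20.20.247
# ...or opt out of protected mode altogether (only on a trusted network):
# protected-mode no
```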

Comment From: fillest

Wow. I've just killed ~3 hours trying to understand what was happening. I re-checked everything many times: the ports were open; the sockets were listening on 0.0.0.0; redis-cli to the Redis instances on other hosts worked; but redis-cli to the Sentinels on other hosts showed Error: Connection reset by peer (I was not getting DENIED). After adding the following to the Sentinel config

protected-mode no
bind 0.0.0.0

everything works.

This should be documented; I can't find any mention that Sentinel even supports protected-mode or bind.
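The "Connection reset by peer" symptom can be checked from each host without redis-cli. A minimal sketch using only the Python stdlib (host and port values are whatever your sentinels use); note this only exercises the TCP connect, which is where the reset showed up here, not the protected-mode DENIED reply that arrives after a command:

```python
import socket

# Minimal TCP probe (stdlib only). If the sentinel port refuses or resets
# the connection from a remote host while accepting it locally, you are
# hitting the bind/protected-mode behaviour discussed in this issue.
def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, run can_connect("172.20.20.245", 26379) from another sentinel's host and compare with the same call made locally.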

Comment From: larrywax

Please document this behaviour better; I spent almost the entire day at work figuring it out...

Comment From: thethomp

I also spent way too much time trying to troubleshoot this before landing on this page. Please update the documentation; this was very frustrating.

Comment From: jschunlei

I met the same situation.

[root@redis1 etc]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

yum install redis

redis-cli -h 192.168.136.177 -p 26379 ping
result: Error: Connection reset by peer

redis-cli -h 127.0.0.1 -p 26379 ping result: PONG

netstat -tanp | grep 26379
result:
tcp   0   0 0.0.0.0:26379   0.0.0.0:*   LISTEN   13506/redis-sentine
tcp6  0   0 :::26379        :::*        LISTEN   13506/redis-sentine

redis-sentinel.conf

[root@redis1 etc]# cat redis-sentinel.conf | grep ^[^#]
port 26379
dir "/data/hps/redis/data"
sentinel myid 9af8a479648e294b1aee48a33e73bcd626fcb627
sentinel monitor hps 192.168.136.175 6379 2
sentinel down-after-milliseconds hps 60000
sentinel failover-timeout hps 120000
sentinel auth-pass hps huchunlei
sentinel config-epoch hps 0
sentinel leader-epoch hps 0
logfile "/data/hps/redis/logs/sentinel.log"
sentinel known-slave hps 192.168.136.177 6379
sentinel known-slave hps 192.168.136.176 6379
sentinel known-sentinel hps 192.168.136.176 26379 4fa9dcb37e2f0f82566f5c4872db36e8789e1210
sentinel known-sentinel hps 192.168.136.177 26379 24fcadfb3c841c13f3aa982475d1047f84a04368
sentinel current-epoch 0

redis.conf

[root@redis1 etc]# cat redis.conf | grep ^[^#]
bind 192.168.136.175
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised systemd
pidfile /data/hps/redis/data/redis.pid
loglevel notice
logfile /data/hps/redis/logs/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/hps/redis/data
masterauth huchunlei
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass huchunlei
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

Comment From: carlsonerik

Amazing solution. Thank you very much. My environment and solution:

Master:

####Sentinel Process Configure####

dir "/tmp"
daemonize yes
pidfile "/var/run/redis/redis-sentinel.pid"
logfile "/var/log/redis/redis-sentinel.log"

####Sentinel Host configure####

bind 10.0.0.10
port 26379

####sentinel monitor setup####

sentinel monitor redis-conf 10.0.0.10 6379 2
sentinel failover-timeout redis-conf 10000
sentinel parallel-syncs redis-conf 5

Slaves:

####Sentinel Process Configure####

dir "/tmp"
daemonize yes
pidfile "/var/run/redis/redis-sentinel.pid"
logfile "/var/log/redis/redis-sentinel.log"

####Sentinel Host configure####

bind 10.0.0.x
port 26379

####sentinel monitor setup####

sentinel monitor redis-conf 10.0.0.10 6379 2
sentinel failover-timeout redis-conf 10000
sentinel parallel-syncs redis-conf 5

It would result in the following and never recover:

+monitor master redis-conf 10.0.0.10 6379 quorum 2
+sdown master redis-conf 10.0.0.10 6379

All I needed to do was change to bind 0.0.0.0 in all the sentinel confs and include protected-mode no. It started working on the next service restart.

Comment From: mcouillard

Setting bind 0.0.0.0 in the sentinel config on master and slaves also resolved this for me. Seems like a strange solution.

3.2.10, CentOS 7, no SELinux, local network, no passwords, protected-mode no, and previously had localhost and the private IPs set as bind. Redis saw my slaves just fine, but the Sentinels were not communicating. netstat showed appropriate-looking connections, but the sentinel log never showed the sentinels as connecting, until I bound all sentinels to 0.0.0.0.

Comment From: dseira

Same happened to me with the following versions:

CentOS 7  - redis-sentinel 3.2.10
Ubuntu 16 - redis-sentinel 3.0.6

Even with all the IPs listed in the bind option.

As a workaround, bind 0.0.0.0 also worked on both OSes.

Comment From: cwhsu1984

bind 0.0.0.0 (or all the relevant interfaces); otherwise you may not be able to reach the other sentinels. This should really be emphasized in the tutorial!

Comment From: lsadehaan

I had the same problem, but with the bind directive in my sentinel config listing more than one interface (for instance 127.0.0.1 plus the actual server IP). It was hard to find this one... even after coming to this issue, it took a while to figure out what the problem was. I could run redis-cli from the other hosts and PING the sentinel port. Very strange and not documented... frustrating.
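For the multi-interface case, one directive worth knowing about (not mentioned in this thread; it exists in Sentinel 3.2+ for NAT and Docker setups) lets you pin the address a sentinel advertises to its peers, which may help when the wrong bind address is being announced. The IP below is illustrative:

```conf
# Sketch: advertise one specific reachable address to peer sentinels.
sentinel announce-ip 10.0.0.11
sentinel announce-port 26379
```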

Comment From: zonArt

(Quoting @fillest's comment above about adding protected-mode no and bind 0.0.0.0 to the sentinel config.)

Wow thank you so much, I was struggling on this one for weeks

Comment From: jgwinner

Big thumbs up. BUT:

The docs REALLY need to be updated. I just lost a day on this trying to set up a simple test HA setup with 2 servers before moving to production.