One strange thing I have observed: I am using the Jedis client to connect to a Redis instance. With the settings appendfsync = always and appendonly = true, I would expect a write to Redis via Jedis/redis-cli to throw an exception/error if the fsync fails. To reproduce the scenario, I deleted appendonly.aof after Redis came up, and I was still able to write data into Redis using both redis-cli and Jedis. If such a thing happens in a production scenario, the client would be living under the illusion that the data is getting synced to the file, whereas the AOF file might have been deleted.
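For reference, a minimal sketch of this reproduction from the Jedis side, assuming a local Redis on the default port with AOF persistence enabled, and assuming appendonly.aof has already been deleted while redis-server keeps running:

```java
import redis.clients.jedis.Jedis;

public class AofDeleteRepro {
    public static void main(String[] args) {
        // Hypothetical host/port; adjust to the instance under test.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Both writes succeed with no exception, even though the AOF path
            // was deleted: Redis still holds the open file descriptor and the
            // fsync on it succeeds.
            jedis.set("k1", "v1");
            jedis.set("k2", "v2");

            // Reads keep working until Redis is restarted.
            System.out.println(jedis.get("k1")); // prints "v1"
        }
    }
}
```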
Comment From: badboy
Seems like normal behaviour to me. Redis keeps the AOF file open, and as long as the file is open the data is still there. Deleting the file path has no immediate effect on it. You can still recover the file (on Linux systems) by copying /proc/$PID_OF_REDIS/fd/$FD_OF_AOF (it is a link, so you should be able to see which one is which).
Reopening the file before every write would be too costly.
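Something like the following rough sketch can do that recovery (assuming Linux, a known redis-server PID, and read permission on its /proc entries; the PID and output filename are placeholders):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RecoverAof {
    public static void main(String[] args) throws IOException {
        long redisPid = 1234L; // placeholder: the PID of redis-server
        Path fdDir = Paths.get("/proc/" + redisPid + "/fd");

        try (DirectoryStream<Path> fds = Files.newDirectoryStream(fdDir)) {
            for (Path fd : fds) {
                // Each entry is a symlink; a deleted-but-still-open file shows
                // up with a target like ".../appendonly.aof (deleted)".
                String target = Files.readSymbolicLink(fd).toString();
                if (target.contains("appendonly.aof")) {
                    // Reading through /proc/<pid>/fd/<n> reads the contents of
                    // the still-open file, even though its path was unlinked.
                    Files.copy(fd, Paths.get("recovered-appendonly.aof"));
                    System.out.println("Recovered " + target + " from " + fd);
                    break;
                }
            }
        }
    }
}
```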
Comment From: ashish2108arora
@badboy "Deleting the file path has no effect on it immediately." Let's say the AOF file got deleted and after that a couple of writes were done, {(k1,v1), (k2,v2)}. These won't throw any exception/error, as discussed earlier, but after a Redis restart these keys will return nil. Thus, an application that is a client of the Redis service can hit a NullPointerException and crash. So, in total, there is data loss as perceived by the client, even with appendonly yes and appendfsync always.
Comment From: badboy
- If your client experiences a NullPointerException because it can't fetch a value, fix the client application: it should handle missing data gracefully (see the sketch at the end of this comment).
- There are a dozen other possibilities for data loss. I'd say a single deleted file is the least of your concerns (disks crash all the time).
- Generally you should have a way to get your data back in case of data loss (backups, a second persistent store in another database, offsite ...).
- A restart is either planned (in which case you should ensure you have data backups) or unplanned (because of bugs, crashes, ... which you should plan for anyway).
In the end it is a trade-off: re-opening the file is expensive (and still won't give you back the deleted data), whereas keeping the file open keeps all data available as long as Redis is running (and it is faster).
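To make the first point concrete, here is a small sketch (assuming Jedis; the class and key names are made up for illustration) of handling a missing key without letting a NullPointerException propagate:

```java
import java.util.Optional;
import redis.clients.jedis.Jedis;

public class SafeLookup {
    // GET on a missing key returns null rather than throwing, so wrap it.
    static Optional<String> lookup(Jedis jedis, String key) {
        return Optional.ofNullable(jedis.get(key));
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Fall back to a default (or to the primary data store) instead of
            // dereferencing a null value and crashing.
            String value = lookup(jedis, "k1").orElse("default-value");
            System.out.println(value);
        }
    }
}
```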
Comment From: lake-z
There has been no activity here for the last 2 years; let's close this issue for now and re-open it if needed.
Comment From: yoav-steinberg
Deleting the file from under Redis is like fiddling with the configuration (for example, disabling persistence altogether), so I won't consider this an issue. Anyway, old -> closing.