Using Redis 2.4.x.

Redis is frequently used as a key/value cache store, and a common pattern for segmenting your keys is to use a format like

    <prefix1>:<prefix2>:<unique_id>

In my case, I use it as a session-store backend as well as a replacement for memcached, so I use keys like session:<unique_id> and cachedstuff:<unique_id>

When I push a release to production, I don't want to blow away sessions, but I frequently need to clear out old cachedstuff:* keys.

Since I have many millions of keys, doing something like:

    redis-cli keys cachedstuff:* | xargs redis-cli DEL

seems impractical.

The only option I currently see is to use a crazy EVAL:

    EVAL "local keys = redis.call('keys', ARGV[1]) \n for i=1,#keys,5000 do \n redis.call('del', unpack(keys, i, math.min(i+4999, #keys))) \n end \n return keys" 0 cachedstuff:*

It seems like this would be a common enough use case to warrant:

    DEL cachedstuff:*

Is this already in the works, or a frequent request? If not, please pass it along as a suggested feature.

Comment From: badboy

This won't happen. KEYS * is already one of the most expensive commands in Redis and should never be used in production. Due to the way keys are stored internally (a hash table), it is simply impossible to solve this in a way that doesn't involve scanning all keys. See #1108 for a similar request.

If you really need to search for keys by pattern consider using SCAN to iterate the keyspace.
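
For illustration, here is a minimal shell sketch of a SCAN-based purge. Note that SCAN requires Redis 2.8+, so the 2.4.x install mentioned above would need an upgrade; the key prefix and COUNT are taken from this thread, everything else is assumed:

    cursor=0
    while :; do
      # Each SCAN reply is the next cursor followed by a batch of matching keys.
      reply=$(redis-cli SCAN "$cursor" MATCH 'cachedstuff:*' COUNT 1000)
      cursor=$(printf '%s\n' "$reply" | head -n 1)
      keys=$(printf '%s\n' "$reply" | tail -n +2)
      # Delete this batch, if any; a returned cursor of 0 ends the iteration.
      [ -n "$keys" ] && printf '%s\n' "$keys" | xargs redis-cli DEL
      [ "$cursor" = "0" ] && break
    done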

Comment From: raymondjplante

@badboy I see, thanks for the quick feedback.

Do you think using something like:

    SCAN 0 MATCH cachedstuff:* COUNT 1000

inside the EVAL statement with a loop is the best option at this time?

Comment From: itapita

You can't run SCAN in a script since it isn't deterministic.

Comment From: badboy

The eval statement still calls KEYS.

Comment From: kurttheviking

@raymondjplante we wrestled with this issue for a while, and the most effective solution we came to was:

1. separate the session store from the cache (e.g. db0 vs db1), and
2. use SCAN+DEL against the cache to purge data or, when needed, a plain FLUSHDB.
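
For example, a rough sketch of that layout (db numbers and key values are placeholders):

    # Sessions live in db 0, cache in db 1 (assumed layout).
    redis-cli -n 0 SET session:abc123 somevalue
    redis-cli -n 1 SET cachedstuff:xyz somevalue
    # On release, wipe only the cache db; sessions in db 0 stay intact.
    redis-cli -n 1 FLUSHDB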

Comment From: rupatel

I am entangled in a similar issue. In cluster mode I want to group keys and store them in the same hash slot, so I stored the keys in the following way:

    SET {user}1
    SET {user}2
    SET {user}3
    SET {issue}1
    SET {issue}2
    SET {issue}3

Redis Cluster applies the hash only to the part between braces ({stuff}), so all user keys will be stored in the same hash slot and all issue keys will be stored in the same hash slot. Now I want to query the entire keyspace for the pattern {user}* and then delete those keys.

Please help me with which command I should use; I want the above use case in cluster mode.

Comment From: badboy

In Redis your best option is to use a key pattern which you can precompute. If you absolutely must search keys by pattern, use SCAN as suggested above.
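
One way to make keys precomputable, sketched here with a hypothetical cachedstuff:index set, is to track every cache key in your own index instead of searching by pattern:

    # Record each cache key in an index set as it is written (index key name is an assumption).
    redis-cli SET cachedstuff:42 somevalue
    redis-cli SADD cachedstuff:index cachedstuff:42
    # Purge: delete every indexed key, then the index itself.
    redis-cli SMEMBERS cachedstuff:index | xargs redis-cli DEL
    redis-cli DEL cachedstuff:index

For very large caches, SPOP with a count argument can drain the index in batches instead of one big SMEMBERS.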

Comment From: rupatel

Will the SCAN command (with the MATCH hint) return keys from all the nodes which make up the cluster, or just from the specific instance? Does SCAN support cluster mode? Iterating all nodes is not good practice.

Comment From: badboy

No, SCAN will not query all nodes, only the current one. Right now, no command will let Redis query other cluster nodes. Yes, SCAN will work in cluster mode, but as stated, only on the node you contact. If "iterating all nodes" is not a working solution for you, you either have to rethink your data design to circumvent this, or Redis might just not be the right tool.

Comment From: rupatel

I understand that communication with other nodes is not a good idea. But according to the Redis Cluster spec, when I store a key with {stuff}, the hash is calculated for just "stuff", so all keys matching the {stuff}* pattern will reside on the same node (same hash slot). Now, since I am using the Jedis lib, how will I know which node to contact? I.e., which hash slot will "stuff" map to, and which node will that hash slot in turn map to? Any suggestions from your side would be helpful.

Comment From: badboy

Yes, { } is used for hash tags, so you can decide that certain keys reside in the same hash slot. You have multiple possibilities for the fetching:

- You have a cluster-aware client library that can calculate the hash slot without contacting Redis and has the hash slot assignments already cached (because it contacted at least one cluster node before).
- You have a cluster-aware client library that will send the command to an arbitrary node and can interpret the return value if it is "ASK ..." or "MOVED ...".
- You do the above things yourself.
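
As a quick illustration of the hash-tag behavior (requires a cluster-enabled Redis; slot numbers vary):

    # Only the text inside { } is hashed, so both keys map to the same slot.
    redis-cli CLUSTER KEYSLOT '{user}1'
    redis-cli CLUSTER KEYSLOT '{user}2'   # same slot number as above
    # CLUSTER SLOTS lists which node currently serves each slot range.
    redis-cli CLUSTER SLOTS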

Comment From: rupatel

This was helpful, but can you please suggest a cluster-aware Java client lib which does this?

Comment From: badboy

Looks like Jedis has this implemented already.

Comment From: HeartSaVioR

@rupatel @badboy In a Redis Cluster environment, the node/slot cache can never be 100% sure that it reflects the current cluster state (that's why ASK / MOVED exist). SCAN doesn't carry any key information, so the command cannot be routed to, nor tell you, the instance that handles the slot we want to scan (meaning no ASK / MOVED redirect occurs). So currently you have to handle it on your own.

You can find the related source code for determining which instance serves a specific slot in JedisClusterInfoCache.discoverClusterSlots(). It also updates the cluster node/slot cache, so you would need to strip that part out.

JedisClusterCRC16.getSlot() will tell you which slot a given key maps to.

P.S. I've already talked about this earlier: https://github.com/xetorthio/jedis/pull/687#issuecomment-57728594 Did you ever read it?

Comment From: HeartSaVioR

Actually JedisCluster could handle it (though it is currently not supported) by updating the slot cache immediately and returning a connection to the Redis instance that serves the slot.

But I still think it would be better for Redis Cluster to have a "scan in slot" feature.
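
For what it's worth, Redis Cluster does expose CLUSTER GETKEYSINSLOT, which, when sent to the node that owns the slot, approximates a "scan in slot" (the slot number below is a placeholder):

    # Find the slot for the hash tag, then list up to 1000 keys stored in it.
    redis-cli CLUSTER KEYSLOT '{user}1'
    redis-cli CLUSTER GETKEYSINSLOT 5474 1000   # 5474 is a placeholder slot number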

Comment From: raymondjplante

@kurttheviking The two-db model makes sense, but I've read in many places that it's discouraged, as multi-db support is being deprecated and is not supported in Redis Cluster. Hence the nice key prefixes instead of multiple databases. I would also like to avoid multiple Redis instances entirely, e.g. having one db on each of two separate Redis servers and switching between them, one for sessions and one for other data.

Comment From: yehosef

Why can't this be implemented internally using the equivalent of a SCAN+DEL/UNLINK? I don't care that it's not deterministic: I'm going to keep running it until all the keys are deleted, and I don't care in what order, because I'm going to delete them all.

I think the two-db option should not be forced on users. There may also be use cases where keys accidentally get added to a database they shouldn't be in and need to be deleted anyway.

Comment From: qm3ster

It has been deprecated because it is, in general, better to launch multiple Redis servers on the same machine rather than using multiple databases. Redis is single threaded. Thus, a single Redis server with multiple databases only uses one CPU core. On the other hand, if multiple Redis servers are used, it is possible to take advantage of multiple CPU cores.

from here

It's not really "deprecated" in the sense that you should dump everything in one db. It's deprecated in the sense that the supported and recommended way of having separate dbs on the same machine (or pool of machines) is to run separate processes. Which makes perfect sense, especially in the cluster case: the performance requirements of the two dbs might differ wildly, so why should they be spread across a cluster of the same size?
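
A minimal sketch of the separate-process approach (ports are placeholders):

    # One process per "database"; each can be flushed and sized independently.
    redis-server --port 6379 --daemonize yes   # sessions
    redis-server --port 6380 --daemonize yes   # cache
    redis-cli -p 6380 FLUSHALL                 # wipes the cache instance only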

Comment From: ethanresnick

Ref https://github.com/antirez/redis/pull/717

Comment From: enjoy-binbin

As described in https://github.com/redis/redis/issues/2042#issuecomment-57679197, I don't think we will add it, so I am closing this.