Right now the EXPIRE command only allows setting an expiration time on a key. It would be cool to have the possibility to set one on a field of a hash object. For example:

HSET key field "Hello"
EXPIRE key field 10

In the example above the EXPIRE command sets an expiration time of 10 seconds on the hash field, and not on the key object as a whole.

Comment From: badboy

That was discussed before: #167

Comment From: scottix

I ran into this issue just today.

Thought I would give a use case scenario.

It comes back to why the hash structure exists in the first place.

The main reason I want to use a hash is the HLEN command. Name-spacing keys is not an effective way to get the length (it takes too long with the current commands). I have one key with a fixed number of fields, and I need an expire to remove stale fields.

With my current solution, I have to do this on every insert:

// Additional overhead with no expire:
// HGETALL, iterate through the fields to check each timestamp,
// HDEL to remove the stale fields.

HLEN to get the size
if length < max: HSETNX key field timestamp

// With native expiration, the timestamp as the value would also be unnecessary
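The cleanup-on-insert loop above can be sketched in Python, using a plain dict (field -> timestamp) in place of the Redis hash; MAX_FIELDS and the function name are illustrative, not a real client API.

```python
import time

MAX_FIELDS = 100  # illustrative fixed size of the hash
h = {}            # field -> timestamp, standing in for the Redis hash

def insert(field, ttl_seconds, now=None):
    now = time.time() if now is None else now
    # "HGETALL" + timestamp check + "HDEL": drop stale fields first
    for f, ts in list(h.items()):
        if now - ts > ttl_seconds:
            del h[f]
    # "HLEN": only insert while under the fixed size
    if len(h) < MAX_FIELDS:
        h.setdefault(field, now)  # "HSETNX"
        return True
    return False
```

Every insert pays the cost of a full scan, which is exactly the overhead the comment complains about.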

For Sets and Lists I can understand not implementing this, because those structures are more complex.

Comment From: kapcod

You can use a regular hash for the values and a sorted set to handle expirations, so reading from such a hash becomes something like this:

keys = ZRANGEBYSCORE(zkey, 0, now)
if keys.size > 0
  ZREMRANGEBYSCORE(zkey, 0, now)
  HDEL(hkey, *keys)
end
val = HGET(hkey, key)

This way every read cleans expired keys out of the hash. Alternatively, the cleanup could be done every few seconds/minutes by a background task.
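Here is a minimal Python sketch of that hash-plus-sorted-set pattern, simulating the two Redis structures with in-memory dicts (all names are illustrative, not a real client API):

```python
import time

hash_data = {}     # "hkey": field -> value (the hash)
expiry_index = {}  # "zkey": field -> expiration timestamp (the sorted-set score)

def hset_with_ttl(field, value, ttl_seconds, now=None):
    """HSET the value and ZADD its expiration time as the score."""
    now = time.time() if now is None else now
    hash_data[field] = value
    expiry_index[field] = now + ttl_seconds

def hget_with_cleanup(field, now=None):
    """ZRANGEBYSCORE(0, now) + ZREMRANGEBYSCORE + HDEL, then HGET."""
    now = time.time() if now is None else now
    expired = [f for f, ts in expiry_index.items() if ts <= now]
    for f in expired:
        expiry_index.pop(f, None)
        hash_data.pop(f, None)
    return hash_data.get(field)
```

The sorted set acts purely as an expiration index; the hash stays the source of truth for values.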

Comment From: YvesChan

Creating another sorted set to handle expirations may be a good idea... just considering the overhead.

Comment From: optimuspaul

It would be a game changer for me if I could expire specific keys in a hash. I don't have the luxury at this time to do the sorted set thing.

Comment From: subnetmarco

@antirez is there an implementation problem that prevents this feature from being built?

Comment From: optimuspaul

Why did this get closed?

Comment From: mattsta

Yup, implementation problem.

Small hashes can be stored in ziplists which is just a length-prefixed arrangement of your field-value pairs. So, (abstractly) if you do HSET key field1 val1 what Redis stores is: [6]field1[4]val1. If you add field2 with val2, the value of key becomes [6]field1[4]val1[6]field2[4]val2. There's no way to reference individual hash fields in that situation for expiration.

Larger hashes get converted to actual hash tables, but even then, Redis has no way to address an individual hashtable entry for global expiration behavior.

If you need expiration for "data is invalid after X seconds" reasons, you can store another field in your hash with [fieldname]_expiresAt then always retrieve that with your [fieldname] to check if the data is still valid. Now, that obviously doesn't qualify the data for auto-expiration under memory pressure, but you could also use HSCAN to scan your hashes periodically, read all your _expiresAt fields, then manually delete values outside the expiration time.
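The "[fieldname]_expiresAt" pattern can be sketched like this in Python, with a plain dict in place of the Redis hash (a real client would use HSET/HGET, and HSCAN for the periodic sweep); the function names are illustrative:

```python
import time

h = {}  # stands in for the Redis hash: field -> value, field_expiresAt -> ts

def hset_ex(field, value, ttl_seconds, now=None):
    """Store the value plus a companion <field>_expiresAt timestamp."""
    now = time.time() if now is None else now
    h[field] = value
    h[field + "_expiresAt"] = now + ttl_seconds

def hget_ex(field, now=None):
    """Return the value only while still valid; otherwise drop the pair."""
    now = time.time() if now is None else now
    expires_at = h.get(field + "_expiresAt")
    if expires_at is not None and expires_at <= now:
        h.pop(field, None)                 # manual "HDEL" of the stale pair
        h.pop(field + "_expiresAt", None)
        return None
    return h.get(field)
```

As the comment notes, this only enforces validity at read time; reclaiming memory still requires a separate sweep.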

Comment From: yangxing5200

On Redis 2.8.17 (redis-64), executing EXPIRE key field 10 returns:

ERR wrong number of arguments for 'expire' command

@thefosk

Comment From: scottix

Based on what @mattsta said, hashes are more of a hack on top of keys. A workaround could be the following:

// Set the hash fields as plain keys with expirations
SETEX Key1Field1 Time Value
SETEX Key1Field2 Time Value

// Return the list of keys based on the hash prefix
~~KEYS Key1~~
SCAN 0 MATCH Key1

// If you need the values
MGET Key1Field1 Key1Field2

That removes the expiry logic from your application and relies on Redis's own expiry logic.

Now if we had a command like MGETALL Key1*, which worked like HGETALL, we would be golden.

Comment From: badboy

Except: please never use KEYS in production, and for the same reason there won't be a MGETALL with patterns.

Comment From: kapcod

You can use SCAN, it doesn't block and is fast enough.

Comment From: pensierinmusica

+1 Any chance this feature could be reconsidered?

Comment From: danielsan

+1 Here we are, almost 5 years after the first request, and Redis still does not support the expiration of individual hash fields :(

Many people would love to have that feature available in Redis

Comment From: styfle

This looks like a duplicate of #167

The issue might get a little more visibility if everyone clicks the 👍 on the original post.

Comment From: ipapapa

For those who are interested in how to add such a feature on the client, we recently added it on Dynomite's client https://github.com/Netflix/dyno/wiki/Expire-Hash

Comment From: ghost

I still vote for this feature to be implemented. Sure it can be done manually but then what's the point.

Comment From: jpereira

+1

Comment From: Chickyky

+1 for this feature

Comment From: AnupamaMadupuTR

+1 for this feature

Comment From: peachestao

Is there any progress on this?

Comment From: yossigo

Hash field expiration comes up often, however natively supporting it is going to involve a lot of resource overhead and a lot of additional complexity (with any design trading off a bit of one for the other). As such, there needs to be a very strong argument for going in this direction.

We can approach this also in a different way: instead of supporting sub-key expiration, we can try to make it easier to compose multi-key "objects". For example, instead of

HSET user:1234 name "My Name" volatileVal "MyVal"

which lacks the ability to set up expiration for volatileVal, I could use

HSET user:1234 name "MyName"
SET user:1234:volatileVal "MyVal" EX 100

The main argument against this pattern today is that Redis knows nothing about the relationship of keys, so it's all up to the client to make sure that user:1234 gets deleted along with user:1234:volatileVal, etc. This gets worse when we consider eviction, which may evict some keys thereby breaking the integrity of the object.

But I think that the solution to this problem may actually be simpler and more incremental in nature. Basically what it takes is providing Redis with some hints about how keys are related and make up more complex objects. At the minimum, such related objects should be deleted together, atomically, when explicitly deleted or evicted. There could be more to it of course (schema integrity validation, anyone?).

I think this approach may have additional benefits beyond solving the field expiration problem. For example, it makes it easier to define and manipulate nested structures, e.g.

HSET user:1234 name "MyName"
SADD user:1234:friendIds 1 2 3 4 5

This is of course just a very rough initial idea, but I'm curious how it fares vs. plain hash field expiration.

Comment From: eyjian

+1 for this feature

Comment From: Chickyky

You guys can use my package, which supports expiring a field/member in a hash/set/sorted-set, as a workaround:

https://www.npmjs.com/package/redis-extend

Comment From: chenyang8094

can try this: https://github.com/alibaba/TairHash

Comment From: jinyius

https://docs.keydb.dev/docs/commands/#expiremember

Comment From: AmreshSinha

+1 for this feature

Comment From: seunggabi

+1

Comment From: skumaridas10

Is there any implementation for putAll and expiry of hash operations together in a single transaction in reactive Redis?

Comment From: mojirml

+1 for this feature

Comment From: seunggabi

A workaround, assuming a TTL of 2 days: rotate writes across three keys by day index.

# index = day(0~364) % 3

HSET key_{index=0} field value
HSET key_{index=1} field value
HSET key_{index=2} field value
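The day-based key rotation above can be sketched in Python; BUCKETS and bucket_key are illustrative names, and the idea is that each whole key then carries an ordinary 2-day EXPIRE:

```python
import time

BUCKETS = 3  # rotate across three hash keys

def bucket_key(base, now=None):
    """Return base_{day % 3}: the hash key to write today's fields into."""
    now = time.time() if now is None else now
    day = int(now // 86400)  # 86400 seconds per day
    return f"{base}_{day % BUCKETS}"
```

Writes always go to today's bucket; since each bucket key expires two days after its last touch, stale fields disappear with their bucket.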

Comment From: Rush

I created a module to achieve this https://github.com/Rush/redis_expiremember_module

  • Built in Rust
  • Strives for a minimal performance footprint

It's not as good as a native implementation would be, but we use it with success

Comment From: moticless

Heads up, we've started implementing hash field expiration, and it's planned for an upcoming release.

Comment From: subnetmarco

Glad to see this is being worked on!