Hi Spring team!

I have a requirement on my project for the cache to be replaced/refreshed entirely in response to some event (a Kafka message, for example), with the new content populated in one go from a REST API call. I prefer to work with it through the cache abstraction, because it looks like a cache, it behaves like a cache and it is a cache. BUT I cannot do this with the current interface of org.springframework.cache.Cache and had to fall back to a custom implementation. So I was curious whether there are reasons why such an operation should not be added, or maybe it's legacy and we're stuck with it?

Best regards, Andrew

Comment From: sbrannen

You can invoke org.springframework.cache.Cache#invalidate() and then repopulate the cache.

Have you tried that?

Comment From: CyberpunkPerson

You can invoke org.springframework.cache.Cache#invalidate() and then repopulate the cache.

As I mentioned above, I need to invalidate and populate the entire cache at the same time; otherwise, if a key is requested during the refresh (invalidation + population), the value could be null -> IllegalStateException. I've achieved atomicity by swapping the reference to a HashMap (the cache is read-only) once the new content is ready.

It works fine, but I'm curious why org.springframework.cache.Cache does not offer such a method 🤔
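For reference, the custom implementation is roughly the following (a trimmed-down sketch; `SwappableMapCache` and `replaceAll` are just the names I use here, and individual writes are unsupported on purpose because the cache is read-only):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicReference;

import org.springframework.cache.Cache;
import org.springframework.cache.support.SimpleValueWrapper;

public class SwappableMapCache implements Cache {

	private final String name;

	// readers always see either the previous or the new snapshot, never a half-filled one
	private final AtomicReference<Map<Object, Object>> store =
			new AtomicReference<>(Collections.emptyMap());

	public SwappableMapCache(String name) {
		this.name = name;
	}

	/** Replace the entire content in one step, e.g. when a Kafka event arrives. */
	public void replaceAll(Map<Object, Object> freshEntries) {
		this.store.set(Collections.unmodifiableMap(new HashMap<>(freshEntries)));
	}

	@Override
	public String getName() {
		return this.name;
	}

	@Override
	public Object getNativeCache() {
		return this.store.get();
	}

	@Override
	public ValueWrapper get(Object key) {
		Object value = this.store.get().get(key);
		return (value != null ? new SimpleValueWrapper(value) : null);
	}

	@Override
	@SuppressWarnings("unchecked")
	public <T> T get(Object key, Class<T> type) {
		return (T) this.store.get().get(key);
	}

	@Override
	@SuppressWarnings("unchecked")
	public <T> T get(Object key, Callable<T> valueLoader) {
		Object value = this.store.get().get(key);
		if (value != null) {
			return (T) value;
		}
		try {
			return valueLoader.call();
		}
		catch (Exception ex) {
			throw new ValueRetrievalException(key, valueLoader, ex);
		}
	}

	// the cache is read-only: single-entry writes are not supported, only the full swap
	@Override
	public void put(Object key, Object value) {
		throw new UnsupportedOperationException("Use replaceAll(..) instead");
	}

	@Override
	public void evict(Object key) {
		throw new UnsupportedOperationException("Use replaceAll(..) instead");
	}

	@Override
	public void clear() {
		this.store.set(Collections.emptyMap());
	}
}
```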

Comment From: snicoll

As I mention above

Sorry, it wasn't clear to me that you meant emptying the cache and loading it again. I am not sure what you mean by the IllegalStateException; there's no atomicity guarantee at that abstraction level. Can you please share an example that showcases the use case you've described (in particular one showing the IllegalStateException)?

Comment From: CyberpunkPerson

Can you please share an example that showcase the use case you've described (in particular showing the IllegalStateException

Ok. Let's imagine we have three services: admin, view and black box

  • black box - computes some entities
  • admin - administers the entities' metadata
  • view - returns full entity data to clients

In my design, view holds a cache of metadata (about 200 items, which can be updated or extended); when a client requests their set of entities, those entities are enriched with the metadata. The metadata cache is refreshed by a Kafka event.

So with org.springframework.cache.Cache#invalidate() + repopulation entry by entry (..forEach(Cache#put())), if a client sends a request during the refresh it can hit a key that has not been repopulated yet and get a null value, which in terms of the service means an IllegalStateException (entities + broken metadata).
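With the existing API the refresh would look roughly like this (simplified sketch):

```java
import java.util.Map;

import org.springframework.cache.Cache;

public class NaiveMetadataRefresh {

	static void refresh(Cache metadataCache, Map<String, Object> freshMetadata) {
		metadataCache.invalidate();                  // from here on the cache is empty...
		freshMetadata.forEach(metadataCache::put);   // ...until the last put() has run
		// a get(key) issued in between can return null and breaks the enrichment
	}
}
```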

I need an atomic replacement of the full cache content. Right now it's done by swapping the reference to a HashMap.
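Wired to the Kafka event it looks roughly like this (simplified sketch; the topic name and MetadataClient are placeholders, and SwappableMapCache is the made-up class from the sketch above):

```java
import java.util.Map;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class MetadataRefresher {

	private final SwappableMapCache metadataCache;   // the custom cache sketched above
	private final MetadataClient metadataClient;     // placeholder for the admin service's REST client

	public MetadataRefresher(SwappableMapCache metadataCache, MetadataClient metadataClient) {
		this.metadataCache = metadataCache;
		this.metadataClient = metadataClient;
	}

	@KafkaListener(topics = "metadata-updated")
	public void onMetadataUpdated(String event) {
		// build the complete snapshot first, then publish it with a single reference swap
		Map<Object, Object> fresh = this.metadataClient.fetchAllMetadata();
		this.metadataCache.replaceAll(fresh);
	}

	/** Placeholder for the real client, only here to keep the sketch self-contained. */
	interface MetadataClient {

		Map<Object, Object> fetchAllMetadata();
	}
}
```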

Metadata cache availability is a mandatory condition for the view service's org.springframework.boot.actuate.health.Health#up(), so it always has to be there and consistent.
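The health check for that is roughly this (simplified sketch, again against the made-up SwappableMapCache):

```java
import java.util.Map;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class MetadataCacheHealthIndicator implements HealthIndicator {

	private final SwappableMapCache metadataCache;

	public MetadataCacheHealthIndicator(SwappableMapCache metadataCache) {
		this.metadataCache = metadataCache;
	}

	@Override
	public Health health() {
		// the view service is only considered UP while a metadata snapshot is present
		int size = ((Map<?, ?>) this.metadataCache.getNativeCache()).size();
		return (size > 0
				? Health.up().withDetail("metadataEntries", size).build()
				: Health.down().withDetail("reason", "metadata snapshot is empty").build());
	}
}
```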

Hope this makes the picture clearer. My question is still the same: are there any reasons why such an operation should not be added to org.springframework.cache.Cache?

Comment From: snicoll

Are any reasons why it should not to be added to org.springframework.cache.Cache?

Yes, a cache is not meant to be used this way. A cache is an opt-in feature whose purpose is to improve the performance of an application; it shouldn't be mandatory for the application to work. So, if some entries aren't in the cache, whatever is responsible for computing them should compute them.
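For example, with the Callable variant of get a miss is simply resolved on the fly; a minimal, self-contained illustration (loadFromAdminService stands in for whatever actually computes the value in your setup):

```java
import org.springframework.cache.Cache;
import org.springframework.cache.concurrent.ConcurrentMapCache;

public class ValueLoaderExample {

	public static void main(String[] args) {
		Cache cache = new ConcurrentMapCache("metadata");

		// a miss is not an error: the value loader computes and stores the entry on demand
		String value = cache.get("entity-type-1", () -> loadFromAdminService("entity-type-1"));
		System.out.println(value);
	}

	// stand-in for the real lookup, e.g. a REST call to the admin service
	private static String loadFromAdminService(String key) {
		return "metadata for " + key;
	}
}
```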

It looks like you're trying to use the cache as some sort of transactional store, and the cache abstraction is not designed that way. In light of that, I am going to close this; thanks for the suggestion in any case.

Comment From: CyberpunkPerson

Alright 🤔 interesting point, thanks!