ConcurrentLruCache can be improved.
- manage the collection size manually (the `size` operation of concurrent collections can be heavy; see the sketch after this list)
- check for a cache hit first (the size check is useless on a cache miss)
- reduce the read-lock scope
- use `map.get` to test the cache instead of `queue.remove`
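For example, the manual size-tracking idea could look like this minimal sketch (the class and method names here are illustrative, not Spring's actual code): an `AtomicInteger` counter is updated only on real insertions and removals, so hot paths never call `ConcurrentHashMap.size()`:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: keep an explicit counter so reads never call
// ConcurrentHashMap.size(), which sums per-segment counters and can be costly.
class ManualSizeMap<K, V> {

    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

    private final AtomicInteger size = new AtomicInteger();

    public V putIfAbsent(K key, V value) {
        V previous = this.map.putIfAbsent(key, value);
        if (previous == null) {
            this.size.incrementAndGet();  // count only actual insertions
        }
        return previous;
    }

    public V remove(K key) {
        V removed = this.map.remove(key);
        if (removed != null) {
            this.size.decrementAndGet();
        }
        return removed;
    }

    public int size() {
        return this.size.get();  // O(1), no traversal of the map
    }
}
```

Under contention the counter can lag the map contents by a moment, which is acceptable for a heuristic capacity check.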
Also, I think that using ConcurrentLruCache widely in Spring, instead of the classic synchronized + LinkedHashMap pattern (sketched below), would improve performance.
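For reference, this is the kind of pattern that could be replaced (a generic sketch, not any specific Spring class): an access-ordered `LinkedHashMap` where every read and write funnels through a single lock.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Classic synchronized LRU: LinkedHashMap in access order, one lock for everything.
class SynchronizedLruCache<K, V> {

    private final Map<K, V> cache;

    SynchronizedLruCache(int maxSize) {
        // accessOrder = true makes get() reorder entries;
        // removeEldestEntry evicts once the map grows past maxSize.
        this.cache = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public synchronized V get(K key) {
        return this.cache.get(key);
    }

    public synchronized void put(K key, V value) {
        this.cache.put(key, value);
    }
}
```

Because even a read mutates the access order, `get` must hold the lock too, so readers contend with each other; a lock-free read path avoids that.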
Comment From: bananayong
I tried some further improvements. If anyone is interested, please ping me and I will create a commit.
```java
import static java.util.Comparator.comparingLong;

import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class ImprovedConcurrentLruCache<K, V> {

    private final int maxSize;

    private final ConcurrentHashMap<K, LruEntry<K, V>> cache = new ConcurrentHashMap<>();

    private final Function<K, V> generator;

    public ImprovedConcurrentLruCache(int maxSize, Function<K, V> generator) {
        this.maxSize = maxSize;
        this.generator = generator;
    }

    public V get(K key) {
        // Fast path: lock-free read for a cache hit.
        V cached = getValue(key);
        if (cached != null) {
            return cached;
        }
        synchronized (this.cache) {
            // Re-check under the lock in case another thread generated the value.
            cached = getValue(key);
            if (cached != null) {
                return cached;
            }
            // Evict the least recently used entry when the cache is full.
            // The linear scan is acceptable here: it only runs on the write path,
            // under the lock. (>= instead of == guards against any overshoot.)
            if (this.cache.size() >= this.maxSize) {
                Collection<LruEntry<K, V>> lruEntries = this.cache.values();
                LruEntry<K, V> eldestEntry = Collections.min(lruEntries, comparingLong(e -> e.lastAccess));
                this.cache.remove(eldestEntry.key);
            }
            V value = this.generator.apply(key);
            this.cache.put(key, new LruEntry<>(key, value));
            return value;
        }
    }

    private V getValue(K key) {
        LruEntry<K, V> cached = this.cache.get(key);
        if (cached == null) {
            return null;
        }
        // Skip the volatile timestamp write while the cache is less than half
        // full: eviction is not imminent, so access order does not matter yet.
        return cached.getValue(this.cache.size() < this.maxSize / 2);
    }

    private static class LruEntry<K, V> {

        private final K key;

        private final V value;

        private volatile long lastAccess;

        LruEntry(K key, V value) {
            this.key = key;
            this.value = value;
            this.lastAccess = System.nanoTime();
        }

        V getValue(boolean skipAccess) {
            if (!skipAccess) {
                this.lastAccess = System.nanoTime();
            }
            return this.value;
        }
    }
}
```
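Usage would look like this (a trivial example, assuming the class above):

```java
// Cache up to 256 string lengths; the generator runs once per missing key.
ImprovedConcurrentLruCache<String, Integer> cache =
        new ImprovedConcurrentLruCache<>(256, String::length);

int length = cache.get("hello");  // computed on the first call, cached after
```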
Any feedback is welcome. Thank you.