As seen in FasterXML/jackson-databind#3665, the approach taken in our ConcurrentLruCache
implementation can result in increased heap memory consumption because of how the read operations queue is structured.
We've experimented with an alternative solution that "flattens" that queue, trading arrays of AtomicReference for AtomicReferenceArray. This results in a slight performance decrease but looks acceptable for our use case. We could also consider decreasing the default size of the queues: they're currently sized as "number of CPUs x fixed size", and the use cases present in Spring Framework probably don't need this much memory by default.
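For illustration, here is a minimal sketch of the trade-off described above. The class and method names are hypothetical (not the actual ConcurrentLruCache internals); the point is that an array of N AtomicReference instances allocates N separate objects, each with its own header, while a single AtomicReferenceArray stores all slots in one object:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class ReadQueueSketch {

    // Before (assumed shape): one AtomicReference object per slot,
    // i.e. N separate objects plus an array of pointers to them.
    static AtomicReference<String>[] legacyQueue(int capacity) {
        @SuppressWarnings("unchecked")
        AtomicReference<String>[] slots = new AtomicReference[capacity];
        for (int i = 0; i < capacity; i++) {
            slots[i] = new AtomicReference<>();
        }
        return slots;
    }

    // After (assumed shape): a single AtomicReferenceArray holds all
    // slots in one backing object, removing the per-slot object headers.
    static AtomicReferenceArray<String> flattenedQueue(int capacity) {
        return new AtomicReferenceArray<>(capacity);
    }

    public static void main(String[] args) {
        AtomicReferenceArray<String> queue = flattenedQueue(4);
        queue.lazySet(0, "key-A");                 // record a read
        String drained = queue.getAndSet(0, null); // drain the slot
        System.out.println(drained);
    }
}
```

The slight performance decrease mentioned above would come from losing per-slot object locality and the cheap per-slot reference publication; in exchange, the flattened layout avoids allocating "number of CPUs x fixed size" AtomicReference instances up front.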