Affects: 5.3.x
We have a high-volume, latency-sensitive, customer-facing web application built on Spring MVC. Performance profiling shows that `org.springframework.aop.framework.AdvisedSupport.getInterceptorsAndDynamicInterceptionAdvice()` uses 1% of CPU and is always called by `org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor`. Load tests show that forcing the use of `CglibAopProxy$DynamicUnadvisedInterceptor` instead reduces latency by 5%. Our application does not require the use of Advice, so the undoubtedly-broken feature is an acceptable tradeoff in our case.
Based on code diving and a StackOverflow question, proxied beans always use `DynamicAdvisedInterceptor` even when the application defines no advice. This leaves no supported mechanism for avoiding `DynamicAdvisedInterceptor`.
This is a valuable performance optimization for applications that can use it, but clearly many cannot. Implementing the optimization behind a per-application configuration switch would capture the wins when available and cause no harm when not.
Comment From: jhoeller
Is `getInterceptorsAndDynamicInterceptionAdvice` not hitting the cache in your scenario? Or is it the cache lookup that uses that 1% of CPU time? Also, if you are not relying on advice, why are you using proxies for those beans to begin with?
Comment From: jengebr
> Is `getInterceptorsAndDynamicInterceptionAdvice` not hitting the cache in your scenario? Or is it the cache lookup that uses that 1% of CPU time?

It's the cache lookup. In response to a request, the application sends ~300 background tasks to the thread pool, each of which starts to access proxied beans. This generates a CPU micro-spike that makes the `ConcurrentHashMap` lookups hurt more than usual.
> Also, if you are not relying on advice, why are you using proxies for those beans to begin with?
Most are request-scoped singletons.
Comment From: jhoeller
Thanks for the insight. I'll try to revisit our interceptor delegation arrangement there ASAP, maybe there is something we can auto-optimize in such a scenario when we know there is no specific advice for the entire object, in particular for scoped proxies.
Comment From: jhoeller
Actually, for scoped proxies, there is always the `ScopedObject` introduction interface, leading to at least one interceptor in the advice chain. I suppose you are never calling this, hence you would not notice if that introduction interceptor did not apply?
Comment From: jengebr
Right, we do not use that.
Comment From: jhoeller
As a side note: for maximum dispatching performance, it's advisable to use `ObjectProvider<MyRequestScopedBean>` with the target bean marked as plain `@Scope("request")` or `@RequestScope(proxyMode = ScopedProxyMode.NO)`, and then to always call `getObject().myMethodCall()` on the provider handle. No proxies are involved at all then, which is particularly recommendable if no proxy-specific functionality is otherwise being used.
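That arrangement might look roughly like the following sketch, assuming Spring Framework's annotation config (`MyRequestScopedBean`, `Caller`, and `myMethodCall()` are illustrative names, not anything from Spring itself):

```java
@Configuration
class Config {

    // Plain request scope with no scoped proxy generated for the bean.
    @Bean
    @RequestScope(proxyMode = ScopedProxyMode.NO)
    MyRequestScopedBean myRequestScopedBean() {
        return new MyRequestScopedBean();
    }
}

@Service
class Caller {

    private final ObjectProvider<MyRequestScopedBean> provider;

    Caller(ObjectProvider<MyRequestScopedBean> provider) {
        this.provider = provider;
    }

    void handle() {
        // Resolve the current request's instance on demand:
        // no CGLIB/JDK proxy sits in the invocation path.
        provider.getObject().myMethodCall();
    }
}
```

The key design point is that the `ObjectProvider` handle, not a scoped proxy, performs the per-request resolution, so each `myMethodCall()` is a plain virtual call on the target object.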
Generally, CGLIB dispatching can be optimized through `setFrozen(true)` on the proxy factory, which then uses a straight target dispatcher for any unadvised method (enforcing that no advice is added or removed at runtime later on). However, that flag is a bit hidden, and in the case of a scoped proxy it does not actually apply, due to the `ScopedObject` introduction advice unconditionally being present.
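For programmatic proxy creation, that flag lives on `ProxyConfig` and is reachable via `ProxyFactory`; a minimal sketch, with `MyService` as an illustrative target class:

```java
// Sketch using Spring's org.springframework.aop.framework.ProxyFactory.
ProxyFactory factory = new ProxyFactory(new MyService());
factory.setProxyTargetClass(true); // force a CGLIB class proxy
factory.setFrozen(true);           // lock the advisor chain at creation time,
                                   // enabling direct target dispatch for unadvised methods
MyService proxy = (MyService) factory.getProxy();
```

After `setFrozen(true)`, any attempt to add or remove advice on the proxy's `Advised` interface fails, which is what makes the direct-dispatch optimization safe.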
For interface-based JDK proxies, the optimization for an empty chain is minor since it just uses slightly more efficient dispatching then. There is no way to bypass `getInterceptorsAndDynamicInterceptionAdvice` on any kind of proxy there, not even with the frozen flag.
So we could specifically handle introduction advice in `CglibAopProxy`, using a direct target dispatcher for methods that are not part of an introduction interface; this would then work for regular methods on scoped proxies. However, this would also require `setFrozen(true)`; while we can consider setting that flag for scoped proxies by default, we are not going to do that in a maintenance release.
Which leaves us with a situation where `getInterceptorsAndDynamicInterceptionAdvice` remains on the hot path for any such proxy invocation for the time being. I wonder whether we can make that call faster, e.g. using some kind of local lookup table instead of the `ConcurrentHashMap` with its `MethodCacheKey`.
Comment From: jhoeller
It turns out that a straightforward approach is to share the cached interceptors list for the entire `Advised` instance if there is no method-specific Advisor (which is the case for scoped proxies). This bypasses the `ConcurrentHashMap` lookup and replaces it with direct field access for such arrangements, hopefully adequately improving performance for your scenario.
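The shape of that optimization can be shown in a framework-free sketch (class and method names here are illustrative, not Spring's actual implementation; interceptors are modeled as strings and methods by their names):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only -- not Spring's actual code.
class InterceptorChainCache {

    private final boolean hasMethodSpecificAdvisor;
    private final List<String> allInterceptors;

    // Fast path: one shared chain for the whole proxy, plain field access.
    private volatile List<String> sharedChain;

    // Slow path: per-method cache (Spring keys this by MethodCacheKey).
    private final Map<String, List<String>> methodCache = new ConcurrentHashMap<>();

    InterceptorChainCache(boolean hasMethodSpecificAdvisor, List<String> allInterceptors) {
        this.hasMethodSpecificAdvisor = hasMethodSpecificAdvisor;
        this.allInterceptors = List.copyOf(allInterceptors);
    }

    List<String> chainFor(String method) {
        if (!this.hasMethodSpecificAdvisor) {
            // No advisor is method-specific: every method sees the same chain,
            // so skip the map lookup and the cache-key allocation entirely.
            List<String> chain = this.sharedChain;
            if (chain == null) {
                chain = this.allInterceptors;
                this.sharedChain = chain;
            }
            return chain;
        }
        // Method-specific advisors present: fall back to the per-method cache.
        return this.methodCache.computeIfAbsent(method, m -> this.allInterceptors);
    }
}
```

The point of the sketch is the branch: with no method-specific advisor, the chain lookup degrades to a single volatile field read instead of a concurrent map probe per invocation.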
Comment From: jhoeller
A recent improvement in the same area (specific to JDK proxies), also affecting AdvisedSupport cache fields: #30499
Comment From: jengebr
Profiling data shows the lookup is 98% of the difference between `DynamicAdvisedInterceptor` and `DynamicUnadvisedInterceptor`. I can live with that much savings. :)
Two questions:
1. Is it feasible to backport this to 5.3.x?
2. Your solution (probably) solves the performance issue, but is it the right thing to do? It feels like `DynamicUnadvisedInterceptor` is the intended behavior and we're masking that error.
Comment From: jhoeller
Technically there is advice here (the `ScopedObject` introduction), so it is arguably correct that the unadvised interceptor does not apply (aside from it only applying once the configuration is frozen, not expecting any runtime advice registrations). Semantically, the shared interceptor cache approach does not bend those rules; it just identifies method-specific advisors and sets up a corresponding method cache only when actually needed. Also, this change applies to CGLIB as well as JDK proxies, with the latter not having the advised versus unadvised distinction at all.
This is pushed to `main` now and will be available in the upcoming 6.1.4 snapshot (if you have a chance to give that a try). The change is entirely local to `AdvisedSupport`, so you could even try to patch 5.3.x yourself and give it a try there.
As for an official backport, this might be technically feasible but I'd really like to give this a road test in 6.1.4 first. We do not usually backport that kind of change proactively since we tend to see it as part of our wider performance efforts in 6.1.x.
Comment From: jengebr
Working on it today!
> From: Yanming Zhou
> @jengebr Could you confirm the improvement works as expected?
Comment From: jengebr
I implemented the same change on 5.3.31, ran non-production benchmarks, and confirm the following:
- DynamicAdvisedInterceptor remains in place
- AdvisedSupport.getInterceptorsAndDynamicInterceptionAdvice() is now near-zero cost
- Overall application performance has improved as expected, although I won't quote non-prod numbers.
There is also an unexpected yet consistent reduction in application startup time of about 1.5%.
Comment From: jhoeller
Very nice to hear, thanks for the immediate feedback!
I suppose the startup time reduction comes from the initial introspection of all those methods (which used to populate the method cache), since we now perform that introspection only once per class if there is no method-specific advisor to begin with.
Comment From: jhoeller
Out of curiosity, do you have plans to upgrade to Spring Framework 6.x any time soon? Anything holding you back? Also, which JDK generation are you on?
There have been quite a lot of performance-related enhancements in 6.1.x in particular. We've also seen significantly better performance in JDK 17 and 21.
Comment From: jengebr
> Out of curiosity, do you have plans to upgrade to Spring Framework 6.x any time soon? Anything holding you back?
I inquired this morning, and the short answer is organizational inertia. Our in-house framework defines the Spring dependency and upgrading that framework creates breakage risk to many, many applications. Based on your comments, I've started a conversation about whether we do one-off upgrades to performance-critical applications, but even that would take a while.
> Also, which JDK generation are you on? There have been quite a lot of performance-related enhancements in 6.1.x in particular. We've also seen significantly better performance in JDK 17 and 21.
We're on JDK 17, running on a mix of Intel and ARM CPUs.
Thanks for the poke on this!
Comment From: jengebr
Can you confirm the EOL for 5.3 is Dec-31 2024, as listed at https://endoflife.date/spring-framework?
Also, what is the next LTS version?
Comment From: bclozel
@jengebr our official support policies are listed on our website: https://spring.io/projects/spring-framework#support (which I believe the endoflife website copies over). We don't ship LTS versions per se, we just tend to support the last minor version in a generation a bit longer to help with the major upgrade. Our next major version is not announced yet. Once you're on a 6.x version, efforts to upgrade to the latest minor should be minimal.
Comment From: jengebr
The internal 5.3.31 equivalent of this change reached production and had the desired effect: the near-elimination of `getInterceptorsAndDynamicInterceptionAdvice` from our profiles. We are planning to contribute a backport, and internal migrations to 6.1+.
Comment From: jhoeller
@jengebr cool, great to hear!
As for 5.3.x, the last 5.3.x release is actually planned for August 2024, along with the last 6.0.x release. With that remaining timeline, I'm not sure adding such a performance improvement to those lines is still feasible, given that there is always a risk for regressions.
Comment From: jengebr
Thank you, that makes sense. We'll focus on the upgrade. :)