org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver#getHandlerMappings, code line 157: I think the assignment handlerMappings = this.handlerMappings may be unnecessary, because visibility is already guaranteed by volatile and synchronized.

Comment From: quaff

You are wrong. https://github.com/spring-projects/spring-framework/blob/master/spring-core/src/main/java/org/springframework/util/function/SingletonSupplier.java#L90

Comment From: wind57

@quaff I do not know how showing an example of the same pattern in a different class is proof of anything. I understand your point (probably), but this is not an argument.

@xichaodong this is very complicated, seriously. First of all, visibility is guaranteed by the JLS, and proper terms need to be used: the relation is called happens-before; synchronized and volatile are implementations of that specification. Otherwise, you can start from here: https://stackoverflow.com/questions/47638396/publishing-and-reading-of-non-volatile-field/62845008#62845008 (disclaimer: that is my answer).

If you drop the volatile read you propose removing (i.e. the line handlerMappings = this.handlerMappings;), this pattern is badly broken. The re-check of the volatile field inside the synchronized block is crucial for the double-checked locking pattern to work in the first place.

Suppose ThreadA executes if (handlerMappings == null) { and sees null, thus entering the block; before this same ThreadA reaches synchronized (this) {, some other ThreadB already makes handlerMappings != null (it writes a value there). Now ThreadA enters the synchronized block, does not execute handlerMappings = this.handlerMappings; (since you removed it), performs the check if (handlerMappings == null) {, and since handlerMappings is a local variable that was null before, it enters the block and creates a second instance. As such, this is not a singleton anymore; it is terribly broken.
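The interleaving described above can be simulated deterministically in a single thread, using a test hook to stand in for ThreadB's write (a sketch; BrokenDcl, getInstanceBroken, and the hook are illustrative names, not Spring code):

```java
public class BrokenDcl {
    // stands in for the volatile field (e.g. handlerMappings); package-visible for the demo
    static volatile Object instance;

    // Broken variant: the local snapshot is NOT refreshed inside the synchronized block.
    static Object getInstanceBroken(Runnable betweenCheckAndLock) {
        Object local = instance;            // racy volatile read; "ThreadA" sees null
        if (local == null) {
            betweenCheckAndLock.run();      // test hook: "ThreadB" publishes a value right here
            synchronized (BrokenDcl.class) {
                // missing: local = instance;  <-- the removed re-read
                if (local == null) {        // local is still the stale null
                    local = new Object();
                    instance = local;       // overwrites ThreadB's instance: no longer a singleton
                }
            }
        }
        return local;
    }

    // Returns true iff the caller got the instance "ThreadB" published (it does not).
    static boolean demo() {
        instance = null;
        Object published = new Object();
        Object got = getInstanceBroken(() -> instance = published);
        return got == published;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints false: two distinct "singletons" were created
    }
}
```

With the re-read restored inside the synchronized block, the stale local would be replaced by ThreadB's value and the second allocation would never happen.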

This PR should be closed, as the code is correct.

Comment From: xichaodong

@quaff @wind57 synchronized flushes the latest value of the shared variable to main memory before unlocking, and invalidates the shared variable in working memory before locking, so I think the repeated read is unnecessary. By the way, I know double-checked locking, but I have never seen the variable re-read like that in a double-checked singleton.

Comment From: xichaodong

@quaff @wind57 common singleton pattern

```java
public class Singleton {
    private static volatile Singleton uniqueSingleton;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (null == uniqueSingleton) {
            synchronized (Singleton.class) {
                if (null == uniqueSingleton) {
                    uniqueSingleton = new Singleton();
                }
            }
        }
        return uniqueSingleton;
    }
}
```

Comment From: wind57

There is no such thing as "main memory" and "working memory". I do not know where you got these terms from, but they are simply wrong. "Flushing" is also wrong, because visibility is guaranteed when a subsequent read uses the same lock. How that is implemented at the CPU level (draining the store buffer, plus the cache coherence protocol keeping CPU caches in sync) is irrelevant. What exactly is not clear in the explanation I provided to you?

It's funny how you say that you have never seen a "repeat load", only to provide an example immediately after that does it:

```java
synchronized (Singleton.class) {
    if (null == uniqueSingleton) {  // <-- this IS a volatile load
```

It should also be easy to prove that once you remove that volatile load, things are broken. I don't have time now, but a JCStress test should be able to show it.

Comment From: wind57

You might be stuck in a world of links over the internet where people mention things they don't really understand, and I don't blame you. Online sources are full of messy and wrong terminology around volatile, and very few people truly, honestly understand it (I am not one of them, but I am trying my best to get there). If you really want to dive into this: Aleksey Shipilev's talks and blog posts are your main source of truth, IMHO.

Comment From: quaff

> @quaff @wind57 common singleton pattern
>
> ```java
> public class Singleton {
>     private static volatile Singleton uniqueSingleton;
>
>     private Singleton() {
>     }
>
>     public static Singleton getInstance() {
>         if (null == uniqueSingleton) {
>             synchronized (Singleton.class) {
>                 if (null == uniqueSingleton) {
>                     uniqueSingleton = new Singleton();
>                 }
>             }
>         }
>         return uniqueSingleton;
>     }
> }
> ```

null == xxx is equivalent to xxx == null, which is more elegant.

Comment From: quaff

> There is no such thing as "main memory" and "working memory". I do not know where you got these terms from, but they are simply wrong. "Flushing" is also wrong, because visibility is guaranteed when a subsequent read uses the same lock. How that is implemented at the CPU level (draining the store buffer, plus the cache coherence protocol keeping CPU caches in sync) is irrelevant. What exactly is not clear in the explanation I provided to you?
>
> It's funny how you say that you have never seen a "repeat load", only to provide an example immediately after that does it:
>
> ```java
> synchronized (Singleton.class) {
>     if (null == uniqueSingleton) {  // <-- this IS a volatile load
> ```
>
> It should also be easy to prove that once you remove that volatile load, things are broken. I don't have time now, but a JCStress test should be able to show it.

@wind57 Could you explain why we need to introduce a local variable here?

Comment From: xichaodong

@wind57 I had reached a dead end before; I thought that

```java
Map<String, Object> handlerMappings = this.handlerMappings;
handlerMappings = this.handlerMappings;
if (handlerMappings == null) {
```

and

```java
if (this.handlerMappings == null) {
```

were not equivalent, but they actually are.

Comment From: xichaodong

> There is no such thing as "main memory" and "working memory". I do not know where you got these terms from, but they are simply wrong. "Flushing" is also wrong, because visibility is guaranteed when a subsequent read uses the same lock. How that is implemented at the CPU level (draining the store buffer, plus the cache coherence protocol keeping CPU caches in sync) is irrelevant. What exactly is not clear in the explanation I provided to you?
>
> It's funny how you say that you have never seen a "repeat load", only to provide an example immediately after that does it:
>
> ```java
> synchronized (Singleton.class) {
>     if (null == uniqueSingleton) {  // <-- this IS a volatile load
> ```
>
> It should also be easy to prove that once you remove that volatile load, things are broken. I don't have time now, but a JCStress test should be able to show it.

> @wind57 Could you explain why we need to introduce a local variable here?

@quaff I have the same question.

Comment From: wind57

@quaff that is a hard-core optimization. In the classical double-checked locking there are three volatile reads; with a local variable there are two. Moreover, with a local variable there is a single racy read, meaning a single read of the volatile that is not protected by the synchronized block (as opposed to two in the classic example).

For x86 this does not matter, as no re-orderings are done for volatile reads; on other platforms (like ARM), it matters.
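The two shapes being compared can be written side by side with the volatile reads annotated (a sketch using a hypothetical ReadCount class, not the actual Spring code):

```java
import java.util.HashMap;
import java.util.Map;

class ReadCount {
    private volatile Map<String, Object> mappings;

    // Classic double-checked locking: up to three volatile reads in one call.
    Map<String, Object> classic() {
        if (mappings == null) {                    // volatile read 1 (racy)
            synchronized (this) {
                if (mappings == null) {            // volatile read 2 (under the lock)
                    mappings = new HashMap<>();
                }
            }
        }
        return mappings;                           // volatile read 3
    }

    // Local-variable variant (the getHandlerMappings shape): at most two volatile
    // reads, and only one of them happens outside the synchronized block.
    Map<String, Object> withLocal() {
        Map<String, Object> local = this.mappings; // volatile read 1 (racy)
        if (local == null) {
            synchronized (this) {
                local = this.mappings;             // volatile read 2 (under the lock)
                if (local == null) {
                    local = new HashMap<>();
                    this.mappings = local;
                }
            }
        }
        return local;                              // plain read of a local, no volatile access
    }
}
```

Both methods are safe; the local-variable version simply does fewer volatile accesses, which is where the performance argument comes from.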

Comment From: xichaodong

> @quaff that is a hard-core optimization. In the classical double-checked locking there are three volatile reads; with a local variable there are two. Moreover, with a local variable there is a single racy read, meaning a single read of the volatile that is not protected by the synchronized block (as opposed to two in the classic example).
>
> For x86 this does not matter, as no re-orderings are done for volatile reads; on other platforms (like ARM), it matters.

@wind57 I thought about it for a long time and still don't understand. Do you have a relevant paper or blog post?

Comment From: quaff

> @quaff that is a hard-core optimization. In the classical double-checked locking there are three volatile reads; with a local variable there are two. Moreover, with a local variable there is a single racy read, meaning a single read of the volatile that is not protected by the synchronized block (as opposed to two in the classic example).
>
> For x86 this does not matter, as no re-orderings are done for volatile reads; on other platforms (like ARM), it matters.

It's a performance improvement, not a safety enhancement?

Comment From: zakalibit

@xichaodong this might be a relevant article: Volatile and memory barriers

@quaff looks like it; there is no extra safety in using a local variable, unless CollectionUtils.mergePropertiesIntoMap() throws (which it should not, per its documentation), in which case the local would avoid setting this.handlerMappings on error. And yes, it saves a volatile read on return; I am not sure how big a performance gain that gives, it probably depends on how often this code is hit.

I think if you want to optimize that code for performance, you could combine the first two trace logs into one, i.e. remove the first one and change the second into: logger.trace("Loaded NamespaceHandler mappings from [" + this.handlerMappingsLocation + "]: " + mappings);