The purpose of this PR is to optimize HttpStatus.resolve. The previous version ran with O(N) complexity; the new one runs with O(1) complexity and requires no per-call allocation.
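
The PR's diff is not reproduced in this thread. A minimal sketch of the described approach, assuming a static lookup table indexed directly by the status code (the enum below is illustrative and heavily trimmed, not the actual Spring code), could look like this:

public enum HttpStatusSketch {

    OK(200, "OK"),
    NOT_FOUND(404, "Not Found"),
    // ... the remaining constants are omitted here for brevity
    INTERNAL_SERVER_ERROR(500, "Internal Server Error");

    // Built once at class-initialization time; indices 0..511 cover all
    // registered status codes, so most slots stay null.
    private static final HttpStatusSketch[] LOOKUP = new HttpStatusSketch[512];

    static {
        for (HttpStatusSketch status : values()) {
            LOOKUP[status.value] = status;
        }
    }

    private final int value;

    private final String reasonPhrase;

    HttpStatusSketch(int value, String reasonPhrase) {
        this.value = value;
        this.reasonPhrase = reasonPhrase;
    }

    // O(1) lookup with no per-call allocation; returns null for unknown codes.
    // The old approach iterates over values(), which clones the constants array on every call.
    public static HttpStatusSketch resolve(int statusCode) {
        if (statusCode < 0 || statusCode >= LOOKUP.length) {
            return null;
        }
        return LOOKUP[statusCode];
    }
}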

Comment From: bclozel

I'm not sure this PR is really improving the situation.

A quick micro-benchmark shows that this PR speeds up valueOf(int) by a factor of 3-4x. It's a nice improvement, but not big enough to make a real difference in our web benchmarks.

On the other hand, this approach always allocates an array with 500+ slots to hold 60+ elements. Also, in my opinion, these changes don't help with readability. There are other ways to speed up the index lookup, but iterating over a few enum values is such a common case that I don't think it's really worth it.
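
For illustration only, one such alternative (the class below is hypothetical and is neither part of the PR nor of Spring) is a binary search over a sorted array of codes:

import java.util.Arrays;

import org.springframework.http.HttpStatus;

final class HttpStatusBinarySearch {

    // Sorted codes and their constants, built once at class-initialization time.
    private static final int[] CODES;

    private static final HttpStatus[] STATUSES;

    static {
        HttpStatus[] sorted = HttpStatus.values();
        // values() returns a fresh copy, so sorting it defensively is safe.
        Arrays.sort(sorted, (a, b) -> Integer.compare(a.value(), b.value()));
        CODES = new int[sorted.length];
        for (int i = 0; i < sorted.length; i++) {
            CODES[i] = sorted[i].value();
        }
        STATUSES = sorted;
    }

    // O(log N) over the ~60 constants, with no per-call allocation
    // and no sparse 500+ slot array; returns null for unknown codes.
    static HttpStatus resolve(int statusCode) {
        int index = Arrays.binarySearch(CODES, statusCode);
        return (index >= 0 ? STATUSES[index] : null);
    }

    private HttpStatusBinarySearch() {
    }
}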

Overall, I'm not sure we should merge these changes.

Comment From: sbrannen

Overall, I'm not sure we should merge these changes.

I am also not convinced that the proposed change would provide much benefit in the big picture.

Comment From: diguage

How about giving up the array and using Map<Integer, HttpStatus> CODE_TO_STATUS_MAP = new HashMap<>(); to save memory?
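
As an illustrative sketch of that suggestion (the wrapper class is hypothetical, not Spring code; HttpStatus.value() is the existing accessor for the numeric code):

import java.util.HashMap;
import java.util.Map;

import org.springframework.http.HttpStatus;

final class HttpStatusMapLookup {

    private static final Map<Integer, HttpStatus> CODE_TO_STATUS_MAP = new HashMap<>();

    static {
        // Populate the map once at class-initialization time.
        for (HttpStatus status : HttpStatus.values()) {
            CODE_TO_STATUS_MAP.put(status.value(), status);
        }
    }

    // Average O(1) lookup; returns null for unknown codes.
    static HttpStatus resolve(int statusCode) {
        // Note: the int key is autoboxed here, and codes above 127 fall outside
        // the Integer cache, so most lookups allocate a small Integer.
        return CODE_TO_STATUS_MAP.get(statusCode);
    }

    private HttpStatusMapLookup() {
    }
}

Whether this actually saves memory compared to the sparse array is debatable, and it adds key boxing on every lookup.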

Comment From: dreis2211

@bclozel I'm surprised that you see an improvement. On my machine with JDK 11 I actually see the contrary. The proposed solution is slower for the most common case of status code 200.

Benchmark            Mode  Cnt  Score   Error  Units
MyBenchmark.testNew  avgt   10  2,459 ± 0,022  ns/op
MyBenchmark.testOld  avgt   10  3,710 ± 0,018  ns/op

With random values even more so:

Benchmark            Mode  Cnt   Score   Error  Units
MyBenchmark.testNew  avgt   10   8,784 ± 0,621  ns/op
MyBenchmark.testOld  avgt   10  33,358 ± 3,076  ns/op

import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// HttpStatus import omitted: resolveOld/resolveNew are local copies of the two resolve variants under test.

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class MyBenchmark {

    @State(Scope.Thread)
    public static class BenchmarkState {
        // Fixed status code used for the status-200 run; the benchmark methods shown here use random codes instead.
        public int status = 200;
        public Random random = new Random();
    }

    @Benchmark
    public HttpStatus testOld(BenchmarkState state) {
        // Old variant: linear scan over the enum constants.
        return HttpStatus.resolveOld(state.random.nextInt(500));
    }

    @Benchmark
    public HttpStatus testNew(BenchmarkState state) {
        // New variant from the PR: constant-time index lookup.
        return HttpStatus.resolveNew(state.random.nextInt(500));
    }
}

Apart from that, I share @bclozel's opinion that the proposed solution doesn't help with readability.

Comment From: bclozel

@dreis2211 I'm confused - with the average time mode, isn't lower better? Or am I misreading the old vs. new variants?

Anyway, I'm closing this PR because it doesn't seem we're achieving much here, and this part of our codebase was never identified as a bottleneck of any kind, so there's little value in working on other possible optimizations.

Comment From: dreis2211

@bclozel No, you're not misreading things, I'm just stupid. I was testing with throughput before on something else.

Comment From: bclozel

@dreis2211 No worries - you were brave enough to publish your benchmark; I'm sure mine are flawed in 10 different ways. My initial attempt with throughput was not really conclusive because the error margins were really high - there might have been a lot of noise on my laptop at the time.