I have set up a simple Spring Boot WebFlux app (version 2.3.3) with the following controller:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Controller {

    // Singleton bean that is simply serialized to JSON as the response body
    @Autowired
    Bean bean;

    // Matches any single-segment path, e.g. /test or /reqToRespApi
    @RequestMapping("/{id}")
    public Object test() {
        return bean;
    }
}

and I set up nginx in front of it with the default configuration plus the following proxy_pass block:

server {
    server_name com.test;
    listen 9080;

    location / {
        proxy_pass http://172.31.32.52:3000$uri;
        proxy_set_header Host $host;
    }
}

While benchmarking with wrk, I ran the following command:

wrk -t2 -c100 -d30s --latency -s post.lua http://171.31.12.124:9080/reqToRespApi

This works fine for the first one or two rounds, with the following results:

Running 30s test @ http://171.31.12.124:9080/reqToRespApi
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    17.86ms    3.06ms  65.83ms   77.39%
    Req/Sec     2.81k   270.18     3.33k    62.00%
  Latency Distribution
     50%   17.34ms
     75%   19.47ms
     90%   21.48ms
     99%   26.47ms
  168028 requests in 30.02s, 28.52MB read
Requests/sec:   5596.97
Transfer/sec:      0.95MB

but later it started returning 502s for a few requests, with the following results:

Running 30s test @ http://171.31.12.124:9080/reqToRespApi
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    89.25ms  253.15ms   2.00s    93.53%
    Req/Sec     1.76k     1.06k    3.28k    54.17%
  Latency Distribution
     50%   19.52ms
     75%   34.47ms
     90%   95.15ms
     99%    1.53s 
  104958 requests in 30.03s, 18.22MB read
  Socket errors: connect 0, read 0, write 0, timeout 81
  Non-2xx or 3xx responses: 2509
Requests/sec:   3495.35
Transfer/sec:    621.30KB

After this, even normal requests started failing:

curl http://localhost:3000/test
curl: (56) Recv failure: Connection reset by peer

Can I know what configuration needs to be changed? With a regular (non-reactive) Spring Boot web app it works fine without any errors.

I tested this on three AWS servers with the following setup:

  1. AWS t2.medium - wrk was running here
  2. AWS t3a.xlarge - nginx was hosted here
  3. AWS t2.medium - the Spring Boot app was hosted here

P.S. I have modified the IP addresses for security reasons.

Comment From: revoorunischal

I have added my code here: https://github.com/revoorunischal/spring-webflux-sample

Comment From: philwebb

There's not a lot of Spring Boot going on once the server is up and running, so I suspect that this might be a Spring WebFlux or Reactor Netty issue. I'll transfer this to the Spring Framework issue tracker for now.

Comment From: philwebb

/cc @violetagg in case she's seen anything similar with other Reactor Netty code.

Comment From: violetagg

@revoorunischal If you run the load test against the Spring Boot application directly (removing nginx from the scenario), do you see the same problem?

Comment From: revoorunischal

No, there is no issue, and the RPS goes well beyond 18k.

Comment From: rstoyanchev

@revoorunischal are you sure the 502 originates from the WebFlux server or could it be coming from nginx?

Comment From: revoorunischal

This is what the WebFlux app returned when I requested it directly while nginx was returning 502s:

curl http://localhost:3000/test
curl: (56) Recv failure: Connection reset by peer

Comment From: rstoyanchev

I don't think this proves where the problem is given that it works without nginx.

Could you show the output from the curl command with the -v flag, and also provide a TCP dump of that interaction?
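
For reference, capturing such a dump on the app host could look like this (a sketch only; the loopback interface and port are assumed from the curl command above):

sudo tcpdump -i lo -nn port 3000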

Comment From: revoorunischal

curl output with -v:

curl localhost:3000/test -v
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET /test HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.47.0
> Accept: */*
> 
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

TCP dump:

13:09:16.317612 IP localhost.50168 > localhost.3000: Flags [S], seq 2287375284, win 43690, options [mss 65495,sackOK,TS val 4234010 ecr 0,nop,wscale 7], length 0
13:09:16.317616 IP ip-172-31-11-114.ap-southeast-1.compute.internal.ssh > broadband.actcorp.in.50822: Flags [P.], seq 105:165, ack 80, win 275, options [nop,nop,TS val 4234010 ecr 710714462], length 60
13:09:16.317627 IP localhost.3000 > localhost.50168: Flags [S.], seq 762054059, ack 2287375285, win 43690, options [mss 65495,sackOK,TS val 4234010 ecr 4234010,nop,wscale 7], length 0
13:09:16.317653 IP localhost.50168 > localhost.3000: Flags [.], ack 1, win 342, options [nop,nop,TS val 4234010 ecr 4234010], length 0
13:09:16.317704 IP localhost.50168 > localhost.3000: Flags [P.], seq 1:83, ack 1, win 342, options [nop,nop,TS val 4234010 ecr 4234010], length 82
13:09:16.317709 IP localhost.3000 > localhost.50168: Flags [.], ack 83, win 342, options [nop,nop,TS val 4234010 ecr 4234010], length 0
13:09:16.317787 IP ip-172-31-11-114.ap-southeast-1.compute.internal.ssh > broadband.actcorp.in.50822: Flags [P.], seq 165:353, ack 80, win 275, options [nop,nop,TS val 4234010 ecr 710714462], length 188
13:09:16.350756 IP broadband.actcorp.in.50822 > ip-172-31-11-114.ap-southeast-1.compute.internal.ssh: Flags [.], ack 105, win 2047, options [nop,nop,TS val 710714503 ecr 4234008], length 0
13:09:16.358845 IP broadband.actcorp.in.50822 > ip-172-31-11-114.ap-southeast-1.compute.internal.ssh: Flags [.], ack 165, win 2047, options [nop,nop,TS val 710714510 ecr 4234010], length 0
13:09:16.359293 IP broadband.actcorp.in.50822 > ip-172-31-11-114.ap-southeast-1.compute.internal.ssh: Flags [.], ack 353, win 2044, options [nop,nop,TS val 710714510 ecr 4234010], length 0
13:09:16.875693 IP localhost.3000 > localhost.50168: Flags [R.], seq 1, ack 83, win 342, options [nop,nop,TS val 4234149 ecr 4234010], length 0

Comment From: rstoyanchev

Thanks for the info. Yes, it looks like the server receives the request and its headers and then closes the connection. What we really need to know is why that happens, and at what level (Netty, Reactor Netty, or WebFlux).

Could you please try one more time with DEBUG logging for io.netty, reactor.netty, and org.springframework.web, and show the logs for that same curl request, so we can hopefully see at what level the rejection happens and why? If for some reason the logging interferes, you can also try with two of the three categories, or with just one at a time.
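
For reference, a minimal application.properties sketch that turns on the requested categories (assuming the default Spring Boot logging setup):

logging.level.io.netty=DEBUG
logging.level.reactor.netty=DEBUG
logging.level.org.springframework.web=DEBUG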

Comment From: revoorunischal

I tried with debug logs on and got this exception:

2020-08-27 05:35:41.312 ERROR 2712 --- [or-http-epoll-1] reactor.netty.tcp.TcpServer              : [id: 0x7327392d, L:/127.0.0.1:3000 - R:/127.0.0.1:47216] onUncaughtException(SimpleConnection{channel=[id: 0x7327392d, L:/127.0.0.1:3000 - R:/127.0.0.1:47216]})

java.lang.OutOfMemoryError: Direct buffer memory
    at java.base/java.nio.Bits.reserveMemory(Bits.java:175) ~[na:na]
    at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:na]
    at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317) ~[na:na]
    at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:755) ~[netty-buffer-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:731) ~[netty-buffer-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:247) ~[netty-buffer-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:215) ~[netty-buffer-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:147) ~[netty-buffer-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:356) ~[netty-buffer-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) ~[netty-buffer-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) ~[netty-buffer-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53) ~[netty-transport-native-unix-common-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[netty-transport-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75) ~[netty-transport-native-epoll-4.1.51.Final-linux-x86_64.jar!/:4.1.51.Final]
    at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:777) ~[netty-transport-native-epoll-4.1.51.Final-linux-x86_64.jar!/:4.1.51.Final]
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475) ~[netty-transport-native-epoll-4.1.51.Final-linux-x86_64.jar!/:4.1.51.Final]
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) ~[netty-transport-native-epoll-4.1.51.Final-linux-x86_64.jar!/:4.1.51.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.51.Final.jar!/:4.1.51.Final]
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.51.Final.jar!/:4.1.51.Final]
    at java.base/java.lang.Thread.run(Thread.java:836) ~[na:na]

2020-08-27 05:37:53.133 DEBUG 2712 --- [or-http-epoll-2] o.s.w.s.adapter.HttpWebHandlerAdapter    : [6374bb15-851983] HTTP GET "/test"
2020-08-27 05:37:53.133 DEBUG 2712 --- [or-http-epoll-2] s.w.r.r.m.a.RequestMappingHandlerMapping : [6374bb15-851983] Mapped to com.finacle.testReceiverFlux.Controller#test()
2020-08-27 05:37:53.134 DEBUG 2712 --- [or-http-epoll-2] o.s.w.r.r.m.a.ResponseBodyResultHandler  : Using 'application/json' given [*/*] and supported [application/json, application/*+json, text/event-stream]
2020-08-27 05:37:53.134 DEBUG 2712 --- [or-http-epoll-2] o.s.w.r.r.m.a.ResponseBodyResultHandler  : [6374bb15-851983] 0..1 [com.finacle.testReceiverFlux.Bean]
2020-08-27 05:37:53.135 DEBUG 2712 --- [or-http-epoll-2] o.s.http.codec.json.Jackson2JsonEncoder  : [6374bb15-851983] Encoding [com.finacle.testReceiverFlux.Bean@34ef32d0]
2020-08-27 05:37:53.135 DEBUG 2712 --- [or-http-epoll-2] o.s.w.s.adapter.HttpWebHandlerAdapter    : [6374bb15-851983] Completed 200 OK
2020-08-27 05:37:59.106 ERROR 2712 --- [or-http-epoll-3] reactor.netty.tcp.TcpServer              : [id: 0xa79e4bc5, L:/127.0.0.1:3000 - R:/127.0.0.1:47220] onUncaughtException(SimpleConnection{channel=[id: 0xa79e4bc5, L:/127.0.0.1:3000 - R:/127.0.0.1:47220]})

java.lang.OutOfMemoryError: Direct buffer memory
    ... (stack trace identical to the first occurrence above)

2020-08-27 05:38:13.231 ERROR 2712 --- [or-http-epoll-4] reactor.netty.tcp.TcpServer              : [id: 0xba6600ff, L:/127.0.0.1:3000 - R:/127.0.0.1:47222] onUncaughtException(SimpleConnection{channel=[id: 0xba6600ff, L:/127.0.0.1:3000 - R:/127.0.0.1:47222]})

java.lang.OutOfMemoryError: Direct buffer memory
    ... (stack trace identical to the first occurrence above)

2020-08-27 05:38:14.746 ERROR 2712 --- [or-http-epoll-1] reactor.netty.tcp.TcpServer              : [id: 0xaed213a5, L:/127.0.0.1:3000 - R:/127.0.0.1:47224] onUncaughtException(SimpleConnection{channel=[id: 0xaed213a5, L:/127.0.0.1:3000 - R:/127.0.0.1:47224]})

java.lang.OutOfMemoryError: Direct buffer memory
    ... (stack trace identical to the first occurrence above)

2020-08-27 05:38:17.436 DEBUG 2712 --- [or-http-epoll-2] o.s.w.s.adapter.HttpWebHandlerAdapter    : [8e5a9472-851984] HTTP GET "/test"
2020-08-27 05:38:17.436 DEBUG 2712 --- [or-http-epoll-2] s.w.r.r.m.a.RequestMappingHandlerMapping : [8e5a9472-851984] Mapped to com.finacle.testReceiverFlux.Controller#test()
2020-08-27 05:38:17.436 DEBUG 2712 --- [or-http-epoll-2] o.s.w.r.r.m.a.ResponseBodyResultHandler  : Using 'application/json' given [*/*] and supported [application/json, application/*+json, text/event-stream]
2020-08-27 05:38:17.436 DEBUG 2712 --- [or-http-epoll-2] o.s.w.r.r.m.a.ResponseBodyResultHandler  : [8e5a9472-851984] 0..1 [com.finacle.testReceiverFlux.Bean]
2020-08-27 05:38:17.436 DEBUG 2712 --- [or-http-epoll-2] o.s.http.codec.json.Jackson2JsonEncoder  : [8e5a9472-851984] Encoding [com.finacle.testReceiverFlux.Bean@34ef32d0]
2020-08-27 05:38:17.436 DEBUG 2712 --- [or-http-epoll-2] o.s.w.s.adapter.HttpWebHandlerAdapter    : [8e5a9472-851984] Completed 200 OK

Looking at the logs, they cover about six requests, of which two were successful. But the main issue is memory: can I know what is consuming it? This is just a dummy server that takes a request and responds.
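
Since the OutOfMemoryError above is raised from java.nio.Bits.reserveMemory, the JVM's own buffer pool counters should reflect the growth. Below is a minimal sketch for watching them (my own illustration, not from the project; the class name DirectMemoryProbe is hypothetical):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryProbe {

    public static void main(String[] args) {
        // The "direct" pool tracks ByteBuffer.allocateDirect allocations,
        // which is exactly what the Netty allocator in the stack trace
        // above is exhausting.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: used=%,d bytes, capacity=%,d bytes, buffers=%d%n",
                    pool.getName(), pool.getMemoryUsed(), pool.getTotalCapacity(), pool.getCount());
        }
    }
}

If the "direct" pool keeps growing across wrk runs while the app is otherwise idle, that points to unreleased buffers rather than simple under-provisioning; Netty's -Dio.netty.leakDetection.level=paranoid system property can then help locate the leaking ByteBufs, and -XX:MaxDirectMemorySize controls the limit being hit.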

Comment From: violetagg

@revoorunischal Are you able to test the latest releases?

Comment From: revoorunischal

Hi,

I tested this on 2.3.4.RELEASE and it's working fine, without any issues.

This can be closed.
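
For anyone landing here with the same symptoms: the change that resolved it was the version bump below, sketched for a standard spring-boot-starter-parent build (assumed; the sample repo's actual build file may differ):

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.3.4.RELEASE</version>
</parent>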