Affects: 5.2.8
We're trying to use reactive in our microservice, but we're seeing the Docker memory go up until the instance is killed. We were able to identify WebClient as the culprit. To be more precise, the JVM memory is normal, but when API calls are made the Docker memory is not released, and the instance is eventually killed without any out-of-memory errors.
I was able to reproduce the issue using a simple project.
@Component
public class Controller {

    // This is an endpoint to another simple API.
    // Naturally, I use my local IP instead of localhost in the container.
    private static final String ENDPOINT = "http://localhost:9090/";

    private WebClient client;

    public Controller(WebClient.Builder client) {
        super();
        this.client = client.build();
    }

    @Bean
    public RouterFunction<ServerResponse> router() {
        return RouterFunctions.route(GET("helloworld"), this::handle);
    }

    Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> helloMono =
            client.get().uri(ENDPOINT + "/hello").retrieve().bodyToMono(String.class);
        Mono<String> worldMono =
            client.get().uri(ENDPOINT + "/world").retrieve().bodyToMono(String.class);
        return Mono.zip(helloMono, worldMono, (h, w) -> h + w)
            .flatMap(s -> ServerResponse.ok().bodyValue(s));
    }
}
Here's the Dockerfile.
FROM openjdk:8
ENV SERVICE_NAME reactive-hello-world
ADD target/reactive-hello-world-*.jar $APP_HOME/reactive-hello-world.jar
RUN mkdir /opt/reactor-netty/
EXPOSE 9010
CMD java \
-Dcom.sun.management.jmxremote=true \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=localhost \
-Dcom.sun.management.jmxremote.port=9010 \
-Dcom.sun.management.jmxremote.rmi.port=9010 \
-Xmx190M \
-jar reactive-hello-world.jar
EXPOSE 8080
Here are some images showing the JVM heap and the Docker memory.
As you can see, the heap is fine but the Docker memory hasn't decreased. I've tried similar code using RestTemplate without issue. Edit: I've also tried the deprecated AsyncRestTemplate and I'm not seeing a problem with that either.
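For comparison, the RestTemplate version was roughly this shape, calling the backend in a blocking way (just a sketch, not the exact code from the repos below; restTemplate stands for a plain RestTemplate field):

// Rough sketch of the blocking RestTemplate comparison (not the exact repo code);
// assumes a plain RestTemplate field named restTemplate.
Mono<ServerResponse> handle(ServerRequest request) {
    String hello = restTemplate.getForObject(ENDPOINT + "/hello", String.class);
    String world = restTemplate.getForObject(ENDPOINT + "/world", String.class);
    return ServerResponse.ok().bodyValue(hello + world);
}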
Edit: I have created the repos for this example. Please check if you can reproduce the issue.
The Hello World Backend
The WebClient Hello World (JMX is inside this repo)
The RestTemplate Hello World
The AsyncRestTemplate Hello World
The Exchange Strategy Hello World
Comment From: rstoyanchev
@SentryMan does it make a difference if you run it locally without Docker?
Ideally please provide an actual sample app to run and likewise the JMeter script you use. It would be useful to see what the code using RestTemplate looks like for comparison.
Comment From: mdeinum
@SentryMan does it make a difference if you move the RouterFunction @Bean method to a different class which is annotated with @Configuration or make this class an @Configuration instead of an @Component?
@Bean methods in @Component classes are handled differently from those in @Configuration classes.
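For example, something along these lines (just a sketch; it assumes handle is visible to the configuration class, e.g. same package):

// Sketch: the @Bean method moved into a dedicated @Configuration class,
// where it is processed as a full configuration class rather than in "lite" mode.
@Configuration
public class RouterConfig {

    @Bean
    public RouterFunction<ServerResponse> router(Controller controller) {
        return RouterFunctions.route(GET("/helloworld"), controller::handle);
    }
}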
Comment From: SentryMan
That doesn't seem to make a difference; in the microservice where we found the issue, the structure was organized as you say.
Comment From: SentryMan
@rstoyanchev I have added the repos.
Comment From: rstoyanchev
Thanks for the sample apps. Are you able to reproduce it locally, i.e. without Docker for the frontend app, or is that necessary?
As an aside, the RestTemplate app is not an apples-to-apples comparison for RestTemplate vs WebClient, because it's used in blocking fashion without offloading to a different thread, e.g. via .publishOn(Schedulers.boundedElastic()). It's severely limiting concurrency due to the small number of threads. I'd expect a high number of timed-out requests or much higher latency.
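For example (a sketch, assuming a plain RestTemplate field; here subscribeOn is used to shift the blocking call onto the bounded elastic scheduler):

// Sketch: wrap the blocking RestTemplate call so it runs on boundedElastic
// instead of blocking a Netty event-loop thread.
Mono<String> helloMono = Mono
        .fromCallable(() -> restTemplate.getForObject(ENDPOINT + "/hello", String.class))
        .subscribeOn(Schedulers.boundedElastic());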
Comment From: SentryMan
It seems OK when I run them locally, but our microservice is dockerized, so we need to figure out why the memory in Docker keeps filling up. Also, yes, I'm aware that reactive has the better concurrency.
Are you able to replicate the issue when you run them in Docker?
Comment From: rstoyanchev
For reactive and better concurrency, I was merely pointing out that the RestTemplate example doesn't prove the problem is in the WebClient. It could be for example in the zipping or the handling of the server response. A better way to isolate the WebClient is to replace the WebClient calls with something like this and keep everything else the same:
Mono<String> helloMono =
    Mono.delay(Duration.ofMillis(100)).map(aLong -> "Hello");
Mono<String> worldMono =
    Mono.delay(Duration.ofMillis(100)).map(aLong -> "World");
Are you able to replicate the issue when you run them in docker?
I have not run it yet, but this is what I wanted to know.
What memory do you see after the tests when you run it locally? Also if you run it longer, does it continue to increase or does it stay more or less at that level?
Comment From: SentryMan
Ok, I've tried using that code instead, and I'm seeing the memory level out at about 400 MB in Docker instead of the uncapped growth I observe with WebClient.
As for running locally, I'm seeing the WebClient jar hover at about 390 MB; it doesn't seem to increase past that.
Comment From: rstoyanchev
Last question, sorry. If it runs longer, does it keep going up? Say it runs 10 minutes instead of 5: does it double, or does it hover around 480?
Comment From: SentryMan
Locally it just hovers around 390 MB. This is with the max heap reduced to 190 MB, by the way.
Comment From: SentryMan
@rstoyanchev has there been any movement on this issue? I've been testing different configurations, trying to understand why the problem is happening. Most recently I tried replacing the retrieve() method with exchange(), as sketched below, and I still got the same massive memory usage.
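Roughly what that change looked like (a sketch; with exchange() the raw ClientResponse comes back and the body has to be consumed explicitly):

// Sketch of the exchange() variant of the two calls; everything else stayed the same.
Mono<String> helloMono = client.get().uri(ENDPOINT + "/hello")
        .exchange()
        .flatMap(response -> response.bodyToMono(String.class));
Mono<String> worldMono = client.get().uri(ENDPOINT + "/world")
        .exchange()
        .flatMap(response -> response.bodyToMono(String.class));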
Comment From: SentryMan
Perhaps the issue is deeper than WebClient. I've tried using an ExchangeFunction directly, and I still saw the Docker memory leak.
private ExchangeFunction exchangeFunction;

public Controller() {
    super();
    this.exchangeFunction = ExchangeFunctions.create(new ReactorClientHttpConnector());
}

Mono<ServerResponse> handle(ServerRequest request) {
    ClientRequest request1 =
        ClientRequest.create(HttpMethod.GET, URI.create(ENDPOINT + "/hello")).build();
    ClientRequest request2 =
        ClientRequest.create(HttpMethod.GET, URI.create(ENDPOINT + "/world")).build();
    Mono<String> helloMono =
        exchangeFunction.exchange(request1).flatMap(r -> r.bodyToMono(String.class));
    Mono<String> worldMono =
        exchangeFunction.exchange(request2).flatMap(r -> r.bodyToMono(String.class));
    return helloMono.zipWith(worldMono, (h, w) -> h + w).flatMap(ServerResponse.ok()::bodyValue);
}
Comment From: SentryMan
I've gone deeper still: I've just tried using Reactor Netty's HttpClient directly and still saw the issue. @rstoyanchev does this mean the issue is with Netty itself, or have I made a misstep somewhere?
private HttpClient client;

public Controller() {
    super();
    this.client = HttpClient.create();
}

Mono<ServerResponse> handle(ServerRequest request) {
    Flux<String> helloMono =
        client.get().uri(ENDPOINT + "/hello").responseContent().asByteArray().map(String::new);
    Flux<String> worldMono =
        client.get().uri(ENDPOINT + "/world").responseContent().asByteArray().map(String::new);
    return ServerResponse.ok().body(helloMono.zipWith(worldMono, (h, w) -> h + w), String.class);
}
Comment From: rstoyanchev
@SentryMan apologies for the delay. I just came across your comment under https://github.com/reactor/reactor-netty/issues/1304 which I suppose is the same report as here?
Comment From: SentryMan
Yeah, I'll close this.