Coming from RestTemplate, it was fairly easy to catch all sorts of connectivity-related exceptions from an external API service by catching RestClientException (e.g., if the service is down, or if you use the wrong port in the URL). Catching these exceptions doesn't clutter the log file, and the errors can be reported in a clean manner.

Unfortunately, WebClient doesn't seem to handle these exceptions as gracefully. Below is an example of my code. I am deliberately using an incorrect port in the URL of my service to simulate a connection problem. While the log does say The connection observed an error, there is no way to catch this error nicely without cluttering the logs with a stack trace. The posts on SO that mention a similar problem disable the Netty logger completely, which seems like overkill.

Is there a way to catch this connection error while suppressing the stack trace?

My code:

webclient.post()
    .uri(builder -> builder.path(myExternalApi).build())
    .exchangeToMono(res -> {
        HttpStatus httpStatus = res.statusCode();
        if (httpStatus.isError()) {
            return res.bodyToMono(String.class)
                    .flatMap(body -> {
                        log.error("Error in response: {}", body);
                        return Mono.error(new MyCustomException("Error", "error_code_0"));
                    });
        }
        return res.toEntity(String.class);
    })
    .doOnError(ClosedChannelException.class, e -> {
        log.error("The error message is: {}", e.getMessage());
        throw new MyCustomException("Error 1", "error_code_1");
    })
    .doOnError(WebClientException.class, e -> {
        log.error("The error message is: {}", e.getMessage());
        throw new MyCustomException("Error 2", "error_code_2");
    })
    .block();

Output (e.g., whenever I use the wrong port in my external API):

2022-06-18 22:18:10.759  WARN 43221 --- [r-http-kqueue-1] r.netty.http.client.HttpClientConnect  : 
[error, L:/myip ! R:myexternalservice.com/myipaddress:myport] The connection observed an error

java.nio.channels.ClosedChannelException: null
    at io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:1064)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1405)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
    at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:901)
    at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:813)
    at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
    at io.netty.channel.kqueue.KQueueEventLoop.run(KQueueEventLoop.java:293)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:834)

The error message is: nested exception is java.nio.channels.ClosedChannelException
MyCustomException("Error 2", "error_code_2") is then thrown.

Comment From: SFBorland

@simplyanything were you able to find a resolution/work-around? (other than completely disabling Netty logging).

Comment From: simplyanything

@SFBorland I haven't found any viable solution yet, sadly.

Comment From: rstoyanchev

@violetagg, do you have any suggestions? The log message seems to come from Reactor Netty and is at WARN level, which is pretty high.

As an aside, it is better to use onErrorResume if you aim to replace the exception. doOnError is a side-effect callback that isn't meant to return a value or throw.
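For example, the error handling in the question's chain could be sketched like this (using `webclient`, `log`, and `MyCustomException` from the question; the exchangeToMono body is unchanged):

```java
webclient.post()
    .uri(builder -> builder.path(myExternalApi).build())
    .exchangeToMono(res -> { /* as in the question */ })
    .onErrorResume(ClosedChannelException.class, e -> {
        log.error("The error message is: {}", e.getMessage());
        // Map the low-level channel error to the application exception by
        // returning an error Mono, instead of throwing from a callback.
        return Mono.error(new MyCustomException("Error 1", "error_code_1"));
    })
    .block();
```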

Comment From: violetagg

@rstoyanchev We already changed the log level once, from ERROR to WARN: https://github.com/reactor/reactor-netty/issues/582

Comment From: rstoyanchev

I think WARN is still quite high and likely to be enabled in production. I would think DEBUG is okay, since the error signal continues down the chain and can be handled there, i.e. it won't go unnoticed.

I'm closing this issue since there isn't anything we can do on our side. Feel free to comment further.
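In the meantime, a workaround that avoids disabling Netty logging entirely is to lower the level of just this one logger (the logger name comes from the output above). For example, in a Spring Boot application.properties:

```properties
# Suppress the WARN "The connection observed an error" message from
# Reactor Netty's HttpClientConnect, leaving other Netty logging intact.
logging.level.reactor.netty.http.client.HttpClientConnect=ERROR
```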