We got the following error using WebClient in a Spring Boot 2.4.1 application. The message appears to be missing some information, and it's hard to tell what's going on. It has only happened once in production, and we haven't seen it in our tests.

message:

Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.web.reactive.function.client.WebClientRequestException: Error while acquiring from reactor.netty.internal.shaded.reactor.pool.SimpleDequePool@4e699e; nested exception is java.io.IOException: Error while acquiring from reactor.netty.internal.shaded.reactor.pool.SimpleDequePool@4e699e] with root cause

stacktrace:

java.io.IOException: Error while acquiring from reactor.netty.internal.shaded.reactor.pool.SimpleDequePool@4e699e
    at reactor.netty.resources.DefaultPooledConnectionProvider$DisposableAcquire.run(DefaultPooledConnectionProvider.java:239)
    at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
    at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:832)

logger name: org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet] thread name: http-nio-8085-exec-148

It seemed to come as a result of this code:

        Mono<Object> mono = webClient.get()
            .uri(uriBuilder -> {
                uriBuilder.scheme(coreApiBaseUri.getScheme())
                    .host(coreApiBaseUri.getHost())
                    .port(coreApiBaseUri.getPort())
                    .pathSegment(versionPath, "projects", "{id}", "series");

                return uriBuilder.build(projectId);
            })
            .headers(httpHeaders -> httpHeaders.addAll(incomingHeaders))
            .exchangeToMono(clientResponse -> {
                HttpStatus httpStatus = clientResponse.statusCode();
                if (httpStatus.isError()) {
                    if (httpStatus != HttpStatus.BAD_REQUEST) {
                        log.warn("Received status {} from core-api while getting series", httpStatus);
                    }
                    return clientResponse.bodyToMono(ErrorDTO.class);
                }
                return clientResponse.bodyToMono(CoreApiSeriesDTO.class);
            })
            .timeout(Duration.ofMillis(1000))
            .retryWhen(retryOnConnectionErrorsOr5xx("Retrying getting series for project {}, attempt {}", projectId));

        Object dto = mono.block();
        if (dto == null) {
            throw new InternalException(String.format("No/invalid result for project id %s", projectId));
        }

with helper code:

    private Retry retryOnConnectionErrorsOr5xx(String logString, UUID logStringParam1) {
        return Retry
            .fixedDelay(3, Duration.ofSeconds(1L))
            .filter(exception -> exception instanceof IOException
                || exception instanceof TimeoutException
                || (exception instanceof WebClientResponseException
                    && ((WebClientResponseException) exception).getStatusCode().is5xxServerError()))
            .doBeforeRetry(retrySignal -> {
                log.info(logString, logStringParam1, retrySignal.totalRetries());
                log.debug("Exception in retry: ", retrySignal.failure());
            });
    }

Comment From: violetagg

@Bas83 When acquiring a connection from the connection pool, WebClient (Reactor Netty) performs several checks, one of which is the state of the connection. If the connection is not closed, WebClient (Reactor Netty) proceeds with the next steps. However, as the close event can be received AFTER the acquisition, WebClient (Reactor Netty) checks the state of the connection again just before sending the request. If the connection is closed at that point, WebClient (Reactor Netty) will retry the acquisition. This retry is performed just once; if the attempt is not successful, the exception above is returned.

https://github.com/reactor/reactor-netty/blob/d21d5146258ae5966deb16b9ecbf19d2920153eb/reactor-netty-core/src/main/java/reactor/netty/resources/DefaultPooledConnectionProvider.java#L222-L241

You can choose to do one of the following:
- Retry when you receive such an exception.
- Change the leasing strategy and apply an idle timeout. By default there is no idle timeout and the leasing strategy is FIFO. If you apply an idle timeout and change the leasing strategy to LIFO, you will always use the most recently used connection. In addition, you may also enable background eviction, which takes care of cleaning up connections that have reached their idle timeout from the pool.

Here you can find how to change those settings: https://projectreactor.io/docs/netty/release/reference/index.html#_connection_pool And more about various timeouts: https://projectreactor.io/docs/netty/release/reference/index.html#connection-pool-timeout
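For illustration, a minimal sketch of what that configuration could look like (the pool name `"custom"` and the durations are made-up values, not recommendations; tune them for your workload):

```java
import java.time.Duration;

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class WebClientConfig {

    public WebClient buildWebClient() {
        // Configure the Reactor Netty connection pool: idle timeout,
        // LIFO leasing, and background eviction of idle connections.
        ConnectionProvider provider = ConnectionProvider.builder("custom")
            .maxIdleTime(Duration.ofSeconds(20))       // close connections idle longer than this
            .lifo()                                    // always lease the most recently used connection
            .evictInBackground(Duration.ofSeconds(30)) // periodically evict connections past their idle timeout
            .build();

        HttpClient httpClient = HttpClient.create(provider);

        // Plug the customized HttpClient into WebClient.
        return WebClient.builder()
            .clientConnector(new ReactorClientHttpConnector(httpClient))
            .build();
    }
}
```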

Comment From: Bas83

Thanks for your reply. It seems like this is a fairly low-level implementation detail in a situation we don't really have under control. Are there any plans to handle this automatically? For now I guess we can just add WebClientRequestException to our retrying part.
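Something like this, extending the filter in our helper above to also match WebClientRequestException (which is what wraps the IOException in the stack trace), is what I had in mind; a sketch, not a definitive fix:

```java
import java.io.IOException;
import java.time.Duration;
import java.util.UUID;
import java.util.concurrent.TimeoutException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.reactive.function.client.WebClientRequestException;
import org.springframework.web.reactive.function.client.WebClientResponseException;

import reactor.util.retry.Retry;

class RetryHelper {

    private static final Logger log = LoggerFactory.getLogger(RetryHelper.class);

    private Retry retryOnConnectionErrorsOr5xx(String logString, UUID logStringParam1) {
        return Retry
            .fixedDelay(3, Duration.ofSeconds(1L))
            .filter(exception -> exception instanceof IOException
                || exception instanceof TimeoutException
                // pool acquisition failures reach the caller wrapped in this
                || exception instanceof WebClientRequestException
                || (exception instanceof WebClientResponseException
                    && ((WebClientResponseException) exception).getStatusCode().is5xxServerError()))
            .doBeforeRetry(retrySignal -> {
                log.info(logString, logStringParam1, retrySignal.totalRetries());
                log.debug("Exception in retry: ", retrySignal.failure());
            });
    }
}
```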

Comment From: violetagg

@Bas83 you can configure the connection pool as I described above.

Comment From: snicoll

Closing due to the lack of requested feedback.