We are using Spring Boot version "2.3.0.M4" and have set the parameter "server.shutdown.grace-period=30s". When we perform a graceful shutdown (SIGTERM), our endpoint publishes an event at the end of request processing, and the event listener publishes a message to an AWS message queue. The HTTP request itself is served successfully, but we see the following error on the console while the event listener publishes the message:

Exception in thread "SimpleAsyncTaskExecutor-122" java.lang.IllegalStateException: Connection pool shut down
        at org.apache.http.util.Asserts.check(Asserts.java:34)
        at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:269)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
        at com.amazonaws.http.conn.$Proxy147.requestConnection(Unknown Source)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
        at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1258)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1074)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:745)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
        at com.amazonaws.services.sns.AmazonSNSClient.doInvoke(AmazonSNSClient.java:2488)
        at com.amazonaws.services.sns.AmazonSNSClient.invoke(AmazonSNSClient.java:2457)
        at com.amazonaws.services.sns.AmazonSNSClient.invoke(AmazonSNSClient.java:2446)
        at com.amazonaws.services.sns.AmazonSNSClient.executeListTopics(AmazonSNSClient.java:1686)
        at com.amazonaws.services.sns.AmazonSNSClient.listTopics(AmazonSNSClient.java:1658)
        at org.springframework.cloud.aws.messaging.support.destination.DynamicTopicDestinationResolver.getTopicResourceName(DynamicTopicDestinationResolver.java:87)
        at org.springframework.cloud.aws.messaging.support.destination.DynamicTopicDestinationResolver.resolveDestination(DynamicTopicDestinationResolver.java:75)
        at org.springframework.cloud.aws.messaging.support.destination.DynamicTopicDestinationResolver.resolveDestination(DynamicTopicDestinationResolver.java:36)
        at org.springframework.messaging.core.CachingDestinationResolverProxy.resolveDestination(CachingDestinationResolverProxy.java:92)
        at org.springframework.cloud.aws.messaging.core.support.AbstractMessageChannelMessagingSendingTemplate.resolveMessageChannelByLogicalName(AbstractMessageChannelMessagingSendingTemplate.java:108)
        at org.springframework.cloud.aws.messaging.core.support.AbstractMessageChannelMessagingSendingTemplate.convertAndSend(AbstractMessageChannelMessagingSendingTemplate.java:87)
        at org.springframework.cloud.aws.messaging.core.NotificationMessagingTemplate.sendNotification(NotificationMessagingTemplate.java:79)

Comment From: philwebb

I'm not sure if this is a Spring Boot issue or some other problem. Do you see the same exception without the server.shutdown.grace-period property being set? Do earlier versions of Spring Boot give the same issue?

The exception itself is being thrown from PoolingHttpClientConnectionManager which seems to suggest that the shutdown method has been already called. I'd suggest switching on more debug logging and looking for the "Connection manager is shutting down" message to see if that helps pinpoint the problem.
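For example, the relevant connection-manager logging can typically be switched on via Spring Boot's logging properties (the exact logger granularity to use is an assumption; `org.apache.http` is the Apache HttpClient root logger that includes PoolingHttpClientConnectionManager):

```properties
# Enable debug logging for Apache HttpClient, which includes the
# "Connection manager is shutting down" message from
# PoolingHttpClientConnectionManager when the pool is closed
logging.level.org.apache.http=DEBUG
```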

Comment From: sarahpsequeira

We have not tried this with a different version of Spring Boot. We will also try adding more debug logging. Our assumption is that Spring Boot does not take into account any async operations started by a request. After setting "server.shutdown.grace-period" to 60 seconds and starting our service, we make a REST call to one of our endpoints and, while it is in progress, send a SIGTERM signal. On receiving SIGTERM, the service stops accepting new requests and continues to serve the existing request it received before the signal. The REST endpoint starts an async operation, which begins as soon as the request is served. However, even though the async operation has not completed and 60 seconds have not elapsed, Spring Boot starts shutting down the application and closing the connections. Shouldn't Spring Boot also wait internally for async operations to complete within those 60 seconds before starting to shut down the context, connection pools, etc.? Since the async task keeps running while the connection pool is closed, we run into the error above.

Comment From: philwebb

Shouldn't spring boot internally also wait for the async operations to be completed in that 60seconds and then only start shutting down context, connections pools etc?

That's an interesting problem, we'd have to have a look and see if the web servers that we support have the hook points we need.

Comment From: wilkinsona

We wait, across all three servlet containers, for requests that have used the Servlet API's async support to complete. This is separate from a general async operation (application code dispatching work to a thread pool, for example), which is what I believe is being reported here.

When there are no longer any in-flight requests, shutdown processing will proceed by closing the application context. This will trigger bean destruction and it's at this point that something needs to wait to prevent the connection pool being used by the AWS message queue integration being closed prematurely.

I don't think it's possible for Spring Framework or Boot to understand the interaction between all of the components in the application to handle this automatically. I can't say much more than that without knowing more about the beans that are involved and why, apparently, the connection pool being used by the AWS message queue integration has been closed before something that depends on that integration has completed its work.
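As a plain-JDK sketch of the coordination the application itself needs (not Spring's own shutdown code), the key idea is that the executor must be drained before the shared resource its tasks use is closed. The names below are illustrative; `ThreadPoolTaskExecutor` with `setWaitForTasksToCompleteOnShutdown(true)` performs an equivalent wait:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class GracefulExecutorShutdown {

    // Stop accepting new tasks, wait for in-flight tasks to finish,
    // and only then "close" the shared resource (here, a stand-in for
    // the HTTP connection pool). Returns true if all tasks finished
    // within the timeout.
    static boolean shutdownThenClose(ExecutorService executor, AtomicBoolean resourceOpen) {
        executor.shutdown(); // no new tasks; in-flight tasks keep running
        boolean finished = false;
        try {
            finished = executor.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        resourceOpen.set(false); // close the resource only after the tasks are done
        return finished;
    }

    public static void main(String[] args) {
        AtomicBoolean pool = new AtomicBoolean(true); // stands in for the connection pool
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            try {
                Thread.sleep(200); // simulate the async message publish
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // If the pool were closed at this point, the task would fail
            // just like the "Connection pool shut down" error in the report.
            System.out.println("pool open during task: " + pool.get());
        });
        System.out.println("tasks finished before close: " + shutdownThenClose(executor, pool));
    }
}
```

If the close happened before `awaitTermination` returned, the still-running task would observe a closed resource, which is exactly the race described in this issue.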

Comment From: philwebb

I misunderstood the issue and assumed that we didn't support the Servlet async API. Since we already support that, I'm not sure what else we can do.

@sarahpsequeira I think we're going to need a sample application that shows the problem before we can process this one any further.

Comment From: sarahpsequeira

Thank you for looking into this. We will provide a sample application very soon.

Comment From: spring-projects-issues

If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.

Comment From: sarahpsequeira

Please find attached the link to the sample application with the steps to reproduce in the readme https://github.com/sarahpsequeira/GracefulShutdownSample. Please let us know if you need any more information

Comment From: wilkinsona

Thanks for the sample. There's a race condition between the context being closed and the notification being sent. This race condition occurs because you have, via your AsyncConfigurerSupport implementation, configured @Async to use an Executor that isn't a bean and, therefore, isn't considered during context close processing. It also hasn't been configured to wait for tasks to complete when it's being shut down.

You can correct the problems above by modifying DemoApplication to look like the following:

@EnableJpaRepositories("com.sample.application.demo.repository")
@EntityScan("com.sample.application.demo.model")
@SpringBootApplication
@EnableAsync
public class DemoApplication extends AsyncConfigurerSupport {

    private final ThreadPoolTaskExecutor executor;

    DemoApplication(@Value("${spring.threads.corePoolSize}") int corePoolSize,
            @Value("${spring.threads.maxPoolSize}") int maxPoolSize,
            @Value("${spring.threads.queueCapacity}") int queueCapacity) {
        executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(corePoolSize);
        executor.setMaxPoolSize(maxPoolSize);
        executor.setQueueCapacity(queueCapacity);
        executor.setThreadNamePrefix("Graceful-Startup-");
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setAwaitTerminationSeconds(30);
        executor.initialize();
    }

    @Override
    public Executor getAsyncExecutor() {
        return this.executor;
    }

    @Bean
    public ThreadPoolTaskExecutor asyncExecutor() {
        return this.executor;
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

}

With these changes in place, the ThreadPoolTaskExecutor will be shut down before the AmazonSNSClient is shut down. When the executor is shut down, it will wait for any tasks to complete before allowing close processing to continue. This will allow your @Async UserAccessedEventListener to send its notification before the AmazonSNSClient that it uses is shut down.

Comment From: sarahpsequeira

Thank you for explaining the issue. We tried this and it works.

Comment From: wilkinsona

@sarahpsequeira That's good to hear. Thanks for letting us know.