Affected versions: 5.3.10, 5.3.16.

We believe it is present in older versions as well.

We are using a spring-integration:inbound-gateway with a limited number of concurrent consumers (for example, 4), which causes 4 listener threads to be created.
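For illustration, our setup looks roughly like the following (a minimal sketch using the int-jms XML namespace; the bean id, connection factory, destination, and channel names are placeholders, not our real configuration):

```xml
<!-- Placeholder names; concurrent-consumers="4" makes the underlying
     DefaultMessageListenerContainer start 4 consumer threads. -->
<int-jms:inbound-gateway id="jmsInGateway"
        connection-factory="jmsConnectionFactory"
        request-destination-name="requests.queue"
        request-channel="jmsInChannel"
        concurrent-consumers="4"/>
```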

Our application uses Artemis as the JMS broker, but the issue should occur with any JMS broker.

When the application is disconnected from the Artemis server (for example, when Artemis is restarted), the existing listener threads remain alive, and when the connection is restored a new set of 4 threads is created.

Over time, the application runs out of memory.

Debugging org.springframework.jms.listener.DefaultMessageListenerContainer, and in particular its inner AsyncMessageListenerInvoker class, shows that when an exception is thrown the old listener threads are not cleaned up properly.
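This is how we observed the leak (a diagnostic sketch only; the thread-name filter assumes the default SimpleAsyncTaskExecutor naming derived from the placeholder gateway id above):

```java
// Diagnostic sketch, not application code. Counts live JVM threads whose names
// match the listener container's default naming (e.g. "jmsInGateway.container-1").
// After each Artemis restart this count grows by another 4 instead of staying at 4.
long listenerThreads = Thread.getAllStackTraces().keySet().stream()
        .filter(t -> t.getName().startsWith("jmsInGateway.container"))
        .count();
System.out.println("Listener threads: " + listenerThreads);
```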

Comment From: simonbasle

Hi @alexschwarzman, can you pinpoint where in the AsyncMessageListenerInvoker the exception bubbles up and is left uncaught? Do you think it would be possible to write a (unit or integration) test that reproduces the issue with as few external components as possible?

Could the problem lie in the inbound gateway, or its configuration? This caught my eye in the Spring Integration documentation:

Starting with version 5.1, when the endpoint is stopped while the application remains running, the underlying listener container is shut down, closing its shared connection and consumers. Previously, the connection and consumers remained open. To revert to the previous behavior, set the shutdownContainerOnStop on the JmsInboundGateway to false.
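If that change is what you are running into, reverting to the old behavior is a single property on the gateway, e.g. (a sketch only, assuming the gateway is defined as a JmsInboundGateway bean; the container and listener beans are illustrative and assumed to be defined elsewhere):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jms.ChannelPublishingJmsMessageListener;
import org.springframework.integration.jms.JmsInboundGateway;
import org.springframework.jms.listener.AbstractMessageListenerContainer;

@Configuration
public class JmsGatewayConfig {

    // Sketch: keeps the shared connection and consumers open when the endpoint
    // is stopped, i.e. the pre-5.1 behavior described in the quoted documentation.
    @Bean
    public JmsInboundGateway jmsInGateway(AbstractMessageListenerContainer container,
            ChannelPublishingJmsMessageListener listener) {
        JmsInboundGateway gateway = new JmsInboundGateway(container, listener);
        gateway.setShutdownContainerOnStop(false);
        return gateway;
    }
}
```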

Comment From: spring-projects-issues

If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.

Comment From: spring-projects-issues

Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.