Repository with application that reproduces the issue: https://github.com/tkaesler/spring-leak-reproducer

Spring Boot Starter Parent Version: 3.1.1

When continuously calling a function that reduces a Flux, after a certain time (3 minutes in the case of the reproducer) a leak is detected:

2023-07-20T13:02:13.629+02:00 ERROR 51552 --- [tor-tcp-epoll-1] io.netty.util.ResourceLeakDetector       : LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
#1:
    io.netty.buffer.AbstractPooledDerivedByteBuf.deallocate(AbstractPooledDerivedByteBuf.java:87)
    io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:111)
    io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:101)
    io.netty.buffer.WrappedByteBuf.release(WrappedByteBuf.java:1037)
    io.netty.buffer.SimpleLeakAwareByteBuf.release(SimpleLeakAwareByteBuf.java:102)
    io.netty.buffer.AdvancedLeakAwareByteBuf.release(AdvancedLeakAwareByteBuf.java:942)
    io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:90)
        ...
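
For quick reference, the failing shape is an endpoint that is hit continuously and reduces a Flux streamed from the database. Below is a minimal sketch of that pattern; the entity, repository, and endpoint names are hypothetical stand-ins, not the reproducer's actual code:

import org.springframework.data.annotation.Id;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

// Hypothetical entity mapped by Spring Data R2DBC.
record SomeEntity(@Id Long id) {}

// Hypothetical reactive repository; findAll() streams rows that are decoded
// from reference-counted Netty buffers by the R2DBC driver.
interface SomeEntityRepository extends ReactiveCrudRepository<SomeEntity, Long> {}

@RestController
class ReduceController {

    private final SomeEntityRepository repository;

    ReduceController(SomeEntityRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/reduce")
    Mono<Long> reduce() {
        // reduce() folds the database-backed Flux into a single value; calling
        // this endpoint continuously eventually triggers the LEAK report above.
        return repository.findAll().reduce(0L, (count, entity) -> count + 1);
    }
}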

What I haven't tested properly:

* Whether the entity has to come from a database; I couldn't reproduce the issue without one thus far (a minimal in-memory variant is sketched after this list)
* Whether newer versions of Reactor fix this issue
* Whether it's a problem with Reactor itself (it seems that way, but my knowledge there is still somewhat limited)
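
For the first point, this is a sketch of the purely in-memory variant of the same reduce pattern, which has not reproduced the leak so far (no database and no reference-counted Netty buffers involved):

import reactor.core.publisher.Flux;

class InMemoryReduce {
    public static void main(String[] args) {
        // Same reduce shape as the endpoint above, but over an in-memory Flux.
        Long count = Flux.range(0, 1_000)
                .reduce(0L, (acc, i) -> acc + 1)
                .block();
        System.out.println(count);
    }
}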

Comment From: wilkinsona

Thanks for the sample. FWIW, it took almost 6 minutes for the problem to occur on my machine (an Intel Mac running macOS 13.4.1 (c)) using Java 17.0.5.
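
As an aside (not something from the sample itself): leak reports depend on Netty sampling buffers and on garbage-collection timing, which presumably explains why the time to failure varies between machines. To surface leaks sooner when reproducing, Netty's leak detection can be made exhaustive via its standard API:

import io.netty.util.ResourceLeakDetector;

// Run before any Netty buffers are allocated (e.g. at the top of main).
// PARANOID tracks every buffer instead of a sample, so a leak is reported
// as soon as the leaked buffer is garbage-collected.
// Equivalent JVM flag: -Dio.netty.leakDetection.level=paranoid
ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);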

The complete error was the following:

2023-07-21T10:01:23.379+01:00 ERROR 1972 --- [ctor-tcp-nio-14] io.netty.util.ResourceLeakDetector       : LEAK: DataRow.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
Created at:
    io.r2dbc.postgresql.message.backend.DataRow.<init>(DataRow.java:37)
    io.r2dbc.postgresql.message.backend.DataRow.decode(DataRow.java:141)
    io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decodeBody(BackendMessageDecoder.java:65)
    io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decode(BackendMessageDecoder.java:39)
    reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:208)
    reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:224)
    reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:292)
    reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:401)
    reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:411)
    reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:113)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:333)
    io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:454)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.base/java.lang.Thread.run(Thread.java:833)

I agree that this seems like a Reactor problem, particularly as you have identified that it's caused in some way by the reduce operator. As such, I think that it would be best for the Reactor team to investigate in the first instance. Please open a Reactor issue so that they can do so.

Comment From: tkaesler

Thanks for the info/feedback. For anyone curious, here's the Reactor issue: https://github.com/reactor/reactor-core/issues/3541