Hi, if I use Spring Boot 2.5.x I get an exception that I didn't get before when I use rsc on an Apple M1 (I haven't tried it on Intel yet, but I will) to invoke an RSocket controller endpoint. Given an application:
package com.example.demo;

import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.rsocket.RSocketRequester;
import org.springframework.stereotype.Controller;

@Controller
@SpringBootApplication
public class DemoApplication {

    @MessageMapping("hello")
    String hello() {
        return "hello, world!";
    }

    @Bean
    ApplicationRunner runner() {
        return args -> {
            Thread.sleep(2_000);
            var requester = RSocketRequester.builder().tcp("localhost", 8181);
            var message = requester.route("hello").retrieveMono(String.class).block();
            System.out.println("message: " + message);
        };
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
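(Not shown above: the application also needs its RSocket server listening on that port, which I'd assume is configured with a property along these lines in application.properties.)

spring.rsocket.server.port=8181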
With a request:
rsc tcp://localhost:8181 --route hello
I get the following exception immediately after. NB: I also get my response, so it clearly goes end to end and works, but the exception is jarring. It would be nice if the logging swallowed that exception or something (a rough idea for that is sketched after the trace below).
2021-10-22 14:23:14.332 ERROR 12845 --- [ctor-http-nio-3] reactor.core.publisher.Operators : Operator called default onErrorDropped
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.util.concurrent.CancellationException: Disposed
Caused by: java.util.concurrent.CancellationException: Disposed
at io.rsocket.internal.UnboundedProcessor.dispose(UnboundedProcessor.java:550) ~[rsocket-core-1.1.1.jar:na]
at io.rsocket.transport.netty.TcpDuplexConnection.doOnClose(TcpDuplexConnection.java:67) ~[rsocket-transport-netty-1.1.1.jar:na]
at io.rsocket.internal.BaseDuplexConnection.lambda$new$0(BaseDuplexConnection.java:30) ~[rsocket-core-1.1.1.jar:na]
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.runFinally(FluxDoFinally.java:163) ~[reactor-core-3.4.11.jar:3.4.11]
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:146) ~[reactor-core-3.4.11.jar:3.4.11]
at reactor.core.publisher.SinkEmptyMulticast$VoidInner.complete(SinkEmptyMulticast.java:238) ~[reactor-core-3.4.11.jar:3.4.11]
at reactor.core.publisher.SinkEmptyMulticast.tryEmitEmpty(SinkEmptyMulticast.java:70) ~[reactor-core-3.4.11.jar:3.4.11]
at reactor.core.publisher.SinkEmptySerialized.tryEmitEmpty(SinkEmptySerialized.java:46) ~[reactor-core-3.4.11.jar:3.4.11]
at io.rsocket.internal.BaseDuplexConnection.dispose(BaseDuplexConnection.java:51) ~[rsocket-core-1.1.1.jar:na]
at io.rsocket.transport.netty.TcpDuplexConnection.lambda$new$0(TcpDuplexConnection.java:49) ~[rsocket-transport-netty-1.1.1.jar:na]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1182) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:773) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:749) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:105) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:174) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.69.Final.jar:4.1.69.Final]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
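In the meantime, one way I could imagine hiding that log line (just a sketch, and it only masks the symptom: Hooks.onErrorDropped is a global hook, so it affects every Reactor pipeline in the JVM) would be to register a dropped-error hook that skips the benign CancellationException, e.g. by calling something like the hypothetical helper below at the top of main():

import java.util.concurrent.CancellationException;

import org.slf4j.LoggerFactory;
import reactor.core.publisher.Hooks;

class DroppedErrorFilter {

    static void install() {
        Hooks.onErrorDropped(ex -> {
            // The "Disposed" CancellationException raised when the rsc connection closes is
            // harmless here, so skip it (it may arrive wrapped, hence the cause check) and
            // log everything else roughly the way Reactor's default hook would.
            if (ex instanceof CancellationException || ex.getCause() instanceof CancellationException) {
                return;
            }
            LoggerFactory.getLogger(DroppedErrorFilter.class).error("Operator called onErrorDropped", ex);
        });
    }
}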
Comment From: joshlong
I've got a reproduction of this in https://github.com/joshlong/rsocket-sadness-10-22-2021
Comment From: Polve
I have the same problem on my machine, Spring Boot v2.6.0-RC1 running on JDK 17.
Comment From: rstoyanchev
I believe this is the same as https://github.com/rsocket/rsocket-java/issues/1018 and a fix will be provided on that side. Please also see the discussion there for workarounds.