Affects: 5.3.12
Hello @poutsma,
The problem: when generating FilePart#content(), the PartGenerator uses the following code:
private Flux<DataBuffer> partContent() {
    return DataBufferUtils
            .readByteChannel(
                    () -> Files.newByteChannel(this.file, StandardOpenOption.READ),
                    DefaultDataBufferFactory.sharedInstance, 1024)
            .subscribeOn(PartGenerator.this.blockingOperationScheduler);
}
The generated Flux<DataBuffer> contains data buffers with a capacity of 1024 bytes. This becomes a performance issue when dealing with big files, because the call to FilePart#transferTo takes a significant amount of time (too many data buffers).
For example, the call to FilePart#transferTo for a 90 MB file takes 6.6 seconds with a capacity of 1024 bytes. I used ByteBuddy to intercept the call to DataBufferUtils#readByteChannel and set the capacity to 32 KB, and the operation took 312 milliseconds.
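For reference, here is a minimal sketch of the kind of copy that interception achieves, assuming direct access to the part's backing file (the class name, the paths, and the 32 KB value are illustrative, not part of the framework API):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.DefaultDataBufferFactory;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class LargeBufferCopy {

    // Reads the source file in 32 KB chunks instead of the hard-coded 1024 bytes
    // and streams them to the destination; DataBufferUtils.write releases each buffer.
    static Mono<Void> copy(Path source, Path destination) {
        Flux<DataBuffer> content = DataBufferUtils.readByteChannel(
                () -> Files.newByteChannel(source, StandardOpenOption.READ),
                DefaultDataBufferFactory.sharedInstance, 32 * 1024);
        return DataBufferUtils.write(content, destination);
    }
}

The speedup presumably comes from doing roughly 32 times fewer channel reads, buffer allocations, and per-buffer hand-offs through the reactive pipeline.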
What I propose as an enhancement is to make the bufferSize configurable (via properties or ServerCodecConfigurer), or at least to use server.netty.max-chunk-size as a basis. Other suggestions are also welcome.
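To make the proposal concrete, here is a hypothetical configuration sketch. The setBufferSize method does not exist on DefaultPartHttpMessageReader today; it is shown purely to illustrate what the enhancement could look like:

import org.springframework.context.annotation.Configuration;
import org.springframework.http.codec.ServerCodecConfigurer;
import org.springframework.http.codec.multipart.DefaultPartHttpMessageReader;
import org.springframework.web.reactive.config.WebFluxConfigurer;

@Configuration
class MultipartBufferConfig implements WebFluxConfigurer {

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        DefaultPartHttpMessageReader partReader = new DefaultPartHttpMessageReader();
        // Hypothetical setter illustrating the proposal; it does not exist in 5.3.12.
        partReader.setBufferSize(32 * 1024);
        configurer.defaultCodecs().multipartReader(partReader);
    }
}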
Thank you and keep up the awesome work!
PS: I think the provided code has been moved to DefaultParts in newer versions of the Framework.
Comment From: poutsma
As of Spring Framework 5.3.13, we use Files.copy in a bounded elastic scheduler; see here.
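In outline, that approach looks like this (a minimal sketch, not the actual DefaultParts code): the blocking Files.copy is offloaded to the bounded elastic scheduler, so the transfer becomes a single bulk copy instead of a stream of small buffers.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

class BlockingFileCopy {

    // Runs the blocking copy off the event loop; completes empty when the copy is done.
    static Mono<Void> transferTo(Path tempFile, Path destination) {
        return Mono.<Void>fromRunnable(() -> {
            try {
                Files.copy(tempFile, destination, StandardCopyOption.REPLACE_EXISTING);
            }
            catch (IOException ex) {
                throw new UncheckedIOException(ex);
            }
        }).subscribeOn(Schedulers.boundedElastic());
    }
}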
Can you check whether you are facing the same performance issue when using the most recent version of Spring Framework?
Comment From: saad14092
I've tested using Spring Framework 5.3.13 and the issue is no longer relevant.
Thanks again for your reply!
Comment From: keyzj
Hello!
I'm currently on Spring version 5.3.17 and experiencing the same performance issue: when uploading a large file (2.3 GB), the content is split into 1024-byte buffers, so the upload takes ages.