Affects: 5.2.10+, 5.3.0+
I ran into the following issue while using `AbstractJackson2Encoder` with the Jackson CSV serializer:

```
java.lang.NullPointerException: null
	at com.fasterxml.jackson.core.util.ByteArrayBuilder.write(ByteArrayBuilder.java:245) ~[jackson-core-2.13.5.jar:2.13.5]
	at com.fasterxml.jackson.dataformat.csv.impl.UTF8Writer.close(UTF8Writer.java:62) ~[jackson-dataformat-csv-2.13.5.jar:2.13.5]
	at com.fasterxml.jackson.dataformat.csv.impl.CsvEncoder.close(CsvEncoder.java:994) ~[jackson-dataformat-csv-2.13.5.jar:2.13.5]
	at com.fasterxml.jackson.dataformat.csv.CsvGenerator.close(CsvGenerator.java:503) ~[jackson-dataformat-csv-2.13.5.jar:2.13.5]
	at org.springframework.http.codec.json.AbstractJackson2Encoder.lambda$encode$2(AbstractJackson2Encoder.java:173) ~[spring-web-5.3.27.jar:5.3.27]
```
The problem is the optimization in commit 7bee3d157407c2408653e1c40608c0244be6be1d: after `byteBuilder.release()` is called, the builder must not be used again, but if the generator has its own internal buffers then `generator.close()` can trigger a flush into `byteBuilder`.

It should be fixable by reordering the operations so that `byteBuilder` is released last. This probably also means the generator needs to be flushed and checked for a trailing `DataBuffer` that still needs to be sent before terminating the stream (roughly: `concatWith(...)` emitting a final `DataBuffer` if flushing the generator produced any bytes).
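The ordering hazard described above can be illustrated with a minimal, Jackson-free sketch. The `Buffer` and `Generator` classes below are hypothetical stand-ins (not the real `ByteArrayBuilder` or `CsvGenerator`); they only model "release nulls the backing array" and "close flushes an internal buffer":

```java
// Minimal simulation of the release-order hazard (hypothetical classes,
// not the real Jackson/Spring types).
public class ReleaseOrderDemo {

    // Stand-in for ByteArrayBuilder: release() drops the backing array.
    static class Buffer {
        byte[] data = new byte[16];
        int len = 0;
        void write(byte b) { data[len++] = b; }  // NPE if data == null
        void release() { data = null; }
    }

    // Stand-in for a generator with internal buffers: close() flushes them.
    static class Generator {
        final Buffer out;
        byte pending = 'h';          // e.g. a buffered CSV header byte
        boolean hasPending = true;
        Generator(Buffer out) { this.out = out; }
        void close() {               // close() flushes the internal buffer
            if (hasPending) { out.write(pending); hasPending = false; }
        }
    }

    public static void main(String[] args) {
        Buffer byteBuilder = new Buffer();
        Generator generator = new Generator(byteBuilder);
        try {
            byteBuilder.release();   // wrong order: buffer released first
            generator.close();       // flush now hits the released buffer
            System.out.println("no error");
        } catch (NullPointerException ex) {
            System.out.println("NPE: close() wrote to a released buffer");
        }
    }
}
```

Swapping the two calls in `main` (close first, then release) makes the program print `no error` instead.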
Comment From: bclozel
Thanks for the analysis, but you're way ahead of us: this description doesn't help us understand the problem, nor whether the proposed solution would break other use cases.

Can you describe what's required to reproduce this problem, or share a minimal sample application that throws this error? We've had this optimization in place since 5.2.x and we would like to ensure that any change there doesn't cause regressions.
Comment From: Kiskae
> Can you describe what's required to reproduce this problem, or share a minimal sample application that throws this error? We've had this optimization in place since 5.2.x and we would like to ensure that any change there doesn't cause regressions.
The fundamental issue is with `JsonGenerator` implementations that have internal buffers, like `CsvGenerator`, when combined with the streaming Jackson encoder.

When the `CsvGenerator` is created it writes the CSV header into its internal buffer, but does not flush it to `byteBuilder` until an actual value is serialized. This means that if you try to serialize an empty `Flux`, the code reaches the `doAfterTerminate` block with data still in the internal buffer. It then tries to flush this buffer during `close()`, but since `byteBuilder` has already been released, this results in the `NullPointerException`.
Basically there are two separate problems:

1. Resources need to be released in reverse order of creation. In the current code, `generator` can hold a reference to `byteBuilder` that may be used during `generator.close()` after `byteBuilder.release()` has already been called. This is what causes the NPE; the current code only works because nobody happened to have data in their internal buffer during `close()`.
2. If the generator writes data before or after serializing the values (like the header in CSV), that data is currently discarded without being written. Data is only written out during value serialization, so nothing is emitted when there are no values (discarding the initial data) or when data is written after the values (like a trailer written in `close()`).
   - Essentially, the encoder assumes there is only data to write after `SequenceWriter.flush()`, but data can also be written after `JsonGenerator.close()`.
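The "reverse order of creation" rule from point 1 can be sketched with Java's try-with-resources, which closes resources in reverse declaration order. The two `Closeable` lambdas below are hypothetical stand-ins for `byteBuilder` and `generator`, only logging the order in which they are torn down:

```java
// Sketch of teardown in reverse order of creation (hypothetical stand-ins
// for byteBuilder and the Jackson generator, not the real types).
import java.io.Closeable;
import java.io.IOException;

public class ReverseOrderTeardown {
    public static void main(String[] args) throws IOException {
        StringBuilder log = new StringBuilder();

        // Created first, so it must be released last.
        Closeable byteBuilder = () -> log.append("byteBuilder.release();");
        // Created second; its close() may still write into byteBuilder.
        Closeable generator = () -> log.append("generator.close();");

        try {
            // ... encode values here ...
        } finally {
            // try-with-resources closes in reverse declaration order:
            // generator first, then byteBuilder.
            try (Closeable b = byteBuilder; Closeable g = generator) {
                // nothing to do; the braces only drive the close order
            }
        }
        System.out.println(log);
    }
}
```

The program prints `generator.close();byteBuilder.release();`, i.e. the ordering the encoder needs: the generator's final flush happens while `byteBuilder` is still alive.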
Comment From: Kiskae
Minimal repro based on `CsvMapper`:

```java
public class JacksonCsvEncoder extends AbstractJackson2Encoder {

    public static final MediaType TEXT_CSV = new MediaType("text", "csv");

    public JacksonCsvEncoder() {
        this(CsvMapper.builder().build(), TEXT_CSV);
    }

    public JacksonCsvEncoder(ObjectMapper mapper, MimeType... mimeTypes) {
        super(mapper, mimeTypes);
        Assert.isInstanceOf(CsvMapper.class, mapper);
        setStreamingMediaTypes(List.of(TEXT_CSV));
    }

    @Override
    protected byte[] getStreamingMediaTypeSeparator(MimeType mimeType) {
        // CsvMapper emits newlines itself
        return new byte[0];
    }

    @Override
    protected ObjectWriter customizeWriter(ObjectWriter writer, MimeType mimeType,
            ResolvableType elementType, Map<String, Object> hints) {
        var mapper = (CsvMapper) getObjectMapper();
        return writer.with(mapper.schemaFor(elementType.toClass()).withHeader());
    }
}
```
If you try to use this encoder to encode a `Flux<Pojo>`, it should trigger the NPE above, though it probably doesn't work for more complex nested objects or objects with generics.

Personally, I've made the following change to fix the problem:
```java
var emitTrailers = Mono.defer(() -> {
    try {
        // allow the generator to emit trailers
        generator.close();
        byte[] buffer = byteBuilder.toByteArray();
        if (buffer.length != 0) {
            DataBuffer dataBuffer = bufferFactory.wrap(buffer);
            Hints.touchDataBuffer(dataBuffer, hints, logger);
            return Mono.just(dataBuffer);
        }
    }
    catch (IOException ex) {
        logger.error("Could not flush generator", ex);
    }
    return Mono.empty();
});

return Flux.from(inputStream)
        .map(value -> encodeStreamingValue(value, bufferFactory, hints, sequenceWriter, byteBuilder,
                separator))
        .concatWith(emitTrailers.flux())
        .doAfterTerminate(() -> {
            try {
                // emitTrailers doesn't get executed on exception
                if (!generator.isClosed()) {
                    generator.close();
                }
                byteBuilder.release();
            }
            catch (IOException ex) {
                logger.error("Could not close Encoder resources", ex);
            }
        });
```
Comment From: bclozel
Sorry for the delayed response.
I've tried to reproduce this issue with a dedicated test but failed to get the `NullPointerException`. Can you have a look at my branch and let me know if you spot something?

The change (switching the order in which the resources are closed) would still make sense, but I would like to ensure that we're fixing the issue you're having. Thanks!
Comment From: Kiskae
You need to encode `Flux.empty()` to trigger the NPE, because it is caused by the buffering of the CSV headers, and each emitted item causes those buffers to be flushed.
Comment From: bclozel
This is now fixed on the main branch, ready to be released with 6.1.1. I have scheduled a backport for 6.0.15 and 5.3.32 (no release dates scheduled for now). Sorry again about the delay and hopefully the fix will find its way into your applications.
Thanks for the help @Kiskae !