Spring Data R2DBC's DatabaseClient provided a rich fluent API, including convenient mapping of results onto a particular type through as(Class). When DatabaseClient was migrated to Spring Framework, we decided to drop that feature and keep only functionality we knew how to support from a Spring Framework perspective: target object mapping in Spring Data relies on core Spring Data infrastructure that isn't available at the Spring Framework level.

We see a lot of demand for a method that is convenient to use from the caller side and that applies convention-based mapping of primitives (Java primitives and R2DBC primitives) and data classes.

Right now, mapping requires a hand-written mapping function supplied through map(BiFunction<Row, RowMetadata, T>).
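For illustration, a minimal sketch of such a hand-written mapping function, assuming a hypothetical Person record with id and name columns and an available ConnectionFactory:

```java
import io.r2dbc.spi.ConnectionFactory;
import org.springframework.r2dbc.core.DatabaseClient;
import reactor.core.publisher.Flux;

record Person(Long id, String name) {}

// given a ConnectionFactory, e.g. injected by the container
DatabaseClient client = DatabaseClient.create(connectionFactory);

// every column has to be extracted and converted by hand
Flux<Person> people = client.sql("SELECT id, name FROM person")
        .map((row, metadata) -> new Person(
                row.get("id", Long.class),
                row.get("name", String.class)))
        .all();
```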

Comment From: jhoeller

I've added this to our backlog for the time being but I'd be happy to bring it into a 5.3.x release soon.

To get started, ideally we'd have at least a sketch of what such a method's implementation should look like. Are we trying to make this fully pluggable or are we rather just baking in some common cases out of the box, leaving the rest up to custom mapping functions?

Comment From: mp911de

Generally speaking, it would be convenient to use Spring Data (or any other mapping framework) if it is on the classpath when mapping query results, regardless of the SQL connector technology (JDBC or R2DBC). A pluggable variant could provide streamlined instantiation strategies (for example reflection-less) or apply column-name mappings but it comes with increased complexity and the need for an SPI.

A non-pluggable arrangement could serve the most common use cases. Such an API needs to differentiate whether the target type is expected to map onto a single column, for example as(Long.class) (Java primitive), as(LocalDateTime.class) (R2DBC primitive), as(Blob.class) (R2DBC primitive), or as(Json.class) (R2DBC Postgres primitive), or whether the result should be mapped onto a data class/record/POJO.
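To make the distinction concrete, a hypothetical sketch in the old Spring Data R2DBC fluent style (as(Class) is not a Spring Framework API, and the Person record is assumed):

```java
// Single-column target: the first column maps onto a Java/R2DBC primitive.
Flux<Long> ids = client.execute()
        .sql("SELECT id FROM person")
        .as(Long.class)
        .fetch().all();

// Multi-column target: the whole row maps onto a data class/record/POJO.
Flux<Person> people = client.execute()
        .sql("SELECT id, name FROM person")
        .as(Person.class)
        .fetch().all();
```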

Since R2DBC driver primitives vary across drivers, it doesn't make sense to update our framework code for each new driver that we discover. I wonder whether it would make sense to provide an API at the R2DBC level that allows consumers to identify R2DBC and driver primitives (r2dbc/r2dbc-spi#192).

Comment From: schauder

Could we port BeanPropertyRowMapper to R2DBC?
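For the sake of discussion, a rough sketch of what such a port could look like as a BiFunction usable with map(..), assuming naive column-name-to-property matching and no type conversion (the real BeanPropertyRowMapper does considerably more, e.g. underscore-to-camel-case matching):

```java
import java.util.function.BiFunction;

import io.r2dbc.spi.Row;
import io.r2dbc.spi.RowMetadata;

import org.springframework.beans.BeanUtils;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.PropertyAccessorFactory;

public class SimpleBeanRowMapper<T> implements BiFunction<Row, RowMetadata, T> {

	private final Class<T> mappedClass;

	public SimpleBeanRowMapper(Class<T> mappedClass) {
		this.mappedClass = mappedClass;
	}

	@Override
	public T apply(Row row, RowMetadata metadata) {
		T instance = BeanUtils.instantiateClass(this.mappedClass);
		BeanWrapper beanWrapper = PropertyAccessorFactory.forBeanPropertyAccess(instance);
		for (var column : metadata.getColumnMetadatas()) {
			String name = column.getName();
			// Only columns whose name matches a writable property are mapped.
			if (beanWrapper.isWritableProperty(name)) {
				beanWrapper.setPropertyValue(name, row.get(name));
			}
		}
		return instance;
	}
}
```

Usage would then be client.sql("SELECT ...").map(new SimpleBeanRowMapper<>(Person.class)).all().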

Comment From: cparaskeva

Any updates on that?

Comment From: nitw-shobhit

It seems org.springframework.data.r2dbc.core.DatabaseClient.as(--) was also doing some kind of magic to Instant fields: when I use org.springframework.r2dbc.core.DatabaseClient.map instead and read the value with row.get("...", Instant.class), the time zone information is lost and I get the value in the local time zone.
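If the column is a zoned timestamp (e.g. a Postgres timestamptz), one possible workaround is to request an offset-carrying type and normalize explicitly — a sketch, assuming a created_at column:

```java
import java.time.Instant;
import java.time.OffsetDateTime;

Flux<Instant> createdAt = client.sql("SELECT created_at FROM person")
        .map((row, metadata) -> {
            // Ask the driver for the offset-carrying type...
            OffsetDateTime value = row.get("created_at", OffsetDateTime.class);
            // ...and normalize to an Instant (UTC) explicitly.
            return value.toInstant();
        })
        .all();
```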

Comment From: pawelryznar

@mp911de

Right now, mapping requires a hand-written mapping function supplied through map(BiFunction<Row, RowMetadata, T>).

How do custom converters fit in here? I have a JSONB field that requires a custom converter, and when I try to call row.get("fieldA", MyCustomType::class.java) in the mapping function, I get:

```
Suppressed: java.lang.IllegalArgumentException: Cannot decode value of type MyCustomType
	at io.r2dbc.postgresql.codec.DefaultCodecs.decode(DefaultCodecs.java:153)
	at io.r2dbc.postgresql.PostgresqlRow.decode(PostgresqlRow.java:90)
	at io.r2dbc.postgresql.PostgresqlRow.get(PostgresqlRow.java:77)
```
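A possible interim workaround is to read the column as the driver's Json type and convert by hand in the mapping function — a sketch assuming the Postgres driver and Jackson, where MyCustomType and objectMapper are placeholders:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import io.r2dbc.postgresql.codec.Json;

Flux<MyCustomType> values = client.sql("SELECT field_a FROM my_table")
        .map((row, metadata) -> {
            Json json = row.get("field_a", Json.class);
            try {
                // Decode the raw JSONB payload with Jackson instead of a driver codec.
                return objectMapper.readValue(json.asString(), MyCustomType.class);
            }
            catch (JsonProcessingException ex) {
                throw new IllegalStateException("Cannot map JSONB column", ex);
            }
        })
        .all();
```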

Comment From: frayneposset

Not sure whether this is the right issue to report this on, but the documentation still says 'as' is supported. Is the documentation wrong, or has this issue been resolved?

See here:

https://spring.io/projects/spring-data-r2dbc

which has this example code in it:

```java
Flux<Person> all = client.execute()
        .sql("SELECT id, name FROM person")
        .as(Person.class)
        .fetch().all();
```

Comment From: anudeep-mj


No, that's not available anymore; it has been deprecated.

Comment From: ah1508

Following up on @schauder's suggestion to port BeanPropertyRowMapper from JdbcTemplate, a port of DataClassRowMapper would also be helpful (for records).

Comment From: simonbasle

There seems to be a bit more depth to this issue, but I'm in the process of porting BeanPropertyRowMapper and DataClassRowMapper as R2DBC-compatible mapping functions. This is explored in PR gh-30530 (which doesn't supersede this issue).

Comment From: jhoeller

Note that something similar is available through query(Class) on the new JdbcClient in 6.1. See #30931 and https://github.com/spring-projects/spring-framework/issues/26594#issuecomment-1678725276 for the context there.
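For comparison, the JdbcClient counterpart available since 6.1, assuming a Person record and an available DataSource:

```java
import java.util.List;

import org.springframework.jdbc.core.simple.JdbcClient;

JdbcClient jdbcClient = JdbcClient.create(dataSource);

List<Person> people = jdbcClient.sql("SELECT id, name FROM person")
        .query(Person.class)
        .list();
```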

Unfortunately, this is not totally straightforward to provide with the R2DBC DatabaseClient since we do not have the equivalent of SingleColumnRowMapper and SimplePropertyRowMapper there yet. We'll see what we can do about it, along with #27282 for parameter source objects along the lines of paramSource(Object) on JdbcClient.

Comment From: jhoeller

I'm introducing a mapProperties(Class) method on DatabaseClient, supporting bean properties and record components for creating a result object per row. In addition, the accompanying mapValue(Class) provides a simple as-style mapping to a database-supported value type, extracting the first column with the given type via the R2DBC driver.

In contrast to JdbcClient, those are provided as distinct methods rather than a unified map(Class) since there is no good rule for differentiating between a database-supported value type and a bean/record. The mapProperties variant uses DataClassRowMapper whereas mapValue calls the corresponding Row.get method with index 0 and the given type.
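A usage sketch of the two methods as described above, assuming a Person record (names follow this comment; the released signatures may differ slightly):

```java
// Map each row onto bean properties / record components.
Flux<Person> people = client.sql("SELECT id, name FROM person")
        .mapProperties(Person.class)
        .all();

// Extract the first column as a database-supported value type.
Mono<Long> count = client.sql("SELECT COUNT(*) FROM person")
        .mapValue(Long.class)
        .one();
```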

Note that plain field holders are not supported since this does not seem idiomatic with R2DBC. Record classes or custom classes with constructors and/or bean-style accessors can be very concise and are actually better suited for inline use in a reactive pipeline; this shows particularly well for parameter objects (#27282), which we also support now.