Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [X] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Sorry, this is not easily reproducible as the dataframe interchange protocol for pyarrow is still a work in progress, but I think the error is quite clear:
import pyarrow as pa
table = pa.table({"a": [1, 2, 3, None]})
exchange_df = table.__dataframe__()
from pandas.core.interchange.from_dataframe import from_dataframe
from_dataframe(exchange_df)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/alenkafrim/repos/pyarrow-dev-9/lib/python3.9/site-packages/pandas/core/interchange/from_dataframe.py", line 53, in from_dataframe
return _from_dataframe(df.__dataframe__(allow_copy=allow_copy))
File "/Users/alenkafrim/repos/pyarrow-dev-9/lib/python3.9/site-packages/pandas/core/interchange/from_dataframe.py", line 74, in _from_dataframe
pandas_df = protocol_df_chunk_to_pandas(chunk)
File "/Users/alenkafrim/repos/pyarrow-dev-9/lib/python3.9/site-packages/pandas/core/interchange/from_dataframe.py", line 122, in protocol_df_chunk_to_pandas
columns[name], buf = primitive_column_to_ndarray(col)
File "/Users/alenkafrim/repos/pyarrow-dev-9/lib/python3.9/site-packages/pandas/core/interchange/from_dataframe.py", line 160, in primitive_column_to_ndarray
data = set_nulls(data, col, buffers["validity"])
File "/Users/alenkafrim/repos/pyarrow-dev-9/lib/python3.9/site-packages/pandas/core/interchange/from_dataframe.py", line 504, in set_nulls
null_pos = buffer_to_ndarray(valid_buff, valid_dtype, col.offset, col.size)
File "/Users/alenkafrim/repos/pyarrow-dev-9/lib/python3.9/site-packages/pandas/core/interchange/from_dataframe.py", line 395, in buffer_to_ndarray
raise NotImplementedError(f"Conversion for {dtype} is not yet supported.")
NotImplementedError: Conversion for (<DtypeKind.BOOL: 20>, 1, 'b', '=') is not yet supported.
Issue Description
I am currently working on implementing the dataframe interchange protocol for pyarrow.Table in the Apache Arrow project (https://github.com/apache/arrow/pull/14613). I am using the pandas implementation to test that the produced __dataframe__ object can be correctly consumed.
When consuming a pyarrow.Table with missing values I get a NotImplementedError: the bitmasks used by PyArrow to represent nulls in a given column cannot be converted.
But if I look at the code in from_dataframe.py (https://github.com/pandas-dev/pandas/blob/70121c75a0e2a42e31746b6c205c7bb9e4b9b930/pandas/core/interchange/from_dataframe.py#L405-L415), I would think this is not intentional and that the _NP_DTYPES mapping should include {1: bool}:
https://github.com/pandas-dev/pandas/blob/70121c75a0e2a42e31746b6c205c7bb9e4b9b930/pandas/core/interchange/from_dataframe.py#L393
https://github.com/pandas-dev/pandas/blob/70121c75a0e2a42e31746b6c205c7bb9e4b9b930/pandas/core/interchange/from_dataframe.py#L23-L28
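For illustration, a sketch of what that suggestion might look like (this paraphrases the linked mapping rather than quoting the pandas source exactly, so the details may differ):

from typing import Any

import numpy as np

from pandas.core.interchange.dataframe_protocol import DtypeKind

# Sketch only: _NP_DTYPES maps a DtypeKind plus a bit width to a numpy dtype.
_NP_DTYPES: dict[DtypeKind, dict[int, Any]] = {
    DtypeKind.INT: {8: np.int8, 16: np.int16, 32: np.int32, 64: np.int64},
    DtypeKind.UINT: {8: np.uint8, 16: np.uint16, 32: np.uint32, 64: np.uint64},
    DtypeKind.FLOAT: {32: np.float32, 64: np.float64},
    # Suggested addition: also accept the 1-bit boolean used for validity
    # bitmasks, next to the existing 8-bit (one byte per value) boolean.
    DtypeKind.BOOL: {1: bool, 8: bool},
}

As the comments below show, this mapping change alone is not sufficient, because numpy has no 1-bit dtype to view the bit-packed buffer with.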
Expected Behavior
The bitmask can be converted to an ndarray by the current pandas implementation of the dataframe interchange protocol, so that the code below also works when the column contains missing values:
>>> import pyarrow as pa
>>> table = pa.table({"a": [1, 2, 3, 4]})
>>> exchange_df = table.__dataframe__()
>>> exchange_df._df
pyarrow.Table
a: int64
----
a: [[1,2,3,4]]
>>> from pandas.core.interchange.from_dataframe import from_dataframe
>>> from_dataframe(exchange_df)
a
0 1
1 2
2 3
3 4
Installed Versions
Comment From: AlenkaF
cc @jorisvandenbossche @mroeschke
Comment From: mroeschke
Thanks for the report @AlenkaF!
As an aside related to converting a pa.Table to a pd.DataFrame, it would be very interesting to keep the pyarrow types in the resulting pandas DataFrame, since pandas has a built-in pd.ArrowExtensionArray. Of course, more would need to be refactored on the pandas exchange side to construct a pd.DataFrame using pd.ArrowExtensionArray instead of numpy arrays.
Comment From: jorisvandenbossche
it would be very interesting to keep the pyarrow types in a pandas DataFrame
I think that is something that is fully controlled on the pandas side? PyArrow will just expose the buffers, and it's up to pandas to decide how to reassemble those into arrays (except for the boolean dtype, where the spec currently requires a byte array, so that is not possible zero-copy for pyarrow. But that is maybe also something to discuss on the Data APIs side, to allow bit-packed booleans in the spec?)
Comment From: mroeschke
I think that is something that is fully controlled on the pandas side?
Yeah, agreed. I guess during the consumption of the interchange object by pandas there could be a "mode" that can be configured to say whether to consume the buffers as arrow objects or as numpy objects.
Comment From: MarcoGorelli
I would think this is not intentional and that the _NP_DTYPES should include {1: bool}
@AlenkaF I tried this, but then I get:
In [1]: import pyarrow as pa
...: table = pa.table({"a": [1, 2, 3, None]})
...:
...: exchange_df = table.__dataframe__()
...:
...: from pandas.core.interchange.from_dataframe import from_dataframe
...: from_dataframe(exchange_df)
Out[1]:
a
0 1.0
1 NaN
2 NaN
3 NaN
which doesn't look right. Do you know what else would need changing?
Comment From: AlenkaF
Hm, the first thing I would look at is this line: https://github.com/pandas-dev/pandas/blob/70121c75a0e2a42e31746b6c205c7bb9e4b9b930/pandas/core/interchange/from_dataframe.py#L407
and just print out whether the numpy array is actually what we expect. And if not, maybe the shape needs to be corrected (does it still need a // 8 maybe? Just guessing though).
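For context on the // 8 idea, a quick note on how Arrow-style validity bitmaps are sized (this is the general Arrow layout, not pandas-specific):

# A validity bitmap packs 8 values per byte, so its byte size is
# ceil(length / 8), i.e. (length + 7) // 8 -- not length // 8.
length = 4                         # number of values in the example column
bitmap_nbytes = (length + 7) // 8  # -> 1 byte, matching buffer.bufsize below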
Comment From: MarcoGorelli
thanks for taking a look! buffer.bufsize here is 1, so // 8 would make it 0. Here, arr is just
(Pdb) arr
array([ True])
Comment From: jorisvandenbossche
This needs some custom code, since numpy only supports arrays whose elements are whole bytes, not bits (the minimum element width is 1 byte). So you can't just view this bitmap as a numpy array without some custom processing.
We do have that processing in our arrow_utils (for converting pyarrow arrays to masked arrays); however, that assumes pyarrow is installed, so it would make accepting bitmasks in the dataframe interchange protocol dependent on pyarrow (which is probably fine, though).
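As an illustration of such custom processing, a minimal pure-numpy sketch (assuming the bitmap bytes can be copied and the column has no bit offset; Arrow bitmaps are LSB-first, which is what bitorder="little" expects):

import numpy as np

def bitmap_to_bool_ndarray(bitmap_bytes: bytes, length: int) -> np.ndarray:
    # Unpack the LSB-first bitmap into one value per bit, then keep the
    # first `length` bits as booleans.
    bits = np.unpackbits(np.frombuffer(bitmap_bytes, dtype=np.uint8), bitorder="little")
    return bits[:length].astype(bool)

# 0b00001011 -> values 0, 1 and 3 are valid, value 2 is null
bitmap_to_bool_ndarray(bytes([0b00001011]), 4)  # array([ True,  True, False,  True])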
Comment From: MarcoGorelli
is that pyarrow_array_to_numpy_and_mask? what would you pass to it? (sorry, I'm new to this - I'll take a look, but it may take me some time)
Comment From: jorisvandenbossche
Yes, that's indeed pyarrow_array_to_numpy_and_mask. Now, you can't use it directly here, but it contains some of the code needed for this (so either copy what you need, or refactor to reuse the shared bit). What that function does is get the buffers of the pyarrow array (arr.buffers()) and then convert those buffers to numpy arrays, while for this use case we get the buffers from the dataframe interchange protocol object.
https://github.com/pandas-dev/pandas/blob/961f9c4d7899121669c4832512c5903db573bb47/pandas/core/arrays/arrow/_arrow_utils.py#L50-L66
I am not fully sure to what extent the comment about padding and offset applies here as well.
The actual conversion of the bitmap buffer to a numpy boolean array is this part:
https://github.com/pandas-dev/pandas/blob/961f9c4d7899121669c4832512c5903db573bb47/pandas/core/arrays/arrow/_arrow_utils.py#L59-L63
So what this does is convert the bitmap back into a pyarrow boolean array (this can be done zero-copy, because a boolean array also uses bits for its values, just like the validity bitmap of another array), and then use pyarrow's own implementation to convert that array (using bits) to a numpy array (using bytes).
(In theory we could write our own code in cython/c to convert a buffer of bits to bytes, but personally I wouldn't worry about that.)
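To make that concrete, a small sketch of the same trick on a standalone pyarrow array (illustrative only; in the interchange code the buffer would come from the protocol object rather than from arr.buffers()):

import numpy as np
import pyarrow as pa

arr = pa.array([1, 2, None, 4])
validity_buf = arr.buffers()[0]  # the bit-packed validity bitmap

# Zero-copy: reinterpret the bitmap as a pyarrow boolean array ...
bool_arr = pa.BooleanArray.from_buffers(
    pa.bool_(), len(arr), [None, validity_buf], offset=arr.offset
)
# ... and let pyarrow expand bits to bytes when converting to numpy.
mask = np.asarray(bool_arr)  # array([ True,  True, False,  True])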
Comment From: AlenkaF
Would it be an option to just check for pyarrow arrays in set_nulls (and similarly in string_column_to_ndarray):
https://github.com/pandas-dev/pandas/blob/70121c75a0e2a42e31746b6c205c7bb9e4b9b930/pandas/core/interchange/from_dataframe.py#L504
and use pyarrow_array_to_numpy_and_mask instead of buffer_to_ndarray?
Comment From: MarcoGorelli
thanks @jorisvandenbossche!
I was trying that, but I get:
(Pdb) pyarrow.BooleanArray.from_buffers(pyarrow.bool_(), length, [None, buffer], offset=offset)
*** TypeError: Cannot convert _PyArrowBuffer to pyarrow.lib.Buffer
Comment From: AlenkaF
Would it be an option to just check for pyarrow arrays in set_nulls (and similarly in string_column_to_ndarray):
https://github.com/pandas-dev/pandas/blob/70121c75a0e2a42e31746b6c205c7bb9e4b9b930/pandas/core/interchange/from_dataframe.py#L504
and use pyarrow_array_to_numpy_and_mask instead of buffer_to_ndarray?

Ignore my comment please - I thought it would be a good idea, but it would not work for a general dataframe library that uses bitmasks.
Comment From: jorisvandenbossche
Would it be an option to just check for pyarrow arrays in set_nulls (and similarly in string_column_to_ndarray):

In theory, the code for the dataframe interchange protocol doesn't know that the protocol object is backed by pyarrow? (Without checking that obj._col is a pyarrow.Array, which would be relying on an implementation detail.)
Comment From: AlenkaF
thanks @jorisvandenbossche!
I was trying that, but I get:
(Pdb) pyarrow.BooleanArray.from_buffers(pyarrow.bool_(), length, [None, buffer], offset=offset)
*** TypeError: Cannot convert _PyArrowBuffer to pyarrow.lib.Buffer

I think _PyArrowBuffer is a dataframe protocol object and pyarrow doesn't recognise it.
Comment From: jorisvandenbossche
I was trying that, but I get:
(Pdb) pyarrow.BooleanArray.from_buffers(pyarrow.bool_(), length, [None, buffer], offset=offset)
*** TypeError: Cannot convert _PyArrowBuffer to pyarrow.lib.Buffer

Yeah, the _PyArrowBuffer implements the protocol interface, but that is not a generic "buffer-like" object. But I suppose you can use pa.foreign_buffer(..) to create a pyarrow.Buffer object from the protocol's buffer (specifying the pointer and size).
Comment From: MarcoGorelli
thanks! yeah, if I use
arr = pa.BooleanArray.from_buffers(
    pa.bool_(),
    length,
    [None, pa.foreign_buffer(buffer.ptr, length)],
    offset=offset,
)
return np.asarray(arr)
then it all seems to work as expected
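Putting the pieces from this thread together, a hedged sketch of how such a helper could look (names are illustrative, not the final pandas implementation; the buffer is assumed to expose ptr and bufsize as in the interchange protocol's Buffer interface):

import numpy as np
import pyarrow as pa

def bit_buffer_to_bool_ndarray(buffer, length: int, offset: int = 0) -> np.ndarray:
    # Wrap the raw pointer in a pyarrow Buffer, keeping `buffer` alive via `base`
    # and using the buffer's own byte size rather than the element count.
    pa_buffer = pa.foreign_buffer(buffer.ptr, buffer.bufsize, base=buffer)
    # View the bit-packed buffer as a boolean array (zero-copy) ...
    bool_arr = pa.BooleanArray.from_buffers(
        pa.bool_(), length, [None, pa_buffer], offset=offset
    )
    # ... and let pyarrow expand bits to bytes for numpy.
    return np.asarray(bool_arr)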