Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [X] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import numpy as np
import pandas as pd
print(np.__version__) # 2.0.2
print(pd.__version__) # 2.2.3
data = pd.Series([333, 555])
# accessing scalar via __getitem__ returns <class 'numpy.int64'>
print(type(data[0]))
# accessing scalar via iteration returns <class 'int'>
print(type(next(iter(data))))
Issue Description
numpy 2.0 recently changed its representation of scalars to include type information. However, pandas produces inconsistent return types when accessing scalars with __getitem__ vs. iterating over items, as demonstrated in the example code snippet above.
This inconsistency shows up in downstream projects such as NetworkX: https://github.com/networkx/networkx/issues/7763#issuecomment-2532716537
Expected Behavior
pandas should produce consistent return types when accessing scalars with __getitem__ vs. iterating over items.
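One way to phrase this expectation as a quick check (an illustrative sketch only, not a proposed test; under NumPy 2.x it currently fails, which is the bug being reported):

import pandas as pd

data = pd.Series([333, 555])

# The requested consistency: both access paths should yield the same
# scalar type, whichever type pandas standardizes on.
assert type(data[0]) is type(next(iter(data)))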
Installed Versions
Comment From: gboeing
#59838 is loosely related.
Comment From: rhshadrach
Thanks for the report; this is somewhat related to https://github.com/pandas-dev/pandas/issues/25969.
I also thought we had an issue on whether to return Python or NumPy scalars across the API, but I can't seem to find it. I'm not sure whether there have been any prior discussions on this, cc @pandas-dev/pandas-core.
I'll also add that there is inconsistency in reductions. On NumPy dtypes (e.g. int64):

NumPy scalars:
- sum
- mean
- prod
- quantile
- sem
- count

Python scalars:
- median
- min
- max
- kurt
- skew
- nunique

On PyArrow dtypes (e.g. int64[pyarrow]), these are all Python scalars with the exception of count, which gives a NumPy scalar.
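A small diagnostic sketch (not part of the original report) that prints the scalar type each of these reductions returns, so the split above can be checked on any installation; which side each reduction falls on may vary by pandas/NumPy version. Swapping the dtype for "int64[pyarrow]" (with pyarrow installed) covers the PyArrow case.

import pandas as pd

s = pd.Series([1, 2, 3, 4, 5], dtype="int64")

# Print the scalar type returned by each reduction.
for name in ["sum", "mean", "prod", "quantile", "sem", "count",
             "median", "min", "max", "kurt", "skew", "nunique"]:
    result = getattr(s, name)()
    print(f"{name:<10} -> {type(result)}")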
Comment From: gboeing
Thanks @rhshadrach. It's been a couple of months, so I just wanted to check back on this and see whether any further information is needed.
Comment From: Dr-Irv
This is a pretty old feature. Here's the release note about it in v0.21.0, from over 7 years ago:
https://pandas.pydata.org/docs/whatsnew/v0.21.0.html#iteration-of-series-index-will-now-return-python-scalars
With numpy 1.26.4, we get the following with the example code:
>>> import numpy as np
>>> import pandas as pd
>>>
>>> print(np.__version__) # 2.0.2
1.26.4
>>> print(pd.__version__) # 2.2.3
2.2.2
>>>
>>> data = pd.Series([333, 555])
>>>
>>> # accessing scalar via __getitem__ returns <class 'numpy.int64'>
>>> print(type(data[0]))
<class 'numpy.int64'>
>>>
>>> # accessing scalar via iteration returns <class 'int'>
>>> print(type(next(iter(data))))
<class 'int'>
So with older versions of numpy, our implementation of iter() matched theirs - we returned a Python scalar. Now it's a question of whether we go back. Definitely needs discussion.
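For reference, a minimal illustration of the Python-vs-NumPy scalar distinction at the NumPy level (not pandas internals, just the underlying conversion):

import numpy as np

arr = np.array([333, 555])

# Indexing a NumPy array yields a NumPy scalar ...
print(type(arr[0]))         # <class 'numpy.int64'>

# ... while .item() unboxes it to the native Python type, which is the
# kind of scalar Series iteration has returned since v0.21.0.
print(type(arr[0].item()))  # <class 'int'>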
Comment From: rhshadrach
> our implementation of iter() matched theirs - we returned a Python scalar. Now it's a question of whether we go back. Definitely needs discussion.
I would prefer it if all pandas operations that return scalars returned Python objects rather than NumPy objects. This is especially true if we want to make migration to arrow-backed dtypes easier.
Comment From: rhshadrach
cc @pandas-dev/pandas-core