Pandas version checks

  • [X] I have checked that this issue has not already been reported.

  • [X] I have confirmed this bug exists on the latest version of pandas.

  • [ ] I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import numpy as np
import pandas as pd

df_full = pd.DataFrame({
    'a': np.array([5.2, 3.3, 7.9], dtype='float64'), 
    'b': np.array([2342, 324234283, 2222394], dtype='timedelta64[ns]'),
    'c': np.array([2342, 324234283, 2222394], dtype='int64')})

df_empty = pd.DataFrame({
    'a': np.array([], dtype='float64'), 
    'b': np.array([], dtype='timedelta64[ns]'),
    'c': np.array([], dtype='int64')})

# If the DataFrame has data, the dtypes are preserved and results are identical to .apply(np.func)
print(df_full.apply(np.mean))
print(df_full.mean())
print(df_full.apply(np.sum))
print(df_full.sum())

print('#' * 40)

# If the DataFrame is empty, mean and sum behave differently:
# mean:
print(df_empty.apply(np.mean))  # all values in the series are float64
print(df_empty.mean())          # values have same dtypes as input
# sum:
print(df_empty.apply(np.sum))  # behaves similar to .apply(np.mean), converting all values to float64
print(df_empty.sum())          # TypeError: Cannot cast TimedeltaArray to dtype float64

Issue Description

There are two related issues:

Summing over a DataFrame with an empty column of dtype timedelta64[ns] raises a TypeError. Summing over an empty Series of this dtype works fine and returns a timedelta of 0, as expected.
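The Series-level behavior referred to here can be checked directly (a minimal sketch, independent of the example frames above):

```python
import numpy as np
import pandas as pd

# Reducing a single empty timedelta64[ns] column via Series.sum works:
# the sum of an empty timedelta Series is a zero timedelta.
s = pd.Series(np.array([], dtype='timedelta64[ns]'))
print(s.sum())  # Timedelta('0 days 00:00:00')
```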

The second issue is that df.mean() and df.sum() behave differently with respect to the output dtype. mean keeps the dtypes of the input, resulting in a Series of dtype object, whereas sum converts the result values to float64.

mean and sum are only examples; the same argument applies to other functions such as median, std, etc.

Expected Behavior

For the first issue, I'd expect the same behavior as for a Series, i.e. a timedelta of 0 for the empty timedelta column.
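Until the DataFrame path behaves this way, one possible workaround (a sketch, using the df_empty frame from the example above) is to reduce column by column through the Series implementation, which already returns the expected identity values for empty input:

```python
import numpy as np
import pandas as pd

df_empty = pd.DataFrame({
    'a': np.array([], dtype='float64'),
    'b': np.array([], dtype='timedelta64[ns]'),
    'c': np.array([], dtype='int64')})

# Reduce each column with Series.sum, sidestepping the DataFrame
# reduction path. Empty sums return each dtype's identity element:
# 0.0 for float64, Timedelta(0) for timedelta64[ns], 0 for int64.
result = pd.Series({col: df_empty[col].sum() for col in df_empty.columns})
print(result)
```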

For the second issue, I'd expect the outputs of DataFrame aggregation functions to follow a consistent dtype strategy. In my opinion, the behavior of mean is correct, i.e. keeping the input dtypes.

Removing the if-block

if axis == 0 and len(self) == 0 and name in ["sum", "prod"]:
    # Even if we are object dtype, follow numpy and return
    #  float64, see test_apply_funcs_over_empty
    out = out.astype(np.float64)

from DataFrame._reduce() fixes the TypeError and makes sum consistent with mean. On the other hand, the test_apply_funcs_over_empty test then fails, because the results are no longer consistent with .apply(np.func).
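For context, the float64 result that test expects matches plain numpy behavior: reducing an empty array with np.sum yields the float64 identity, which is what .apply(np.sum) propagates for empty columns.

```python
import numpy as np

# numpy's sum over an empty array returns 0.0 with dtype float64,
# regardless of the (absent) element values.
total = np.sum(np.array([]))
print(total)        # 0.0
print(total.dtype)  # float64
```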

As mentioned, I'd prefer the behavior of mean, but that decision is probably for someone with better insight into the implementation and API design. Even if the conversion to float64 is intentional, it should be consistent across aggregation functions, i.e. applied not only to sum but also to mean, which currently preserves the dtype.

Installed Versions

INSTALLED VERSIONS
------------------
commit            : 8dab54d6573f7186ff0c3b6364d5e4dd635ff3e7
python            : 3.11.0.final.0
python-bits       : 64
OS                : Windows
OS-release        : 10
Version           : 10.0.22621
machine           : AMD64
processor         : Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
byteorder         : little
LC_ALL            : None
LANG              : en_US.UTF-8
LOCALE            : de_DE.cp1252

pandas            : 1.5.2
numpy             : 1.24.1
pytz              : 2022.7
dateutil          : 2.8.2
setuptools        : 65.5.0
pip               : 22.3.1
Cython            : None
pytest            : None
hypothesis        : None
sphinx            : None
blosc             : None
feather           : None
xlsxwriter        : None
lxml.etree        : None
html5lib          : None
pymysql           : None
psycopg2          : None
jinja2            : None
IPython           : None
pandas_datareader : None
bs4               : None
bottleneck        : None
brotli            : None
fastparquet       : None
fsspec            : None
gcsfs             : None
matplotlib        : None
numba             : None
numexpr           : None
odfpy             : None
openpyxl          : None
pandas_gbq        : None
pyarrow           : None
pyreadstat        : None
pyxlsb            : None
s3fs              : None
scipy             : None
snappy            : None
sqlalchemy        : None
tables            : None
tabulate          : None
xarray            : None
xlrd              : None
xlwt              : None
zstandard         : None
tzdata            : None

Comment From: rhshadrach

Thanks for the report - there are many reports for inconsistencies with empty frames in this regard. See in particular #47959 and also the issues linked in that one.

In general, I think users should not expect df.mean() and df.agg(lambda x: np.mean(x)) to act the same. The former is implemented in pandas, and that code knows how the function behaves. The latter is a black box where pandas has no idea what it's supposed to do. So with the former, given an empty input, pandas can return the "right" answer (even if we sometimes don't in the current state), whereas with the latter we have to guess at the result. That isn't to say the current state can't be improved, but users can't expect these to act the same.

print(df_empty.mean()) # values have same dtypes as input

Can you explain what this means? I get

a    NaN
b    NaT
c    NaN
dtype: object

which is object because no dtype can hold both NaN and NaT except object. If I remove the timedelta column, I get
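To illustrate the dtype point (a sketch, not part of the original comment): with only numeric columns, every per-column mean is a float NaN, so the result Series can stay float64 instead of falling back to object.

```python
import numpy as np
import pandas as pd

# Empty frame with only numeric columns: no NaT appears in the result,
# so df.mean() returns a float64 Series of NaNs.
df = pd.DataFrame({
    'a': np.array([], dtype='float64'),
    'c': np.array([], dtype='int64')})

out = df.mean()
print(out.dtype)  # float64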

a   NaN
c   NaN
dtype: float64

In general, mean for numeric dtypes should always return floats, and that is what I'm currently seeing.

Comment From: rhshadrach

From the OP

# If the DataFrame is empty, mean and sum behave differently:
# mean:
print(df_empty.apply(np.mean))  # all values in the series are float64
print(df_empty.mean())          # values have same dtypes as input
# sum:
print(df_empty.apply(np.sum))  # behaves similar to .apply(np.mean), converting all values to float64
print(df_empty.sum())          # TypeError: Cannot cast TimedeltaArray to dtype float64

I think prints 1 and 3 are part of #47959 and print 2 looks correct. print 4 appears to be a bug, and hasn't been reported elsewhere. So I think successfully closing this issue can be done by resolving this bug. @JBamberger let me know if I missed something or you disagree with any of the assessments.

Comment From: JBamberger

Thank you for the additional information and for taking the time to look at this issue. I agree with your assessment.