Pandas version checks

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.

  • [ ] I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd

df = pd.DataFrame(
    {
        "ts": pd.to_datetime(
            ["2021-01-01 00:00:00", "2021-01-01 00:00:01", "2021-01-01 00:00:02"]
        ).astype("datetime64[s]"),
    }
)

df.to_parquet("/tmp/df.parquet")
df2 = pd.read_parquet("/tmp/df.parquet")

Issue Description

Then

df.dtypes

gives

ts    datetime64[s]
dtype: object

but

df2.dtypes

gives

ts    datetime64[ms]
dtype: object
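
For what it's worth, the unit appears to change at write time rather than read time: inspecting the on-disk schema (assuming the default pyarrow engine) already shows a millisecond timestamp:

import pyarrow.parquet as pq

# the column appears as timestamp[ms] in the file itself, so the unit
# is changed when writing, not when reading back
print(pq.read_schema("/tmp/df.parquet"))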

Expected Behavior

I would expect

df2.dtypes

to give

ts    datetime64[s]
dtype: object

Installed Versions

INSTALLED VERSIONS
------------------
commit                : 0691c5cf90477d3503834d983f69350f250a6ff7
python                : 3.11.10
python-bits           : 64
OS                    : Darwin
OS-release            : 24.2.0
Version               : Darwin Kernel Version 24.2.0: Fri Dec  6 19:02:41 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6030
machine               : arm64
processor             : arm
byteorder             : little
LC_ALL                : en_US.UTF-8
LANG                  : en_US.UTF-8
LOCALE                : en_US.UTF-8

pandas                : 2.2.3
numpy                 : 1.24.4
pytz                  : 2024.2
dateutil              : 2.9.0.post0
pip                   : 24.2
Cython                : 3.0.11
sphinx                : None
IPython               : 8.31.0
adbc-driver-postgresql: None
adbc-driver-sqlite    : None
bs4                   : 4.12.3
blosc                 : None
bottleneck            : 1.3.7
dataframe-api-compat  : None
fastparquet           : None
fsspec                : 2023.12.2
html5lib              : 1.1
hypothesis            : 6.123.17
gcsfs                 : None
jinja2                : 3.1.5
lxml.etree            : 5.3.0
matplotlib            : 3.8.4
numba                 : 0.60.0
numexpr               : 2.10.2
odfpy                 : None
openpyxl              : 3.1.5
pandas_gbq            : None
psycopg2              : 2.9.10
pymysql               : None
pyarrow               : 16.1.0
pyreadstat            : None
pytest                : 7.4.4
python-calamine       : None
pyxlsb                : None
s3fs                  : 2023.12.2
scipy                 : 1.13.0
sqlalchemy            : 2.0.36
tables                : 3.9.2
tabulate              : 0.9.0
xarray                : 2025.1.1
xlrd                  : 2.0.1
xlsxwriter            : 3.2.0
zstandard             : 0.23.0
tzdata                : 2024.2
qtpy                  : None
pyqt5                 : None

Comment From: rhshadrach

Thanks for the report! Further investigations and PRs to fix are welcome.

Comment From: snitish

This seems to be a known issue in pyarrow. TL;DR: Arrow internally uses millis, micros, and nanos as units. We could always re-cast to the right unit downstream in pd.read_parquet using the pandas metadata, but it's up for debate whether this fix should be made here or on the pyarrow side.
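
A minimal sketch of that downstream re-cast idea, assuming the file carries the b"pandas" metadata that pyarrow writes (read_parquet_restoring_units is an illustrative helper, not an existing API):

import pyarrow.parquet as pq

def read_parquet_restoring_units(path):
    # illustrative only: restore numpy datetime units from the pandas
    # metadata stored alongside the arrow schema
    table = pq.read_table(path)
    df = table.to_pandas()
    meta = table.schema.pandas_metadata  # None if no pandas metadata is present
    if meta is not None:
        for col in meta.get("columns", []):
            name, numpy_type = col.get("name"), col.get("numpy_type")
            if name in df.columns and numpy_type and numpy_type.startswith("datetime64"):
                df[name] = df[name].astype(numpy_type)
    return df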

Comment From: rhshadrach

Not too familiar with datetime64 - is it true that there can be a loss in precision, i.e. that second resolution can represent a wider span of datetimes than microsecond resolution can? If that is the case, I would stay away from casting here.
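
As a quick sanity check: both resolutions are backed by int64 ticks, so second resolution spans roughly ±2.9e11 years while millisecond resolution spans roughly ±2.9e8 years, and numpy's unit casts are not overflow-checked (at least in current numpy):

import numpy as np

# a timestamp representable at second resolution but far outside the
# millisecond range; the cast overflows silently and wraps around
big = np.datetime64(2**62, "s")
print(big)                           # a year around 1.46e11
print(big.astype("datetime64[ms]"))  # garbage due to int64 overflow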

Comment From: Anurag-Varma

@rhshadrach From the pull request for the pyarrow issue mentioned by @snitish:

The merged PR shows that pyarrow supports milli-, micro-, and nanosecond units.

If seconds are given, the value is converted to milliseconds, which is the behavior described in that issue.

I tested the same with the code below from @bmwilly:

import pandas as pd

df = pd.DataFrame(
    {
        "ts": pd.to_datetime(
            ["2021-01-01 00:00:00", "2021-01-01 00:00:01", "2021-01-01 00:00:02"]
        ).astype("datetime64[s]"),
    }
)

df.to_parquet("/tmp/df.parquet")
df2 = pd.read_parquet("/tmp/df.parquet")

The only issue is with s being converted to ms.

If ms, us, or ns is used, then df.dtypes and df2.dtypes return the same dtype.
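
For completeness, a small loop version of that round-trip check (same repro as above, just iterating over the units):

import pandas as pd

# round-trip each resolution through parquet and compare dtypes;
# only datetime64[s] comes back different (as ms)
for unit in ["s", "ms", "us", "ns"]:
    df = pd.DataFrame({"ts": pd.to_datetime(["2021-01-01"]).astype(f"datetime64[{unit}]")})
    df.to_parquet("/tmp/df.parquet")
    df2 = pd.read_parquet("/tmp/df.parquet")
    print(unit, "->", df2.dtypes["ts"])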

Comment From: rhshadrach

Thanks @Anurag-Varma. My guess is that if you're using s resolution, you're doing it to support a wide range of datetimes (e.g. in astronomy). In that case, making it apparent to users that parquet values are being written to disk in ms resolution could be better behavior than silently casting. That way there is at least a clue as to what is happening, rather than a silent loss of precision on certain values.
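
To make that concrete, a user-side sketch of what such a warning could look like (to_parquet_with_unit_warning is a hypothetical helper, not a pandas API):

import warnings

import pandas as pd

def to_parquet_with_unit_warning(df, path, **kwargs):
    # hypothetical helper: warn when a second-resolution column is about
    # to be stored at millisecond resolution by the parquet writer
    for col, dtype in df.dtypes.items():
        if str(dtype) == "datetime64[s]":
            warnings.warn(
                f"column {col!r} uses second resolution; parquet will store it as milliseconds"
            )
    df.to_parquet(path, **kwargs)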

Comment From: alherrera-cs

Hi, I'd like to work on this issue. I noticed that when a DataFrame with a datetime64[s] column is written to and then read from a parquet file, it comes back as datetime64[ms], which silently changes the resolution (and, as noted above, narrows the representable range of datetimes). I believe this behavior could be improved by either preserving the original resolution or at least issuing a clear warning to users. I'm interested in investigating whether a solution can be implemented in pandas or whether this should be addressed on the pyarrow side.

Please let me know if I can be assigned to work on this, or if there are any guidelines I should follow before proceeding. Thanks!