Pandas version checks

  • [X] I have checked that this issue has not already been reported.

  • [X] I have confirmed this bug exists on the latest version of pandas.

  • [X] I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd

if __name__ == "__main__":
    df = pd.DataFrame(
        [
            ["2023-09-29 02:55:54"],
            ["2023-09-29 02:56:03"],
        ],
        columns=["timestamp"],
        dtype="datetime64[ms]",
    )

    serialized = df.to_json()
    print(serialized)
    # Got: {"timestamp":{"0":1695956,"1":1695956}}
    # Should be: {"timestamp":{"0":1695956154000,"1":1695956163000}}
    deserialized = pd.read_json(serialized, convert_dates=["timestamp"])
    print(pd.to_datetime(deserialized["timestamp"], unit="ms"))
    # Got:
    # 0   1970-01-01 00:28:15.956
    # 1   1970-01-01 00:28:15.956
    # Instead of:
    # 0   2023-09-29 02:55:54
    # 1   2023-09-29 02:56:03

Issue Description

When a DataFrame contains a column whose dtype is datetime64[ms] and it is converted to JSON with df.to_json(), the serialized values are wrong. In the example above, the output is 1695956 instead of 1695956154000, i.e. roughly a factor of 10^6 too small, as if the stored millisecond integers were being treated as nanoseconds. Converting the JSON back therefore yields timestamps near the epoch (1970-01-01) instead of the original dates. See the example above.

Expected Behavior

The JSON values for the timestamp column should be the epoch values corresponding to the given date strings, so that the correct date/time can be restored. It works when using datetime64[ns], as shown below.
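
For comparison, a minimal sketch of the same frame with the default nanosecond dtype; the serialized values match the expected output noted in the reproducer:

import pandas as pd

df = pd.DataFrame(
    [
        ["2023-09-29 02:55:54"],
        ["2023-09-29 02:56:03"],
    ],
    columns=["timestamp"],
    dtype="datetime64[ns]",
)
print(df.to_json())
# {"timestamp":{"0":1695956154000,"1":1695956163000}}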

Installed Versions

INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.12.1.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-150-generic
Version : #167~18.04.1-Ubuntu SMP Wed May 24 00:51:42 UTC 2023
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.1
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.8.2
setuptools : 69.1.1
pip : 23.3.1
Cython : None
pytest : 8.0.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.9.9
jinja2 : 3.1.3
IPython : None
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 15.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.27
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : 0.22.0
tzdata : 2024.1
qtpy : None
pyqt5 : None

Comment From: jbrockmendel

xref #55827

Comment From: Ricardus312

I encountered this same problem with the to_datetime64() function when upgrading to version 2.2.

According to the documentation, up to version 2.0.3 the pandas.Timestamp.to_datetime64 function returned "a numpy.datetime64 object with 'ns' precision". Since version 2.1.4 that function returns "a numpy.datetime64 object with same precision".

This change has broken critical parts of my code that assumed conversion with nanosecond precision; now, depending on the precision of the argument passed, the behavior of the function is unpredictable.
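
To illustrate, a minimal sketch (the version boundaries follow the documentation quoted above):

import pandas as pd

# A non-nanosecond Timestamp; as_unit() sets its resolution explicitly.
ts = pd.Timestamp("2023-09-29 02:55:54").as_unit("ms")

print(ts.to_datetime64().dtype)
# documented behavior through 2.0.3: datetime64[ns]
# documented behavior since 2.1.4:  datetime64[ms] (the Timestamp's own unit)

# Code that relied on nanosecond output can convert explicitly first:
print(ts.as_unit("ns").to_datetime64().dtype)  # datetime64[ns]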

I recommend downgrading to an older version of pandas to resolve it.

Comment From: lithomas1

If you specify date_format="iso" in the to_json call, the round-tripping happens successfully.

(Note: the dtype will still be datetime64[ns] for values that are in range for datetime64[ns]. I don't know whether we should be reading values that are in-bounds for datetime64[ns] as non-nano.)
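
For the in-range case, a minimal sketch of that workaround using the frame from the original reproducer:

import io
import pandas as pd

df = pd.DataFrame(
    [
        ["2023-09-29 02:55:54"],
        ["2023-09-29 02:56:03"],
    ],
    columns=["timestamp"],
    dtype="datetime64[ms]",
)
serialized = df.to_json(date_format="iso")
print(serialized)
# {"timestamp":{"0":"2023-09-29T02:55:54.000","1":"2023-09-29T02:56:03.000"}}
deserialized = pd.read_json(io.StringIO(serialized), convert_dates=["timestamp"])
print(deserialized["timestamp"].dtype)
# datetime64[ns] -- the values round-trip, but the ms unit is not preserved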

It looks like we can do better at preserving the dtype for non-nano datetimes, though.

Running

import io

import pandas as pd

if __name__ == "__main__":
    df = pd.DataFrame(
        [
            ["1000-09-29 02:55:54"],
            ["1000-09-29 02:56:03"],
        ],
        columns=["timestamp"],
        dtype="datetime64[ms]",
    )
    serialized = df.to_json(date_format="iso")
    print(serialized)
    deserialized = pd.read_json(io.StringIO(serialized), convert_dates=["timestamp"])
    print(deserialized)
    print(deserialized.dtypes)

I get object dtype for the deserialized JSON timestamp column.

Output

{"timestamp":{"0":"1000-09-29T02:55:54.000","1":"1000-09-29T02:56:03.000"}}
                 timestamp
0  1000-09-29T02:55:54.000
1  1000-09-29T02:56:03.000
timestamp    object
dtype: object

Comment From: Tikonderoga

Adding date_unit='ns' to the to_json call makes serialization work properly, even if the index or column dtype is still datetime64[ms].
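
A minimal sketch of that workaround against the original reproducer, per this comment's report:

import io
import pandas as pd

df = pd.DataFrame(
    [
        ["2023-09-29 02:55:54"],
        ["2023-09-29 02:56:03"],
    ],
    columns=["timestamp"],
    dtype="datetime64[ms]",
)
serialized = df.to_json(date_unit="ns")  # serialize epoch values in nanoseconds
print(serialized)
deserialized = pd.read_json(io.StringIO(serialized), convert_dates=["timestamp"])
print(deserialized["timestamp"])
# per this comment, the original 2023-09-29 02:55:54 / 02:56:03 values are restored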