Pandas version checks

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.

  • [ ] I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import re
import uuid
import numpy as np
import pandas as pd

## Generate example DataFrame
t = pd.date_range(start='2023-01-01 00:00', periods=10, freq='10min')
x = np.random.randn(t.size)
y = np.random.randn(t.size)
temp = np.random.randn(t.size)
df = pd.DataFrame({
    'Timestamp': t,
    'X position (m)': x,
    'Y position (m)': y,
    'Temperature (degC)': temp,
})
df = pd.concat([
    pd.DataFrame(
        [dict(
            zip(list(df.columns),
            ['SignalId'] + [str(uuid.uuid4()) for i in range(df.columns.size - 1)]
        ))]
    ),
    df], ignore_index=True
)
df = df.set_index('Timestamp')

## Change column names in place (via the Index's underlying values)
for i, c in enumerate(list(df.columns)):
    newc = re.sub(r'\s+position\s+', ' ', c)
    df.columns.values[i] = newc

## Printing DataFrame to screen may generate a segmentation fault
df

Issue Description

When a column name of a DataFrame is changed in place (by writing into the underlying values of the columns Index), it sometimes leads to a segmentation fault. This seems more likely if the DataFrame contains mixed element types (as in the example below).

Hypotheses are:

  • The change in the name leads to corruption of the data in memory.

  • NumPy version >2 leads to different data types that may conflict somehow with some operations.

Example:

>>> import re
>>> import uuid
>>> import numpy as np
>>> import pandas as pd
>>> 
>>> t = pd.date_range(start='2023-01-01 00:00', periods=10, freq='10min')
>>> x = np.random.randn(t.size)
>>> y = np.random.randn(t.size)
>>> temp = np.random.randn(t.size)
>>> df = pd.DataFrame({
...     'Timestamp': t,
...     'X position (m)': x,
...     'Y position (m)': y,
...     'Temperature (degC)': temp,
... })
>>> df = pd.concat([
...     pd.DataFrame(
...         [dict(
...             zip(list(df.columns),
...             ['SignalId'] + [str(uuid.uuid4()) for i in range(df.columns.size - 1)]
...         ))]
...     ),
...     df], ignore_index=True
... )
>>> df = df.set_index('Timestamp')
>>> 
>>> df
                                           X position (m)                        Y position (m)                    Temperature (degC)
Timestamp                                                                                                                            
SignalId             da8a0a1b-a022-48cc-9e17-91b4b103cc5b  e92dad78-6128-45d5-8545-b45e80345da9  3106111b-0f53-4122-a89f-e1f78aac72b9
2023-01-01 00:00:00                               1.66612                              0.503874                             -0.202982
2023-01-01 00:10:00                             -1.266542                              0.141686                              0.488124
2023-01-01 00:20:00                              -0.46789                             -0.132084                             -1.011771
2023-01-01 00:30:00                              1.276952                             -0.811061                             -1.735414
2023-01-01 00:40:00                              1.178987                             -0.245169                              1.295712
2023-01-01 00:50:00                             -1.503673                               0.60517                             -0.946938
2023-01-01 01:00:00                             -1.095622                             -0.920928                             -0.233186
2023-01-01 01:10:00                             -1.276511                              0.710022                               1.94653
2023-01-01 01:20:00                             -0.470105                             -0.643144                              1.380882
2023-01-01 01:30:00                              1.426826                             -0.286228                              1.351435
>>> for i, c in enumerate(list(df.columns)):
...     newc = re.sub(r'\s+position\s+', ' ', c)
...     df.columns.values[i] = newc
... 
>>> df
Segmentation fault (core dumped)

Expected Behavior

Though the operation may be debatable (changing a column name in place via df.columns.values[i] = new_name), it is accepted without any warning or error message. The ensuing segmentation fault occurs seemingly at random, which makes it very hard to diagnose.

Hence the expected behaviour is to either block these operations, or alternatively to fully support them if they are meant to be permitted.

Installed Versions

INSTALLED VERSIONS
------------------
commit                : 0691c5cf90477d3503834d983f69350f250a6ff7
python                : 3.11.11
python-bits           : 64
OS                    : Linux
OS-release            : 4.19.0-27-amd64
Version               : #1 SMP Debian 4.19.316-1 (2024-06-25)
machine               : x86_64
processor             :
byteorder             : little
LC_ALL                : en_US.UTF-8
LANG                  : en_US.UTF-8
LOCALE                : en_US.UTF-8

pandas                : 2.2.3
numpy                 : 2.0.2
pytz                  : 2025.1
dateutil              : 2.9.0.post0
pip                   : 24.0
Cython                : None
sphinx                : None
IPython               : 8.18.1
adbc-driver-postgresql: None
adbc-driver-sqlite    : None
bs4                   : None
blosc                 : None
bottleneck            : None
dataframe-api-compat  : None
fastparquet           : None
fsspec                : None
html5lib              : None
hypothesis            : None
gcsfs                 : None
jinja2                : None
lxml.etree            : None
matplotlib            : None
numba                 : None
numexpr               : None
odfpy                 : None
openpyxl              : None
pandas_gbq            : None
psycopg2              : None
pymysql               : None
pyarrow               : None
pyreadstat            : None
pytest                : 8.3.4
python-calamine       : None
pyxlsb                : None
s3fs                  : None
scipy                 : 1.13.1
sqlalchemy            : None
tables                : None
tabulate              : None
xarray                : 2024.7.0
xlrd                  : None
xlsxwriter            : None
zstandard             : None
tzdata                : 2025.1
qtpy                  : None
pyqt5                 : None

Comment From: Liam3851

This is definitely not supported; you're modifying the internal numpy data inside the Index. That internal data, exposed via .values, is no longer writeable when copy-on-write is used. Thus you can get the behaviour you are asking for, where write access to Index.values is blocked and raises an error, by turning on copy-on-write in pandas 2 (see https://pandas.pydata.org/docs/user_guide/copy_on_write.html), or by waiting for pandas 3, where copy-on-write will be on by default.

In the meantime, the supported way to do what you are trying to do in your example would be df.columns = df.columns.str.replace(r'\s+position\s+', ' ', regex=True)
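A minimal sketch combining both suggestions, assuming pandas >= 2.0 where the copy_on_write option is available; that the in-place write then raises rather than corrupting memory is taken from the comments above, not verified here:

import pandas as pd

# Opt in to copy-on-write (the default in pandas 3.0).
pd.options.mode.copy_on_write = True

df = pd.DataFrame({"X position (m)": [1.0], "Y position (m)": [2.0]})

# Supported way to rename: build a new Index and assign it back,
# instead of mutating the existing Index in place.
df.columns = df.columns.str.replace(r"\s+position\s+", " ", regex=True)
print(df.columns.tolist())  # ['X (m)', 'Y (m)']

# With copy-on-write enabled, writing into Index.values is expected to
# raise instead of silently corrupting memory.
try:
    df.columns.values[0] = "c"
except ValueError as exc:
    print(exc)  # assignment destination is read-only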

Comment From: Manju080

The segmentation fault is mainly due to memory corruption, since this kind of write is not supported in the current version. If you enable copy-on-write in pandas 2, it should raise a proper error instead of corrupting memory. So instead of modifying Index.values, use df.columns = df.columns.str.replace(r'\s+position\s+', ' ', regex=True)

Comment From: rhshadrach

Thanks for the report. Agreed this is not supported, but I think that should be spelled out more clearly in the docstring of Index.values. PRs adding a note in this regard are welcome!

Comment From: cvr

In the meantime, the supported way to do what you are trying to do in your example would be df.columns = df.columns.str.replace(r'\s+position\s+', ' ', regex=True)

Thanks for the reply @Manju080. A bit more detail beyond the example: this solution, or even the canonical df.rename(columns={'X position (m)': 'X (m)', 'Y position (m)': 'Y (m)'}), would be the general safe approach.

But there is one situation where this would be insufficient: when two columns have exactly the same name (which, as far as I know, is perfectly valid in pandas). Now imagine you want to change the name of only one of those columns. In that case, changing the column name in place through its ordinal index would allow the column to be identified unambiguously. Without this, the only safe solution is to copy all column names into a list, apply the change there, and do the bulk replacement df.columns = new_column_names, as in the sketch below.
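A minimal sketch of that list-based workaround, with hypothetical column names, which renames a single column by position without writing into Index.values:

import pandas as pd

# Two columns deliberately share the same name, so a name-based rename
# (df.rename or Index.str.replace) cannot target only one of them.
df = pd.DataFrame([[1, 2, 3]], columns=["X position (m)", "value", "value"])

# Copy the names out, edit the one identified by its ordinal position,
# and assign the whole list back in one go.
new_column_names = df.columns.tolist()
new_column_names[2] = "value (old sensor)"  # rename only the second "value"
df.columns = new_column_names

print(df.columns.tolist())
# ['X position (m)', 'value', 'value (old sensor)']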

Comment From: cvr

The segmentation fault is mainly due to memory corruption, since this kind of write is not supported in the current version. If you enable copy-on-write in pandas 2, it should raise a proper error instead of corrupting memory.

Yes, that seems to be it. Note that I raised this issue not exactly to find a solution (I had one, doing a bulk replacement of the column names, and now I also have yours) but to highlight that this leads to a bug that is really hard to diagnose, since there is no warning or blocking of any sort. As a user I would expect this operation to be blocked and a traceback to be raised. As I wrote:

Hence the expected behaviour is to either block these operations, or alternatively to fully allow those if these are to be permitted.

Comment From: mroeschke

As a user I would expect this operation to be blocked and a traceback to be raised. As I wrote:

In the next major release of pandas (3.0), this will indeed raise an error:

In [3]: df = pd.DataFrame({"a": [1], "b": [2]})

In [4]: df.columns.values[0] = "c"
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[4], line 1
----> 1 df.columns.values[0] = "c"

ValueError: assignment destination is read-only

Comment From: Manju080

take

Comment From: cvr

Thanks @mroeschke and @Manju080.