Sample Code

import pandas as pd
import numpy as np


df = pd.DataFrame(data={'a': [-0.04096, -0.04096, -0.04096, -0.04096, -0.04096],
                        'b': [1., 2., 3., 4., 5.],
                        'c': [0.053646, 0.053646, 0.053646, 0.053646, 0.053646]},
                  dtype=np.float64)
corr_df = df.corr()

s_a = pd.Series(data=[-0.04096, -0.04096, -0.04096, -0.04096, -0.04096],
                dtype=np.float64, name='a')
s_b = pd.Series(data=[1., 2., 3., 4., 5.], index=[1, 2, 3, 4, 5], dtype=np.float64, name='b')
s_c = pd.Series(data=[0.053646, 0.053646, 0.053646, 0.053646, 0.053646],
                dtype=np.float64, name='c')

# Trying to rebuild the correlation matrix from above with the pandas.Series version.
# np.nan is used because correlation with the same Series does not work.
corr_series_new = pd.DataFrame(
    {'a': [np.nan,        s_a.corr(s_b), s_a.corr(s_c)],
     'b': [s_b.corr(s_a), np.nan,        s_b.corr(s_c)],
     'c': [s_c.corr(s_a), s_c.corr(s_b), np.nan       ]}
)

corr_series_old = pd.DataFrame(
    {'a': [np.nan,                df['a'].corr(df['b']), df['a'].corr(df['c'])],
     'b': [df['b'].corr(df['a']), np.nan,                df['b'].corr(df['c'])],
     'c': [df['c'].corr(df['a']), df['c'].corr(df['b']), np.nan               ]}
)

Problem description

1.

For some reason, pandas.DataFrame.corr() and pandas.Series.corr(other) show different behavior. In general, the correlation between two Series is not defined when one of them has no varying values, such as s_a or s_c, because the denominator of the correlation formula evaluates to zero, resulting in a division by zero. However, the corr method defined on DataFrame somehow manages to evaluate something, as shown in the following result:

>>> corr_df
    a    b    c
a NaN  NaN  NaN
b NaN  1.0  0.0
c NaN  0.0  1.0
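
To make the division-by-zero point above concrete, here is a minimal sketch (added for illustration, not part of the original report) that evaluates the Pearson denominator by hand for the constant column a; whether it comes out as exactly 0.0 or as a tiny nonzero number depends on floating-point rounding:

import numpy as np

a = np.array([-0.04096] * 5)        # constant column 'a' from the sample code
b = np.array([1., 2., 3., 4., 5.])  # column 'b'

# Pearson denominator: sqrt(sum((a - mean(a))**2) * sum((b - mean(b))**2)).
# With a constant 'a' the first factor is (numerically) zero, so the
# correlation is 0/0 and therefore undefined.
denom = np.sqrt(((a - a.mean()) ** 2).sum() * ((b - b.mean()) ** 2).sum())
print(denom)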
2.

The above results also do not match those obtained when working with Series, although one would expect them to match. Note that I have explicitly put NaNs on the diagonal, since e.g. s_b.corr(s_b) raises an error.

>>> corr_series_new
    a   b   c
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3.

Another problem: using the existing DataFrame columns instead of the newly created Series gives yet another, different result.

>>> corr_series_old
    a    b    c
0 NaN  NaN  NaN
1 NaN  NaN  0.0
2 NaN  0.0  NaN

I hope I did not miss anything.

Expected Output

Both methods in Series and DataFrame should produce the same output.

Output of pd.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.0-39-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 38.4.0
Cython: None
numpy: 1.14.2
scipy: 1.0.1
pyarrow: None
xarray: None
IPython: 6.3.1
sphinx: 1.7.2
patsy: 0.5.0
dateutil: 2.7.2
pytz: 2018.4
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.2.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 1.0.1
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

Comment From: jreback

So it seems we are taking the name into account when aligning these for Series (In [45] below). The DataFrame corr effectively ignores this, e.g. In [46]:

In [45]: Series(s_c.values).corr(Series(s_b))
Out[45]: nan

In [46]: Series(s_c.values).corr(Series(s_b.values))
Out[46]: 0.0

So I think it's reasonable to align (i.e. match index values) but ignore the name. Would take a PR for this; it might be slightly tricky as the magic is done in .align.
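
For context, a minimal sketch (not from the original discussion) of the alignment step: Series.align matches on index labels, and the name attribute is only metadata. With the s_a/s_b definitions from the report, only four labels overlap:

import pandas as pd

s_a = pd.Series([-0.04096] * 5, name='a')                               # default index 0..4
s_b = pd.Series([1., 2., 3., 4., 5.], index=[1, 2, 3, 4, 5], name='b')  # index 1..5

# Inner-join alignment keeps only the index labels present in both Series;
# the names 'a' and 'b' play no role in which rows are matched.
left, right = s_a.align(s_b, join='inner')
print(left.index.tolist())   # [1, 2, 3, 4]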

Comment From: BobMcFry

I was looking into this when I realized that I made a mistake reporting the bug: in s_b I forgot to remove the explicit index argument, which of course leads to the strange behavior. This renders most of this bug report pointless (sorry for that), because after removing the index argument both the Series and the DataFrame method produce the same results as stored in corr_df.

A further remark: I was still confused why the results of the correlation were not consistent. Theoretically, the result for corr_df should look something like:

>>> corr_df
    a    b    c
a NaN  NaN  NaN
b NaN  1.0  NaN
c NaN  NaN  NaN

Whenever at least one of the data vectors has a standard deviation of zero, the corresponding correlations should consistently come out as NaN. In the example above, s_a and s_c contain non-varying data and should therefore produce NaN values. Long story short: due to floating-point rounding, the standard deviation of s_c is

>>> s_c.std() 
7.757919228897728e-18

which explains these inconsistencies. I guess this bug report can be closed unless this numerical problem needs to be discussed any further.
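
To illustrate the floating-point effect described above, a small sketch (added for illustration; the exact numbers depend on platform and library versions, so treat the comments as indicative of the reporter's environment):

import pandas as pd

s_a = pd.Series([-0.04096] * 5)
s_c = pd.Series([0.053646] * 5)

# The computed mean of five identical floats is not always bit-for-bit equal
# to that float, so the deviations (x - mean) can be tiny nonzero numbers.
print(s_a.std())   # 0.0 in the report -> correlations involving 'a' are NaN
print(s_c.std())   # ~7.76e-18 in the report -> correlations involving 'c' become 0.0 / 1.0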

I am sorry for the inconvenience.

Comment From: lucifermorningstar1305

Well, this is happening because of the formula for correlation, which is as follows:

r_(x,y) = sum((x[i] - x_mean) * (y[i] - y_mean) ) / sqrt(sum((x[i] - x_mean)**2) * sum((y[i] - y_mean)**2) )

Now, as per your given table, when I subtract a - a_mean I get 0; the denominator therefore becomes 0, and hence the correlation is np.nan.
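
As a quick check of that formula, here is a hand-rolled Pearson correlation (a sketch added for illustration, not from the original comment), which returns NaN when the denominator is zero:

import numpy as np

def pearson(x, y):
    # Direct translation of the formula above.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    num = ((x - x.mean()) * (y - y.mean())).sum()
    denom = np.sqrt(((x - x.mean()) ** 2).sum() * ((y - y.mean()) ** 2).sum())
    return num / denom if denom != 0 else np.nan

print(pearson([1., 2., 3., 4., 5.], [2., 4., 6., 8., 10.]))  # 1.0
print(pearson([1., 1., 1., 1., 1.], [1., 2., 3., 4., 5.]))   # nan: constant x, zero denominator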

Comment From: nermin99

Related: https://stackoverflow.com/questions/75308316/pandas-dataframe-corr-doesnt-give-same-results-as-series-corr

Comment From: nermin99

After doing some experimentation, I can conclude with great confidence that the inconsistency between DataFrame.corr and Series.corr is due to misaligned indexes.

See my answer here https://stackoverflow.com/a/75833486/7012917

pandas.Series.corr will only compute the correlation over rows with matching index labels.
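
A small reproduction of that point (a sketch added here for illustration; x and y are new example names, with y reusing the shifted index from the original s_b):

import pandas as pd

x = pd.Series([1., 2., 3., 4., 5.])                         # default index 0..4
y = pd.Series([1., 3., 2., 5., 4.], index=[1, 2, 3, 4, 5])  # shifted index 1..5

# Series.corr aligns on index labels first, so only the four overlapping
# labels 1..4 contribute to this result.
print(x.corr(y))

# Building a DataFrame from the raw values puts both columns on one shared
# index, so all five pairs contribute and the result differs from the above.
df = pd.DataFrame({'x': x.values, 'y': y.values})
print(df.corr().loc['x', 'y'])  # 0.8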