Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this issue exists on the latest version of pandas.
- [X] I have confirmed this issue exists on the main branch of pandas.
Reproducible Example
from itertools import product
import time
import numpy as np
import pandas as pd
NUM_ROWS = [100, 150, 200]
NUM_COLUMNS = [1000, 10000, 50000]
NUM_DFS = [3, 5, 7, 9]
test_cases = product(NUM_ROWS, NUM_COLUMNS, NUM_DFS)
def manual_concat(df_list):
    columns = [col for df in df_list for col in df.columns]
    columns = list(set(columns))
    index = np.hstack([df.index.values for df in df_list])
    df_list = [df.reindex(columns=columns) for df in df_list]
    values = np.vstack([df.values for df in df_list])
    return pd.DataFrame(values, index=index, columns=columns)

def compare_dataframes(df1, df2, margin=1e-4):
    return abs(df1.fillna(0) - df2.fillna(0)[df1.columns]).sum().sum() < margin

def generate_dataframes(num_dfs, num_rows, num_cols, all_cols):
    df_list = []
    for i in range(num_dfs):
        index = ['i%d' % i for i in range(i*num_rows, (i+1)*num_rows)]
        columns = np.random.choice(all_cols, num_cols, replace=False)
        values = np.random.uniform(-100, 100, [num_rows, num_cols])
        df_list.append(pd.DataFrame(values, index=index, columns=columns))
    return df_list
for ti, t in enumerate(test_cases):
    num_rows = t[0]
    num_cols = t[1]
    num_dfs = t[2]
    num_all_cols = num_cols * num_dfs * 4 // 5
    all_cols = ['c%i' % i for i in range(num_all_cols)]
    df_list = generate_dataframes(num_dfs, num_rows, num_cols, all_cols)
    t1 = time.time()
    df_pandas = pd.concat(df_list)
    t2 = time.time()
    df_manual = manual_concat(df_list)
    t3 = time.time()
    print('Testcase %d' % (ti + 1))
    print('NUM_ROWS: %d, NUM_COLS: %d, NUM_DFS: %d' % (num_rows, num_cols, num_dfs))
    print('Pandas: %.2f' % (t2 - t1))
    print('Manual: %.2f' % (t3 - t2))
    print(compare_dataframes(df_pandas, df_manual))
output:
Testcase 1
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.07
Manual: 0.01
True
Testcase 2
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.17
Manual: 0.01
True
Testcase 3
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.28
Manual: 0.03
True
Testcase 4
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.44
Manual: 0.05
True
Testcase 5
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.76
Manual: 0.06
True
Testcase 6
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 1.82
Manual: 0.13
True
Testcase 7
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.21
Manual: 0.28
True
Testcase 8
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 4.85
Manual: 0.44
True
Testcase 9
NUM_ROWS: 100, NUM_COLS: 50000, NUM_DFS: 3
Pandas: 4.21
Manual: 0.34
True
Testcase 10
NUM_ROWS: 100, NUM_COLS: 50000, NUM_DFS: 5
Pandas: 9.84
Manual: 0.77
True
Testcase 11
NUM_ROWS: 100, NUM_COLS: 50000, NUM_DFS: 7
Pandas: 16.49
Manual: 1.67
True
Testcase 12
NUM_ROWS: 100, NUM_COLS: 50000, NUM_DFS: 9
Pandas: 25.20
Manual: 2.77
True
Testcase 13
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.20
Manual: 0.15
True
Testcase 14
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.17
Manual: 0.02
True
Testcase 15
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.30
Manual: 0.04
True
Testcase 16
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.43
Manual: 0.06
True
Testcase 17
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.73
Manual: 0.08
True
Testcase 18
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 1.87
Manual: 0.21
True
Testcase 19
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.25
Manual: 0.40
True
Testcase 20
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 4.95
Manual: 0.62
True
Testcase 21
NUM_ROWS: 150, NUM_COLS: 50000, NUM_DFS: 3
Pandas: 4.26
Manual: 0.45
True
Testcase 22
NUM_ROWS: 150, NUM_COLS: 50000, NUM_DFS: 5
Pandas: 9.86
Manual: 1.23
True
Testcase 23
NUM_ROWS: 150, NUM_COLS: 50000, NUM_DFS: 7
Pandas: 17.29
Manual: 2.35
True
Testcase 24
NUM_ROWS: 150, NUM_COLS: 50000, NUM_DFS: 9
Pandas: 25.49
Manual: 3.79
True
Testcase 25
NUM_ROWS: 200, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.22
Manual: 0.25
True
Testcase 26
NUM_ROWS: 200, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.18
Manual: 0.03
True
Testcase 27
NUM_ROWS: 200, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.34
Manual: 0.05
True
Testcase 28
NUM_ROWS: 200, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.44
Manual: 0.08
True
Testcase 29
NUM_ROWS: 200, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.74
Manual: 0.10
True
Testcase 30
NUM_ROWS: 200, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 1.98
Manual: 0.26
True
Testcase 31
NUM_ROWS: 200, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.28
Manual: 0.49
True
Testcase 32
NUM_ROWS: 200, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 4.97
Manual: 0.78
True
Testcase 33
NUM_ROWS: 200, NUM_COLS: 50000, NUM_DFS: 3
Pandas: 4.44
Manual: 0.56
True
Testcase 34
NUM_ROWS: 200, NUM_COLS: 50000, NUM_DFS: 5
Pandas: 9.86
Manual: 1.43
True
Testcase 35
NUM_ROWS: 200, NUM_COLS: 50000, NUM_DFS: 7
Pandas: 17.43
Manual: 2.83
True
Testcase 36
NUM_ROWS: 200, NUM_COLS: 50000, NUM_DFS: 9
Pandas: 25.68
Manual: 5.25
True
Installed Versions
Prior Performance
No response
Comment From: topper-123
Your manual_concat function does not give the same output as pd.concat; in particular, the columns are ordered differently. To ensure the results really are equal, your compare_dataframes should be:

def compare_dataframes(df1, df2):
    return df1.equals(df2)

The ordering difference may be the reason for the different performance results, as ordering can be costly performance-wise.
Can you redo your example so that the column ordering is the same as for pd.concat, or alternatively explain why the ordering in your example is better (faster) than the current ordering and whether it could be incorporated in a backwards-compatible manner?
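For reference, a minimal toy illustration (not from the original posts) of why the two orderings differ: list(set(...)) discards first-seen order, whereas dict.fromkeys and pd.concat (with its default sort=False) preserve it.

import pandas as pd

# Hypothetical toy frames, only to show the column-ordering behaviour.
df_a = pd.DataFrame({'x': [1], 'y': [2]})
df_b = pd.DataFrame({'y': [3], 'z': [4]})

cols = [c for df in (df_a, df_b) for c in df.columns]
print(list(set(cols)))                        # arbitrary order, e.g. ['z', 'x', 'y']
print(list(dict.fromkeys(cols)))              # first-seen order: ['x', 'y', 'z']
print(list(pd.concat([df_a, df_b]).columns))  # first-seen order as well: ['x', 'y', 'z']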
Comment From: topper-123
Ok, I've redone your example below so that the result dataframes are actually equal (the only code changes are the lines marked with # this line was changed!):
from itertools import product
import time
import numpy as np
import pandas as pd
NUM_ROWS = [100, 150, 200]
NUM_COLUMNS = [1000, 10000, 50000]
NUM_DFS = [3, 5, 7, 9]
test_cases = product(NUM_ROWS, NUM_COLUMNS, NUM_DFS)
def manual_concat(df_list):
    columns = [col for df in df_list for col in df.columns]
    columns = list(dict.fromkeys(columns))  # this line was changed!
    index = np.hstack([df.index.values for df in df_list])
    df_list = [df.reindex(columns=columns) for df in df_list]
    values = np.vstack([df.values for df in df_list])
    return pd.DataFrame(values, index=index, columns=columns)

def compare_dataframes(df1, df2, margin=1e-4):
    return df1.equals(df2)  # this line was changed!

def generate_dataframes(num_dfs, num_rows, num_cols, all_cols):
    df_list = []
    for i in range(num_dfs):
        index = ['i%d' % i for i in range(i*num_rows, (i+1)*num_rows)]
        columns = np.random.choice(all_cols, num_cols, replace=False)
        values = np.random.uniform(-100, 100, [num_rows, num_cols])
        df_list.append(pd.DataFrame(values, index=index, columns=columns))
    return df_list
for ti, t in enumerate(test_cases):
    num_rows = t[0]
    num_cols = t[1]
    num_dfs = t[2]
    num_all_cols = num_cols * num_dfs * 4 // 5
    all_cols = ['c%i' % i for i in range(num_all_cols)]
    df_list = generate_dataframes(num_dfs, num_rows, num_cols, all_cols)
    t1 = time.time()
    df_pandas = pd.concat(df_list)
    t2 = time.time()
    df_manual = manual_concat(df_list)
    t3 = time.time()
    print('Testcase %d' % (ti + 1))
    print('NUM_ROWS: %d, NUM_COLS: %d, NUM_DFS: %d' % (num_rows, num_cols, num_dfs))
    print('Pandas: %.2f' % (t2 - t1))
    print('Manual: %.2f' % (t3 - t2))
    print(compare_dataframes(df_pandas, df_manual))
It confirms your results:
Testcase 1
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.20
Manual: 0.02
True
Testcase 2
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.12
Manual: 0.01
True
Testcase 3
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.22
Manual: 0.02
True
Testcase 4
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.35
Manual: 0.05
True
Testcase 5
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.83
Manual: 0.05
True
Testcase 6
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 2.08
Manual: 0.15
True
Testcase 7
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.18
Manual: 0.27
True
Testcase 8
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 4.83
Manual: 0.56
True
Testcase 9
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.12
Manual: 0.01
True
Testcase 10
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.40
Manual: 0.02
True
Testcase 11
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.23
Manual: 0.03
True
Testcase 12
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.58
Manual: 0.04
True
Testcase 13
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.78
Manual: 0.07
True
Testcase 14
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 2.12
Manual: 0.18
True
Testcase 15
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.30
Manual: 0.48
True
Testcase 16
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 5.12
Manual: 1.03
True
Of course there may be reasons why the native solution is slower (various checks, broader use cases, etc.), but this does warrant an examination IMO.
Can you ping back if you want to submit a PR for the issue? Otherwise I'll take a look.
Comment From: bckim1318
ping @topper-123. OK, I'll work on it.
Comment From: pdemarti
Not sure if related, but we are struggling mightily to get decent performance out of concat with pandas>=1.4.4 (tested through 1.5.3).
Even in the case where the frames are entirely float64, we see a hundred-fold speed degradation when some frames have been reindexed such that columns of NaNs appear (by contrast to an underlying contiguous numpy array in which some columns are NaN).
Here is a somewhat simpler setup than the one suggested in this ticket.
Setup
import pandas as pd
import numpy as np

>>> pd.__version__
'1.5.3'

np.random.seed(0)  # reproducible example
n, m = 100, 200
k = 10
frames0 = [
    pd.DataFrame(np.random.uniform(size=(n, m)),
                 columns=[f'c{i:04d}' for i in range(m)])
    for _ in range(k)
]
# one mask per frame: where mask is True: no change; False: NaN or remove
masks = [np.random.randint(0, 2, m, dtype=bool) for _ in range(k)]
# null out entire columns
frames1 = [
    df.mul(np.where(mask, 1, np.nan))
    for df, mask in zip(frames0, masks)
]
# remove entire columns, then re-index
frames2 = [
    df.iloc[:, mask].reindex_like(df)
    for df, mask in zip(frames0, masks)
]
Timings
The absolute timing values don't matter, but their ratios do. FWIW, this was run on an r5.2xlarge EC2 instance.
Using IPython's line magic %timeit:
dt0 = %timeit -o pd.concat(frames0)
# 707 µs ± 862 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
dt1 = %timeit -o pd.concat(frames1)
# 886 µs ± 643 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
dt2 = %timeit -o pd.concat(frames2)
# 453 ms ± 3.62 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> dt2.best / dt1.best
509.53007324284977
# and, ironically
dt3 = %timeit -o pd.concat([df.copy() for df in frames2])
# 2.31 ms ± 23.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Equalities:
>>> all([df1.equals(df2) for df1, df2 in zip(frames1, frames2)])
True
>>> pd.concat(frames1).equals(pd.concat(frames2))
True
With timeit instead (easier to add to an integration test suite):
import timeit
results = [
timeit.Timer(
'pd.concat(frames)',
'import pandas as pd',
globals={'frames': frames},
).autorange()
for frames in [frames0, frames1, frames2]
]
results = [t / n for n, t in results]
>>> results[2] / results[1]
473.753
The results are similar for the versions I checked: 1.4.4, 1.5.2 and 1.5.3.
However, for 1.4.2, the numbers seemed much better (next comment).
Comment From: pdemarti
With pandas=1.4.2, we get, for the same test:
>>> pd.__version__
'1.4.2'
dt0 = %timeit -o pd.concat(frames0)
# 676 µs ± 6.12 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
dt1 = %timeit -o pd.concat(frames1)
# 674 µs ± 780 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
dt2 = %timeit -o pd.concat(frames2)
# 17.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> dt2.best / dt1.best
25.59028179864691
# and, ironically
dt3 = %timeit -o pd.concat([df.copy() for df in frames2])
# 3.33 ms ± 21.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Comment From: pdemarti
We implemented a quick fix for homogeneous frames that is similar to the one proposed by @bckim1318:
from functools import reduce

def np_concat(frames):
    a = np.vstack([df.to_numpy() for df in frames])
    cols = frames[0].columns
    ix = reduce(lambda x, y: x.append(y), [df.index for df in frames])
    # this also gets the index name if they are all consistent
    return pd.DataFrame(a, index=ix, columns=cols)

assert np_concat([df for df in frames2]).equals(pd.concat(frames1))

dt4 = %timeit -o np_concat([df for df in frames2])
# 509 µs ± 3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
While it is much better in performance, it of course causes issues with heterogeneous frames. For instance:
df = pd.DataFrame({'a': [1, 7], 'b': ['foo', 'bar'], 'c': [1.2, 2.3]})
>>> vars(df)
{'_is_copy': None,
'_mgr': BlockManager
Items: Index(['a', 'b', 'c'], dtype='object')
Axis 1: RangeIndex(start=0, stop=2, step=1)
NumericBlock: slice(0, 1, 1), 1 x 2, dtype: int64
ObjectBlock: slice(1, 2, 1), 1 x 2, dtype: object
NumericBlock: slice(2, 3, 1), 1 x 2, dtype: float64, # 3 distinct blocks (nice)
'_item_cache': {},
'_attrs': {},
'_flags': <Flags(allows_duplicate_labels=True)>}
But:
>>> df.to_numpy()
array([[1, 'foo', 1.2],
[7, 'bar', 2.3]], dtype=object) # numpy can only be homogeneous
And thus:
>>> vars(np_concat([df]))
{'_is_copy': None,
'_mgr': BlockManager
Items: Index(['a', 'b', 'c'], dtype='object')
Axis 1: RangeIndex(start=0, stop=2, step=1)
ObjectBlock: slice(0, 3, 1), 3 x 2, dtype: object, # single object block
'_item_cache': {},
'_attrs': {},
'_flags': <Flags(allows_duplicate_labels=True)>}
Which may affect downstream operations.
That said, df.T.T also collapses the frame to a single homogeneous "object" block:
>>> vars(df.T.T)
{'_is_copy': None,
'_mgr': BlockManager
Items: Index(['a', 'b', 'c'], dtype='object')
Axis 1: RangeIndex(start=0, stop=2, step=1)
ObjectBlock: slice(0, 3, 1), 3 x 2, dtype: object,
'_item_cache': {},
'_attrs': {},
'_flags': <Flags(allows_duplicate_labels=True)>}
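As a follow-up sketch (my addition, not from the original comment): one way to keep the fast path without the dtype collapse is to gate it on the frames actually being homogeneous and aligned, falling back to pd.concat otherwise. np_concat refers to the helper defined above, and the exact check used here is an assumption:

def safe_fast_concat(frames):
    # Use the numpy-based fast path only when every frame has identical
    # columns and a single float64 dtype; otherwise fall back to pd.concat,
    # which handles mixed dtypes and misaligned columns correctly.
    first_cols = frames[0].columns
    homogeneous = all(
        df.columns.equals(first_cols)
        and len(df.dtypes.unique()) == 1
        and df.dtypes.iloc[0] == np.float64
        for df in frames
    )
    return np_concat(frames) if homogeneous else pd.concat(frames)

Whether a check along these lines could live inside pd.concat itself is essentially what this issue is asking.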
Comment From: topper-123
Thanks for the testing @pdemarti, your examples are very clear.
Shame that this is difficult to fix. Can you do a bisect on the pandas git repo to pinpoint the PR where this performance degradation happened? That could be the best next step to fix this.
I will take a look at my PR over the coming days (https://github.com/pandas-dev/pandas/pull/51419). You're of course also very welcome to take a look at it or propose commits to that PR.
Comment From: pdemarti
I did bisect, @topper-123: the first commit where the ratio above jumps from 31 to 690 is ad7dc56b53. The commit message indicates work in that area:
| * ad7dc56b53 - (9 months ago) Backport PR #47372 on branch 1.4.x (REGR: revert behaviour change for concat with empty/all-NaN data) (#47472)
The commit immediately before that, 30a3b98a, gives a ratio of 34.91, similar to v1.4.2.
Personally, I think the ratio should be close to 1.0. In fact, by simply doing a df.copy() on the frames before concat, you get a perf ratio of 1.02 instead of 30 (v1.4.2) or 690 (v1.4.3).
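For anyone repeating the bisect, here is a rough sketch (my addition, not part of the original comment) of a check script that could be handed to git bisect run; it rebuilds the frames from the setup above and reports a commit as bad when the frames2/frames1 ratio exceeds a threshold. The file name and the threshold of 100 are arbitrary assumptions, and each bisect step still requires rebuilding pandas' extensions before running the script.

# check_concat_ratio.py -- hypothetical helper for `git bisect run python check_concat_ratio.py`
# Exit code 0 marks the commit good, 1 marks it bad (the convention git bisect run expects).
import sys
import timeit
import numpy as np
import pandas as pd

np.random.seed(0)
n, m, k = 100, 200, 10
frames0 = [pd.DataFrame(np.random.uniform(size=(n, m)),
                        columns=[f'c{i:04d}' for i in range(m)]) for _ in range(k)]
masks = [np.random.randint(0, 2, m, dtype=bool) for _ in range(k)]
frames1 = [df.mul(np.where(mask, 1, np.nan)) for df, mask in zip(frames0, masks)]
frames2 = [df.iloc[:, mask].reindex_like(df) for df, mask in zip(frames0, masks)]

def per_loop_seconds(frames):
    number, total = timeit.Timer('pd.concat(frames)',
                                 globals={'pd': pd, 'frames': frames}).autorange()
    return total / number

ratio = per_loop_seconds(frames2) / per_loop_seconds(frames1)
print(f'frames2/frames1 concat ratio: {ratio:.1f}')
sys.exit(0 if ratio < 100 else 1)  # 100 is an arbitrary cut-off between ~30 (good) and ~500+ (bad)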
Comment From: jbrockmendel
I've spent some time tracking this down, recording some preliminary observations here:
1) We lost a decent amount of perf in #47372 that we'll get back if/when we un-revert it, which may need a deprecation cycle.
2) _maybe_reindex_columns_na_proxy is breaking single-block Managers into many-block Managers, which has a huge subsequent performance hit.
3) The manual concat in the OP gives differently-sorted results than the pandas concat.
4) _combine_concat_plans is a mess.
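To observe the block fragmentation mentioned in point 2 from the outside, one can peek at the result's internal BlockManager. A small diagnostic sketch (my addition; _mgr and nblocks are pandas internals, not public API, so this may change between versions):

import numpy as np
import pandas as pd

# A frame built directly from one float array is backed by a single block.
df = pd.DataFrame(np.random.uniform(size=(3, 3)), columns=list('abc'))
print(df._mgr.nblocks)  # 1

# Reindexing in new, all-NaN columns is the pattern from frames2 above;
# comparing nblocks before and after (and across pandas versions) shows
# whether the data stays consolidated or gets split into many blocks.
reindexed = df.reindex(columns=list('abcd'))
print(reindexed._mgr.nblocks)
print(pd.concat([reindexed, reindexed])._mgr.nblocks)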