Code Sample, a copy-pastable example if possible
import pandas as pd

df = pd.DataFrame({'a': ['x', 'x', 'y'], 'b': [0, 1, 0], 'c': [7, 8, 9]})

# groupby on object-dtype 'a' returns only the observed (a, b) pairs
print(df.groupby(['a', 'b']).mean().reset_index())

# after converting 'a' to category, every (a, b) combination is returned
df['a'] = df['a'].astype('category')
print(df.groupby(['a', 'b']).mean().reset_index())
Returns two different results:

   a  b  c
0  x  0  7
1  x  1  8
2  y  0  9

   a  b    c
0  x  0  7.0
1  x  1  8.0
2  y  0  9.0
3  y  1  NaN
Problem description
Performing a groupby with a categorical dtype returns every combination of the groupby columns. This is a problem in my actual application, as it results in a massive dataframe that is mostly filled with NaNs. I would also prefer not to move off of the category dtype, since it provides necessary memory savings.
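To make the scale of the problem concrete, here is a minimal sketch with made-up column names and sizes (not from my actual application), on pandas versions where groupby defaults to observed=False: grouping on two categorical columns with 100 categories each yields 100 x 100 = 10,000 result rows, however few combinations actually occur in the data.

import numpy as np
import pandas as pd

n_cats = 100
rng = np.random.default_rng(0)
cats1 = ['a%d' % i for i in range(n_cats)]
cats2 = ['b%d' % i for i in range(n_cats)]
df_big = pd.DataFrame({
    'c1': pd.Categorical(rng.choice(cats1, size=1000), categories=cats1),
    'c2': pd.Categorical(rng.choice(cats2, size=1000), categories=cats2),
    'v': np.arange(1000),
})

# One row per category combination: 100 x 100 = 10,000 rows, mostly NaN.
print(len(df_big.groupby(['c1', 'c2']).mean()))

# The number of combinations actually present in the data is far smaller.
print(len(df_big.drop_duplicates(['c1', 'c2'])))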
Expected Output
   a  b  c
0  x  0  7
1  x  1  8
2  y  0  9

   a  b  c
0  x  0  7
1  x  1  8
2  y  0  9
Comment From: jreback
This is by definition what groupby on a categorical does. Categorical is serving two masters: acting as a 'real' categorical and providing some memory savings.
Comment From: bear24rw
What is an example use case for a groupby aggregation returning all combinations, with rows full of NaNs? It seems non-intuitive that the dtype of a column would change the groupby behavior. It took me a while to figure out why I was getting NaN rows.
Comment From: jreback
It seems non-intuitive that the dtype of a column would change the groupby behavior.
It's not; groupby gives you the observed values, and here the observed values are by definition the categories. If you don't want this, then just change your grouper.
Comment From: alejcas
But... when working with big datasets we use categoricals to use less memory (without categoricals the data load just crashes with a memory error). I am using them purely for the memory savings, but now I have discovered that I cannot groupby as expected.
I can't use the object dtype because it will blow up my PC's memory.
What can I do? Any ideas?
Comment From: alejcas
@jreback how can I change this behaviour by changing the grouper? I don't see how in the docs: pandas.Grouper
Comment From: jreback
In [11]: df.groupby([df.a.astype(object), 'b']).mean()
Out[11]:
     c
a b
x 0  7
  1  8
y 0  9
Comment From: alejcas
@jreback this defeats the purpose of using categoricals in my use case... again, I can't convert to object because I hit a memory error.
Using categoricals was the perfect answer to avoiding memory errors... if only groupby worked as expected. Grouping with 5 categorical columns results in an exploding dataframe, as every possible combination is produced even when there is no data at all for 98% of the combinations!
I understand that how categoricals work in groupby operations is correct, but there must be a solution to our problem, that is: using categoricals ONLY to avoid memory errors, while keeping the same behaviour as object-dtype columns.
I appreciate your answers! Thanks!
Comment From: jreback
@janscas this was never the purpose of categoricals, though sure, you can use them to save memory and folks do.
I don't know of an elegant solution here; you can do something like:

In [13]: result = df.groupby([df.a.cat.codes, 'b']).mean()

In [14]: result.index = result.index.set_levels(df.a.cat.categories, level=0)

In [15]: result
Out[15]:
     c
  b
x 0  7
  1  8
y 0  9

I wouldn't object to having a function do this (maybe via a keyword on groupby)
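A small variation on this workaround (a sketch, not from the thread): renaming the codes Series to the level name 'a' before grouping keeps the index level named, so set_levels can address it by name.

# Sketch: group on the integer codes (cheap), renaming the codes Series to 'a'
# so the resulting index level keeps its name, then swap the integer codes
# back for the category labels.
# Caveat: like the snippet above, this assumes every category is observed;
# unobserved categories would misalign the labels.
result = df.groupby([df.a.cat.codes.rename('a'), 'b']).mean()
result.index = result.index.set_levels(df.a.cat.categories, level='a')
print(result)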
Comment From: bear24rw
@janscas this was never the purpose of categoricals, though sure, you can use them to save memory and folks do.
The first bullet of the categorical documentation advertises its use for memory saving:
The categorical data type is useful in the following cases:
A string variable consisting of only a few different values. Converting such a string variable to a categorical variable will save some memory, see here.
So if it's not primarily for memory savings, what is its primary use case?
Additionally, nowhere can I find an example of why you would want categoricals to expand after a groupby. Is it a side effect of some other desirable behavior?
Comment From: alejcas
I wouldn't object to having a function do this (maybe via a keyword on groupby)
@jreback I would greatly appreciate this. I think many more people out there need this.
Thanks for the solution provided; I'm going to test it.
Comment From: batterseapower
I keep getting bitten by this special case. It's just really surprising that groupby works differently for categoricals specifically. Just like @janscas I'm using categoricals for memory savings as advised by the docs, but I periodically try to groupby a categorical column and blow up my memory because pandas wants to generate a result filled with tons of NaNs.
It's a bit weird that pandas tries to "guess" what the observed values are from the categories rather than just using the data it was given. For example, let's say I was studying diet; I might do something like this:
import numpy as np
import pandas as pd

# pd.DataFrame.from_items was still available in pandas 0.22
# (deprecated in 0.23, removed in 1.0)
df = pd.DataFrame.from_items([
    ('food', pd.Categorical(['BURGER', 'CHIPS', 'SALAD', 'RICE'] * 2)),
    ('day', ([0] * 4) + ([1] * 4)),
    ('weight', np.arange(8)),
])

# keep only the junk food rows
df2 = df[df['food'].isin({'BURGER', 'CHIPS'})]
print(df2.groupby(['day', 'food']).mean())
When I work with df2 I'm studying junk food specifically, but Pandas insists on telling me about the healthy foods I explicitly filtered out.
What's even weirder is that the behaviour changes depending on how many columns your group-by result has! With pandas 0.22.0 the following alternative groupby operation does not include the extra NaN rows for SALAD and RICE:
print(df2.groupby(['day', 'food'])['weight'].mean())
This seems like a bug?
Comment From: mattharrison
Not meaning to pile on, but this behavior was completely unexpected.
Where is this behavior of category defined, i.e.:
This is by definition what groupby on a categorical does. Categorical is serving two masters: acting as a 'real' categorical and providing some memory savings.
Perhaps an option to enable/disable cartesian products could be added to the .groupby method.
Comment From: jreback
this is exactly what observed does
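For reference, a minimal example using the dataframe from the original report (the observed keyword was added to groupby in pandas 0.23):

# Only category combinations actually present in the data appear,
# matching the object-dtype behavior.
print(df.groupby(['a', 'b'], observed=True).mean().reset_index())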
Comment From: mattharrison
Oh, the observed parameter in .groupby! Awesome, thanks!
I would prefer it to default to True and also work with any grouper.
Comment From: JacobHayes
Would a PR changing observed=True as the default be considered for/before pandas 1.0.0? That seems to more closely match expectations (and most intentions). I'd be happy to submit that.
Comment From: jreback
1.0.0rc1 has been out for a few weeks, so no, it won't be considered; it's possible to add a deprecation so that it would change in the future for 1.1.
Comment From: JacobHayes
Thanks @jreback, that makes sense. I'll take a look into a deprecation.