Code Sample

import numpy as np
import pandas as pd


freq = '1D'
# the assert at the bottom fails with 360 days of data
N_days = 360
# it passes with 30 days
# N_days = 30
N_choices = 30
N_rows = 1000
seed = 0


random_state = np.random.RandomState(seed)


df = pd.DataFrame(dict(
    k=random_state.randint(N_choices, size=N_rows),
    t=pd.Series(pd.DatetimeIndex(['2015-01-01']*N_rows)) +
      pd.Series([pd.DateOffset(el)  # noqa
                 for el in random_state.randint(N_days, size=N_rows)])
    ))
gb = df.groupby(pd.Grouper(key='t', freq=freq))
left = gb.k.nunique()                          # grouped nunique fast path
right = gb.apply(lambda df: df.k.nunique())    # per-group reference result
# this shows that gb.k.nunique() gives the wrong answer
print(df[df.t == df.t.min()])
print(left.head())
print(right.head())
assert left.equals(right)

Problem description

left and right should be identical, but they are not: gb.k.nunique() gives an incorrect answer.
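On an affected 0.19.x install, the apply form computed as right above can serve as a workaround until upgrading. Below is a minimal sketch with toy data (the values are illustrative, not taken from the report):

import pandas as pd

# Toy frame: two distinct k values on 2015-01-01, one on 2015-01-02.
df = pd.DataFrame({
    'k': [1, 1, 2, 3, 3],
    't': pd.to_datetime(['2015-01-01', '2015-01-01', '2015-01-01',
                         '2015-01-02', '2015-01-02']),
})
gb = df.groupby(pd.Grouper(key='t', freq='1D'))

# apply computes nunique group by group, sidestepping the dedicated
# SeriesGroupBy.nunique code path that misbehaves here on 0.19.2.
per_day = gb.apply(lambda d: d.k.nunique())
print(per_day)  # 2015-01-01 -> 2, 2015-01-02 -> 1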

Expected Output

The assert left.equals(right) at the end of the snippet should pass rather than fail.

Output of pd.show_versions()

In [1]: pd.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Linux
OS-release: 4.8.0-56-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.19.2
nose: 1.3.7
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.11.3
scipy: 0.18.1
statsmodels: 0.6.1
xarray: None
IPython: 5.1.0
sphinx: 1.5.1
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: 1.2.0
tables: 3.3.0
numexpr: 2.6.1
matplotlib: 2.0.0
openpyxl: 2.4.1
xlrd: 1.0.0
xlwt: 1.2.0
xlsxwriter: 0.9.6
lxml: 3.7.2
bs4: 4.5.3
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.1.5
pymysql: None
psycopg2: None
jinja2: 2.9.4
boto: 2.45.0
pandas_datareader: None

Comment From: dsm054

Could you try this in a more recent pandas? It seems to work for me in 0.20.2.

Comment From: sumdan

Confirmed that the above snippet does not raise with 0.20.2.

FYI: I think 0.19.2 is the version that ships with Anaconda 4.3.1.
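A small sketch (not from the thread) for checking which pandas the environment actually resolves to before re-running the snippet; per the comment below, the fix landed in 0.20.1, and the 0.19.2 that Anaconda 4.3.1 ships is affected:

import pandas as pd

# Compare the numeric leading components of the version string; any
# non-numeric part (e.g. a dev tag) is simply dropped from the comparison.
version = tuple(int(p) for p in pd.__version__.split('.')[:3] if p.isdigit())
if version < (0, 20, 1):
    print('pandas %s predates the fix; the repro above should still fail'
          % pd.__version__)
else:
    print('pandas %s should include the groupby nunique fix' % pd.__version__)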

Comment From: jreback

This was fixed in 0.20.1; closed by https://github.com/pandas-dev/pandas/commit/5a8883b965610234366150897fe8963abffd6a7c.

@sumdan you can always update pandas easily.