#### Code Sample, a copy-pastable example if possible

```python
import io
import pandas
csv_file = io.StringIO("""datetime,cloudcover_eddh,dewpoint,dewpoint_eddh,humidity,humidity_eddh,lat,lon,precipitation_eddh,pressure_eddh,temperature,temperature_eddh,winddirection_eddh,windgust_eddh,windspeed_eddh
2016-01-01 00:00:00,CAVOC,4.1,3.0,100.0,94.0,53.5443,9.926839999999999,,1023.0,4.1,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,1.7,3.0,96.0,94.0,53.61297,9.98145,,1023.0,2.3,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,2.4,3.0,98.0,94.0,53.57735,10.09428,,1023.0,2.7,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.0,3.0,94.0,94.0,53.68849,10.1335,,1023.0,3.9,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,4.2,3.0,76.0,94.0,53.6608,10.06555,,1023.0,,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.0,3.0,100.0,94.0,53.43252,10.297989999999999,,1023.0,3.0,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,1.9,3.0,92.0,94.0,53.68937,10.13025,,1023.0,3.1,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.5,3.0,100.0,94.0,53.6344,9.966560000000001,,1023.0,3.5,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.6,3.0,99.0,94.0,53.46402,9.89157,,1023.0,3.7,3.0,160.0,0.0,7.2""")
data_df = pandas.read_csv(csv_file, parse_dates=["datetime"], index_col="datetime")
cloud_cover_df = pandas.get_dummies(data_df.cloudcover_eddh, prefix="cloudcover_eddh")
df_hour = pandas.get_dummies(data_df.index.hour, prefix="hour")
data_df.reset_index(inplace=True, drop=True)
data_df = pandas.concat([
    data_df,
    df_hour,
    cloud_cover_df
], axis=1, ignore_index=True)
```
#### Problem description
I expect the three dataframes to be concatenated as intended. I know that the indices correspond, so I do not need pandas to check that for me. Instead, I get the following error message:
```
Traceback (most recent call last):
  File "
```
#### Output of ``pd.show_versions()``
<details>
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.10.0-33-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: None
pip: 9.0.1
setuptools: 36.0.1
Cython: None
numpy: 1.13.1
scipy: 0.17.0
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.6.2
feather: None
matplotlib: 1.5.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 0.7.3
lxml: None
bs4: 4.4.1
html5lib: 0.999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.8
s3fs: None
pandas_gbq: None
pandas_datareader: None
```
</details>
**Comment From: gfyoung**
@1kastner : Thanks for reporting this! It appears that the `cloud_cover_df` table is causing issues because the index is not aligned with that of the other two.
I'm not sure if the concatenation should work (@jreback ?), but that being said, better error messages should be raised if that's the case.
**Comment From: jorisvandenbossche**
Normally, non-aligned indices work fine (you just get a lot of NaNs):
```python
In [46]: df1 = pd.DataFrame(np.zeros((2,2)), index=[0, 1], columns=['a', 'b'])

In [47]: df2 = pd.DataFrame(np.ones((2,2)), index=[2, 3], columns=['c', 'd'])

In [48]: pd.concat([df1, df2], axis=1)
Out[48]:
     a    b    c    d
0  0.0  0.0  NaN  NaN
1  0.0  0.0  NaN  NaN
2  NaN  NaN  1.0  1.0
3  NaN  NaN  1.0  1.0

In [49]: df2 = pd.DataFrame(np.ones((2,2)), index=[pd.Timestamp(1), pd.Timestamp(2)], columns=['c', 'd'])

In [50]: pd.concat([df1, df2], axis=1)
Out[50]:
                                 a    b    c    d
0                              0.0  0.0  NaN  NaN
1                              0.0  0.0  NaN  NaN
1970-01-01 00:00:00.000000001  NaN  NaN  1.0  1.0
1970-01-01 00:00:00.000000002  NaN  NaN  1.0  1.0
```
So not sure what is different in the original example.
@1kastner sidenote: I think you are misunderstanding what `ignore_index` does. It does not mean "ignore the index values on the axis you are aligning on" (the row index in your case); rather, it replaces the labels of the axis you are concatenating along (the columns, because of `axis=1`) with default values (0, 1, ...). That is not what you want, I think (but it is not related to the actual bug here).
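For illustration, a minimal sketch (with placeholder frames, not the original data) of what `ignore_index=True` does together with `axis=1`:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.zeros((2, 2)), columns=['a', 'b'])
df2 = pd.DataFrame(np.ones((2, 2)), columns=['c', 'd'])

# ignore_index=True discards the labels of the concatenation axis
# (the columns, because of axis=1) and replaces them with 0, 1, ...;
# the row index is still used for alignment.
print(pd.concat([df1, df2], axis=1, ignore_index=True))
#      0    1    2    3
# 0  0.0  0.0  1.0  1.0
# 1  0.0  0.0  1.0  1.0
```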
**Comment From: jorisvandenbossche**
OK, so the cause is the fact that the index is uniform (a single value repeated), and depending on the dtype of the index, you get a different error message:
```python
In [113]: df1 = pd.DataFrame(np.zeros((2,2)), index=[0, 1], columns=['a', 'b'])
     ...: df2 = pd.DataFrame(np.ones((2,2)), index=[2, 2], columns=['c', 'd'])

In [114]: pd.concat([df1, df2], axis=1)
...
ValueError: Shape of passed values is (4, 6), indices imply (4, 4)

In [115]: df1 = pd.DataFrame(np.zeros((2,2)), index=[0, 1], columns=['a', 'b'])
     ...: df2 = pd.DataFrame(np.ones((2,2)), index=pd.DatetimeIndex([2, 2]), columns=['c', 'd'])

In [116]: pd.concat([df1, df2], axis=1)
...
TypeError: 'NoneType' object is not iterable
```
Closing this as a duplicate of https://github.com/pandas-dev/pandas/issues/6963. It seems that `concat` cannot handle indices that contain duplicates and differ between the frames (if the indices are identical, duplicates are handled fine). It is not exactly the same error message as in that issue (the error message depends on which axis you are concatenating), but the idea is the same.
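For contrast, a sketch of the working case (prompt numbers are illustrative, not from the original session): when both indices contain the same duplicated values, no alignment is needed and the concatenation succeeds:

```python
In [117]: df1 = pd.DataFrame(np.zeros((2,2)), index=[2, 2], columns=['a', 'b'])
     ...: df2 = pd.DataFrame(np.ones((2,2)), index=[2, 2], columns=['c', 'd'])

In [118]: pd.concat([df1, df2], axis=1)
Out[118]:
     a    b    c    d
2  0.0  0.0  1.0  1.0
2  0.0  0.0  1.0  1.0
```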
@1kastner I think the reason you get this, as I said above, is from misunderstanding what `ignore_index` is supposed to do. If you know all 3 frames are aligned, and you don't care about the actual index values, what you can do is use `reset_index`:
```python
pd.concat([data_df, df_hour, cloud_cover_df.reset_index()], axis=1)
```
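Note that `reset_index()` without `drop=True` moves the old `datetime` index into a regular column; if only the dummy columns are wanted, a variant (a sketch, not part of the original suggestion) is:

```python
pd.concat([data_df, df_hour, cloud_cover_df.reset_index(drop=True)], axis=1)
```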
**Comment From: jorisvandenbossche**
Duplicate of https://github.com/pandas-dev/pandas/issues/6963
**Comment From: 1kastner**
Thanks for the feedback. The `ignore_index` thing was a bit confusing, so I will try to add some information to the documentation, as this behavior was a bit surprising to me. The rest of the analysis seems valid, as the dataframe resulting from the `get_dummies` invocation has a different-looking index.
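As a small illustration of why the two indices end up different (a sketch continuing the code from the report): `get_dummies` keeps the index of a Series input, while an array-like input carries no index, so the result gets a fresh default one:

```python
# Series input: the duplicated DatetimeIndex of data_df is preserved
cloud_cover_df = pandas.get_dummies(data_df.cloudcover_eddh, prefix="cloudcover_eddh")
print(cloud_cover_df.index)  # DatetimeIndex of nine identical timestamps

# array-like input: data_df.index.hour has no index of its own,
# so the result gets a default integer index
df_hour = pandas.get_dummies(data_df.index.hour, prefix="hour")
print(df_hour.index)  # RangeIndex(start=0, stop=9, step=1)
```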
The solution I chose was to add the lines
```python
data_df.reset_index(inplace=True, drop=True)
df_hour.reset_index(inplace=True, drop=True)
cloud_cover_df.reset_index(inplace=True, drop=True)
```
just before the concatenation, from which I removed the `ignore_index` keyword. So now the complete code looks like:
```python
import io
import pandas
csv_file = io.StringIO("""datetime,cloudcover_eddh,dewpoint,dewpoint_eddh,humidity,humidity_eddh,lat,lon,precipitation_eddh,pressure_eddh,temperature,temperature_eddh,winddirection_eddh,windgust_eddh,windspeed_eddh
2016-01-01 00:00:00,CAVOC,4.1,3.0,100.0,94.0,53.5443,9.926839999999999,,1023.0,4.1,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,1.7,3.0,96.0,94.0,53.61297,9.98145,,1023.0,2.3,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,2.4,3.0,98.0,94.0,53.57735,10.09428,,1023.0,2.7,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.0,3.0,94.0,94.0,53.68849,10.1335,,1023.0,3.9,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,4.2,3.0,76.0,94.0,53.6608,10.06555,,1023.0,,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.0,3.0,100.0,94.0,53.43252,10.297989999999999,,1023.0,3.0,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,1.9,3.0,92.0,94.0,53.68937,10.13025,,1023.0,3.1,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.5,3.0,100.0,94.0,53.6344,9.966560000000001,,1023.0,3.5,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.6,3.0,99.0,94.0,53.46402,9.89157,,1023.0,3.7,3.0,160.0,0.0,7.2""")
data_df = pandas.read_csv(csv_file, parse_dates=["datetime"], index_col="datetime")
cloud_cover_df = pandas.get_dummies(data_df.cloudcover_eddh, prefix="cloudcover_eddh")
df_hour = pandas.get_dummies(data_df.index.hour, prefix="hour")
data_df.reset_index(inplace=True, drop=True)
cloud_cover_df.reset_index(inplace=True, drop=True)
df_hour.reset_index(inplace=True, drop=True)
data_df = pandas.concat([
    data_df,
    df_hour,
    cloud_cover_df
], axis=1)
```
That way I could add all the dummy columns back into the dataframe and obtain all the columns I wanted, i.e.:
```
dewpoint_eddh humidity_eddh lat lon pressure_eddh \
temperature_eddh winddirection_eddh windgust_eddh windspeed_eddh \
hour_0 hour_1 hour_2 hour_3 hour_4 hour_5 hour_6 hour_7 hour_8 \
hour_9 hour_10 hour_11 hour_12 hour_13 hour_14 hour_15 hour_16 \
hour_17 hour_18 hour_19 hour_20 hour_21 hour_22 hour_23 \
cloudcover_eddh_0 cloudcover_eddh_1 cloudcover_eddh_2 cloudcover_eddh_3 \
cloudcover_eddh_4 cloudcover_eddh_5 temperature
```
It might be a special case that I want to drop the index (the one named "datetime"), but overall it can be said that `get_dummies` is not that convenient for the described situation.
**Comment From: jorisvandenbossche**
Improvements to the docs are always welcome!
**Comment From: 1kastner**
Another improvement: since I just want to add some columns, and I know that there are duplicated indices (which do not work well with the join logic of concatenation), I can use `assign` instead:
```python
data_df = data_df.assign(**{column: cloud_cover_df[column] for column in cloud_cover_df.columns})
```
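Note that `assign` with Series values also aligns on the index, so this works here because the indices were already reset to match. A variant (a sketch, not from the original comment) that sidesteps index alignment entirely passes the raw values instead:

```python
data_df = data_df.assign(**{
    column: cloud_cover_df[column].values  # plain ndarray, so no index alignment
    for column in cloud_cover_df.columns
})
```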