Code Sample

import pandas as pd
import numpy as np

dates = pd.date_range('1/1/2000', periods=8)
data1 = {'A': np.arange(8), 
         'B': np.random.randn(8)
        }
df1 = pd.DataFrame(data1, index=dates)

data2 = {'A': np.arange(8),
         'C': np.random.randn(8)
        }
df2 = pd.DataFrame(data2, index=dates)

df = pd.concat([df1, df2], axis=1, ignore_index=False)
print(df1)
            A         B
2000-01-01  0  0.756225
2000-01-02  1  0.537196
2000-01-03  2  0.813128
2000-01-04  3  0.819868
2000-01-05  4 -0.653656
2000-01-06  5  1.246314
2000-01-07  6 -0.703276
2000-01-08  7 -0.743086
print(df2)
            A         C
2000-01-01  0 -1.584052
2000-01-02  1 -0.804060
2000-01-03  2 -1.350531
2000-01-04  3 -0.762914
2000-01-05  4 -0.589009
2000-01-06  5  0.395458
2000-01-07  6 -0.124446
2000-01-08  7  0.100183

df1 and df2 both contain column A (which is not an index column), so the unique column names are:

print(df.columns.unique())
['A' 'B' 'C']

When applying the selection below (assuming that columns with the same name contain the same values, which I know is true in this use case but may not always be true), the duplicate columns are kept.

print(df[df.columns.unique()])
            A  A         B         C
2000-01-01  0  0  0.086386 -2.558181
2000-01-02  1  1  0.217020 -0.964484
2000-01-03  2  2 -0.426538 -0.579129
2000-01-04  3  3  0.620989 -1.908091
2000-01-05  4  4 -1.463473 -0.345109
2000-01-06  5  5 -0.460185  0.670077
2000-01-07  6  6 -1.353535  0.226513
2000-01-08  7  7  0.329263  2.274627
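
For reference, the concatenated frame carries the label A twice, and selecting by that label returns both matches. A quick inspection (a sketch added for illustration, using the df built above):

# The concatenated frame carries 'A' twice; Index.duplicated marks the repeat.
print(df.columns.tolist())        # ['A', 'B', 'A', 'C']
print(df.columns.duplicated())    # [False False  True False]
print(df['A'].shape)              # (8, 2): both 'A' columns are returned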

Expected Output

A drop_duplicates-style operation over columns where, for example, the first of the duplicated columns is kept.

print(df.T.drop_duplicates().T)
              A         B         C
2000-01-01  0.0  0.086386 -2.558181
2000-01-02  1.0  0.217020 -0.964484
2000-01-03  2.0 -0.426538 -0.579129
2000-01-04  3.0  0.620989 -1.908091
2000-01-05  4.0 -1.463473 -0.345109
2000-01-06  5.0 -0.460185  0.670077
2000-01-07  6.0 -1.353535  0.226513
2000-01-08  7.0  0.329263  2.274627
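
Note that the transpose round trip upcasts column A to float: df.T has to choose a single common dtype for the mixed int/float frame, so the integers come back as float64. A dtype-preserving alternative (a sketch, relying on the assumption above that duplicate labels imply duplicate values) is to drop the repeated labels directly:

# Keep only the first occurrence of each column label; no transpose, so dtypes are preserved.
deduped = df.loc[:, ~df.columns.duplicated()]
print(deduped.columns.tolist())   # ['A', 'B', 'C']
print(deduped.dtypes)             # A stays int64, B and C stay float64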

Output of pd.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 3.19.0-59-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_AU.UTF-8

pandas: 0.18.1
nose: None
pip: 8.1.2
setuptools: 23.0.0
Cython: 0.24
numpy: 1.11.1
scipy: 0.17.1
statsmodels: None
xarray: None
IPython: 5.0.0
sphinx: None
patsy: None
dateutil: 2.5.3
pytz: 2016.6.1
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.12
pymysql: 0.7.3.None
psycopg2: 2.6.1 (dt dec pq3 ext)
jinja2: 2.8
boto: None
pandas_datareader: None

Comment From: jorisvandenbossche

@gregsifr This is the expected behaviour. If you index with a key, you will get all matches for that key. The same is true, for example, for a Series:

In [36]: s = pd.Series(range(4), index=['a', 'b', 'a', 'c'])

In [37]: s['b']
Out[37]: 1

In [38]: s['a']
Out[38]:
a    0
a    2
dtype: int64

In this case you happen to have identical columns, but that is not necessarily so, and how should pandas know which column to pick?

If you want specific columns, you can also use iloc to pick the exact column by position (but then you will have to decide which of the duplicate columns you want).
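
For example, with the column order A, B, A, C produced by the concat above, a positional selection could look like this (a sketch; the positions follow from that order):

# Keep the first 'A' (position 0) together with 'B' and 'C' (positions 1 and 3).
subset = df.iloc[:, [0, 1, 3]]
print(subset.columns.tolist())    # ['A', 'B', 'C']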

Comment From: sinhrks

@gregsifr I may have misunderstood, but are you simply requesting a method like df.drop_duplicates(axis=1) (the concat part can be ignored)?

Comment From: gregsifr

@sinhrks Yes, a df.drop_duplicates(axis=1) method would address this.
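
A minimal sketch of what such a method might do, written as a standalone helper (the name drop_duplicate_columns is hypothetical): compare columns by value and keep the first of each group of identical columns, preserving dtypes.

def drop_duplicate_columns(frame):
    # Hypothetical sketch of drop_duplicates(axis=1): keep only the first of
    # each group of value-identical columns, leaving dtypes untouched.
    kept_positions = []
    kept_columns = []
    for i in range(frame.shape[1]):
        col = frame.iloc[:, i]
        if not any(col.equals(seen) for seen in kept_columns):
            kept_positions.append(i)
            kept_columns.append(col)
    return frame.iloc[:, kept_positions]

print(drop_duplicate_columns(df))   # A, B, C, with the first 'A' kept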

Comment From: jorisvandenbossche

The df.drop_duplicates(axis=1) enhancement request is covered by https://github.com/pydata/pandas/issues/11250, so closing here.