Code Sample, a copy-pastable example if possible

import pandas as pd

data = [1, 2, 3]
df = pd.DataFrame(data)
s3_url = 's3://my_bucket/path/to/file.parquet'
df.to_parquet(s3_url)

Problem description

I am trying to upload a pandas DataFrame as a new file to S3, but I get a FileNotFoundError because the S3 file is opened for reading instead of writing. Call flow:

# frame.py:to_parquet():to_parquet(...)
# parquet.py:to_parquet():impl.write(...)
# parquet.py:write():get_filepath_or_buffer(...)
# parquet.py:get_filepath_or_buffer():s3.get_filepath_or_buffer(...)
# s3.py:get_filepath_or_buffer():fs.open(...)
# core.py:open():return S3File(mode='rb', ...)
...

It looks as if s3fs opens the file for reading instead of writing to it.
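For now, a possible workaround is to serialize the parquet bytes yourself and open the S3 object explicitly in write mode through s3fs. This is only a minimal sketch, assuming pyarrow and s3fs are available; the bucket and key are the placeholders from the sample above:

import io

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
import s3fs

df = pd.DataFrame([1, 2, 3])

# Convert the DataFrame to an Arrow table and write it into an in-memory buffer
table = pa.Table.from_pandas(df)
buffer = io.BytesIO()
pq.write_table(table, buffer)

# Open the S3 object with mode='wb' so it is created for writing,
# then upload the serialized parquet bytes
fs = s3fs.S3FileSystem()
with fs.open('s3://my_bucket/path/to/file.parquet', 'wb') as f:
    f.write(buffer.getvalue())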

Another issue is that I can't specify which AWS profile to use. See https://stackoverflow.com/questions/48486209/specify-aws-profile-name-to-use-when-uploading-pandas-dataframe-to-s3
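Until that is configurable through pandas itself, one way to select a named profile is to hand the upload to an explicit boto3 session instead. This is a rough sketch, assuming boto3 is installed; the profile name, bucket, and key below are placeholders:

import io

import boto3
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame([1, 2, 3])

# Serialize the DataFrame to parquet bytes in memory
table = pa.Table.from_pandas(df)
buffer = io.BytesIO()
pq.write_table(table, buffer)

# A boto3 Session accepts profile_name, so the credentials from that
# profile are used for the upload instead of the default ones
session = boto3.Session(profile_name='my-profile')
s3 = session.client('s3')
s3.put_object(Bucket='my_bucket', Key='path/to/file.parquet', Body=buffer.getvalue())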

Expected Output

A new file should be created.

Output of pd.show_versions()

>>> pd.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.2.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.22.0
pytest: 3.3.2
pip: 9.0.1
setuptools: 28.8.0
Cython: 0.27.3
numpy: 1.14.0
scipy: 1.0.0
pyarrow: 0.8.0
xarray: None
IPython: None
sphinx: None
patsy: 0.5.0
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: 1.2.1
pymysql: None
psycopg2: None
jinja2: None
s3fs: 0.1.2
fastparquet: 0.1.3
pandas_gbq: None
pandas_datareader: None

Comment From: jorisvandenbossche

This has recently been implemented, see https://github.com/pandas-dev/pandas/issues/19134 and https://github.com/pandas-dev/pandas/pull/19135. You're welcome to try out master!