Seen this in a few PRs:

https://github.com/pandas-dev/pandas/pull/39153/checks?check_run_id=1697646268

________________________ TestS3.test_write_s3_csv_fails ________________________
[gw1] linux -- Python 3.7.9 /usr/share/miniconda/envs/pandas-dev/bin/python

self = <pandas.tests.io.parser.test_network.TestS3 object at 0x7feea42e2610>
tips_df =      total_bill   tip     sex smoker   day    time  size
0         16.99  1.01  Female     No   Sun  Dinner     2
1   ....75    Male     No   Sat  Dinner     2
243       18.78  3.00  Female     No  Thur  Dinner     2

[244 rows x 7 columns]
s3so = {'client_kwargs': {'endpoint_url': 'http://127.0.0.1:5551/'}}

    def test_write_s3_csv_fails(self, tips_df, s3so):
        # GH 32486
        # Attempting to write to an invalid S3 path should raise
        import botocore

        # GH 34087
        # https://boto3.amazonaws.com/v1/documentation/api/latest/guide/error-handling.html
        # Catch a ClientError since AWS Service Errors are defined dynamically
        error = (FileNotFoundError, botocore.exceptions.ClientError)

        with pytest.raises(error, match="The specified bucket does not exist"):
            tips_df.to_csv(
>               "s3://an_s3_bucket_data_doesnt_exit/not_real.csv", storage_options=s3so
            )
E           Failed: DID NOT RAISE (<class 'FileNotFoundError'>, <class 'botocore.exceptions.ClientError'>)

pandas/tests/io/parser/test_network.py:224: Failed
______________________ TestS3.test_write_s3_parquet_fails ______________________
[gw1] linux -- Python 3.7.9 /usr/share/miniconda/envs/pandas-dev/bin/python

self = <pandas.tests.io.parser.test_network.TestS3 object at 0x7feea435c790>
tips_df =      total_bill   tip     sex smoker   day    time  size
0         16.99  1.01  Female     No   Sun  Dinner     2
1   ....75    Male     No   Sat  Dinner     2
243       18.78  3.00  Female     No  Thur  Dinner     2

[244 rows x 7 columns]
s3so = {'client_kwargs': {'endpoint_url': 'http://127.0.0.1:5551/'}}

    @td.skip_if_no("pyarrow")
    def test_write_s3_parquet_fails(self, tips_df, s3so):
        # GH 27679
        # Attempting to write to an invalid S3 path should raise
        import botocore

        # GH 34087
        # https://boto3.amazonaws.com/v1/documentation/api/latest/guide/error-handling.html
        # Catch a ClientError since AWS Service Errors are defined dynamically
        error = (FileNotFoundError, botocore.exceptions.ClientError)

        with pytest.raises(error, match="The specified bucket does not exist"):
            tips_df.to_parquet(
                "s3://an_s3_bucket_data_doesnt_exit/not_real.parquet",
>               storage_options=s3so,
            )
E           Failed: DID NOT RAISE (<class 'FileNotFoundError'>, <class 'botocore.exceptions.ClientError'>)

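For context on the "defined dynamically" comment in the test body: botocore generates service-specific exceptions such as NoSuchBucket at runtime from the service model, so they only exist on a live client (client.exceptions.NoSuchBucket). Code that cannot assume a client therefore catches the generic ClientError and inspects the error code. A rough sketch of that pattern (the helper name and the FileNotFoundError translation are illustrative, not pandas internals):

    import botocore.exceptions

    def to_csv_with_clear_error(df, path, storage_options):
        # Illustrative helper: translate the dynamically generated "NoSuchBucket"
        # service error into a plain FileNotFoundError, re-raise anything else.
        try:
            df.to_csv(path, storage_options=storage_options)
        except botocore.exceptions.ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchBucket":
                raise FileNotFoundError(path) from err
            raise
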
Comment From: phofl

The test has been failing since s3fs was upgraded to 0.5.2 in our pipelines. On master this was first the case for 396131a1d8bf6072e0d3c9c415928ede38882509; the commits before that had 0.4.2. I don't know why this only happened yesterday, since the package has been up on conda-forge for a month (https://anaconda.org/conda-forge/s3fs/files), but creating a fresh environment and downgrading s3fs to 0.4.2 locally makes the test pass, while it fails with 0.5.2.
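
One way to keep the assertion from breaking on the newer dependency would be to gate it on the installed s3fs version. This is a hedged sketch only (the 0.5.0 cutoff and the skipif approach are assumptions, not the fix that was applied):

    import botocore
    import pytest
    import s3fs
    from packaging import version

    # Assumption: with s3fs >= 0.5 the upload may be buffered, so the missing-bucket
    # error does not necessarily surface inside to_csv.
    S3FS_GE_05 = version.parse(s3fs.__version__) >= version.parse("0.5.0")

    @pytest.mark.skipif(S3FS_GE_05, reason="s3fs>=0.5 may not raise eagerly on a missing bucket")
    def test_write_s3_csv_fails(tips_df, s3so):
        error = (FileNotFoundError, botocore.exceptions.ClientError)
        with pytest.raises(error, match="The specified bucket does not exist"):
            tips_df.to_csv(
                "s3://an_s3_bucket_data_doesnt_exit/not_real.csv", storage_options=s3so
            )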

Comment From: phofl

I am removing the unreliable test, since this is a dependency issue.

Comment From: phofl

We are somehow pulling in s3fs=0.4.2 again, so the tests are passing now, which means the xfail itself now causes failures.
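
On the xfail point: if the marker is applied as strict (strict=True), a test that unexpectedly passes is reported as a failure (XPASS(strict)), so an environment that flips between s3fs 0.4.2 and 0.5.2 makes the marker itself break CI. A minimal illustration (not the pandas test itself):

    import pytest

    @pytest.mark.xfail(strict=True, reason="expected to fail with s3fs 0.5.x")
    def test_write_raises():
        # If the environment resolves s3fs 0.4.2 again, the original assertion
        # passes, and strict xfail turns that unexpected pass into a CI failure.
        assert True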

Comment From: jbrockmendel

Is there a relevant issue on the s3fs tracker?

Comment From: jbrockmendel

@martindurant Looks like you did a lot of work on these tests. Any idea how to fix them now?

Comment From: mroeschke

I think these have been fixed by running these tests in single-CPU builds, so closing.
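
For reference, the idea behind the single-CPU builds, as a hedged sketch (the single_cpu marker name follows current pandas conventions; the exact wiring may differ):

    import pytest

    # Assumption: tests marked this way are deselected from the pytest-xdist workers
    # and run in a dedicated non-parallel CI job, avoiding races against the shared
    # moto/S3 fixture used by these network tests.
    @pytest.mark.single_cpu
    class TestS3:
        def test_write_s3_csv_fails(self, tips_df, s3so):
            ...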