I came across some confusing/inconsistent behaviour while making adjustments to geopandas to bring it more in line with pandas' behaviour.

It seems that when using a custom search path, inserting a table fails if a table with that name exists in a schema further down the search_path than the first schema. This may sound complex, but put simply, the search_path controls the order in which schemas are searched for tables that are referenced without an explicit schema qualifier. If your session's search_path is set to `first,second`, a query like `SELECT * FROM tbl;` will first look for a table called `first.tbl` and, if nothing is found, fall back to `second.tbl`, and so on. `CREATE` statements, on the other hand, always target the first schema in the search_path (the SQL snippet near the end of this post demonstrates both behaviours).

So, let's make a connection, set up two schemas, `first` and `second`, with a search_path of `first,second`, and then put some data into a table called `test` in the `second` schema:

```python
import pandas as pd
import sqlalchemy
from sqlalchemy.engine.url import URL

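# user, dbname, password, host and port are placeholders for your own connection details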
con = sqlalchemy.create_engine(
    URL.create(
        drivername="postgresql+psycopg2",
        username=user,
        database=dbname,
        password=password,
        host=host,
        port=port,
    ),
    connect_args={'options': '-csearch_path=first,second,public'}  # Set search_path at session level
)

# Set up schemas
con.execute("drop schema if exists first cascade; create schema first;")
con.execute("drop schema if exists second cascade; create schema second;")

# Create the initial table in the "second" schema for the sake of demonstration
data = [['tom', 10], ['nick', 15], ['juli', 14]]
df = pd.DataFrame(data, columns=['Name', 'Age'])
df.to_sql("test", con, schema="second")

```
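As a sanity check that the session-level option took effect, we can ask PostgreSQL directly (`SHOW search_path` is plain PostgreSQL, run through the same engine):

```python
# Confirm the search_path set via connect_args is active for this session
print(con.execute("show search_path").scalar())
# -> first,second,public
```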

Now let's try calling `.to_sql("test", con)` in various ways:

```python
df.to_sql("test", con, if_exists="append")  # Data is appended to the table in "second" schema
df.to_sql("test", con, if_exists="replace")  # Throws sqlalchemy.exc.InvalidRequestError
df.to_sql("test", con, if_exists="fail")  # Throws ValueError

Using `if_exists="append"` appends the data to the table in the `second` schema, which may be the intended behaviour; I don't really know. Using `if_exists="replace"`, however, fails, which is not consistent with that. `if_exists="fail"` also fails, which is at least consistent with the `append` behaviour.
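A quick way to see where the mismatch comes from is SQLAlchemy's own existence check (a sketch using `sqlalchemy.inspect`, which is how SQLAlchemy 1.4 exposes introspection; this is not necessarily the exact code path pandas takes internally):

```python
insp = sqlalchemy.inspect(con)
# With no schema given, the PostgreSQL dialect reports visibility along
# the search_path, so the table in "second" counts as existing...
print(insp.has_table("test"))                  # True
# ...but the name is free in the schema an unqualified CREATE would target
print(insp.has_table("test", schema="first"))  # False
```

So the name is reported as existing while any unqualified `CREATE TABLE` would land in `first`, which is presumably where the conflicting code paths diverge.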

If the table does not exist, it is created in the `first` schema, as expected:

con.execute("drop table second.test;")
df.to_sql("test", con)  # Table appears in "first" schema

So what is the intended behaviour here? Appending to a lower-priority table (according to the search_path) matches what happens in a PostgreSQL shell. But trying to create a new table with the same name would put that table in the highest-priority schema in the search_path:

```sql
set search_path to first,second;
create table second.test(id integer);
insert into test values (0);    -- inserts into second.test (same behaviour as pandas)
create table test(id integer);  -- creates first.test; pandas instead throws
                                -- InvalidRequestError or ValueError, as if it were
                                -- trying to replace the table in the second schema
```

Why don't you just use the `schema` parameter?

Indeed, that removes all ambiguity. But it is possible to receive a SQLAlchemy connection object that comes with a carefully configured search_path, whether set at the session level or the role level. It may well be desirable for the Python script to work with a specific search_path that is not explicitly known ahead of time; the script then runs in a kind of "context" defined by the search_path. I know this sounds contrived, but I have a workflow that does exactly this, and it is what prompted me to investigate. I find the behaviour inconsistent enough that I don't believe simply specifying the schema parameter is a sufficient answer.
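For completeness, the workaround I have been using is to resolve the effective search_path at runtime and pass the result as `schema` explicitly (a sketch; `current_schemas(false)` is a standard PostgreSQL function that returns the schemas on the effective search path, first element first, silently omitting any that don't exist):

```python
# Resolve the schema an unqualified CREATE would target, then make it explicit
target_schema = con.execute("select (current_schemas(false))[1]").scalar()
df.to_sql("test", con, schema=target_schema, if_exists="replace")
```

This keeps the script driven by whatever search_path the connection came with, while making pandas and PostgreSQL agree on the target schema.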