### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] I have confirmed this bug exists on the main branch of pandas.
### Reproducible Example
```python
import pandas as pd
from io import StringIO

# With a single row of data, this raises a ParserError:
pd.read_csv(StringIO("1,2\n"), names=['a', 'b', 'c'])

# With two rows of data, this does NOT raise a ParserError ("c" column values are NaN):
pd.read_csv(StringIO("1,2\n4,5\n"), names=['a', 'b', 'c'])
```
### Issue Description
In pandas 1.3.4, the following code parsed the dataframe as a single row without issue, with the value in column `c` as `NaN`:

```python
import pandas as pd
from io import StringIO

pd.read_csv(StringIO("1,2\n"), names=['a', 'b', 'c'])
```

In pandas 1.5.1, this raises `ParserError: Too many columns specified: expected 3 and found 2`, which is an understandable exception. However, also in pandas 1.5.1, if multiple rows are being parsed, it does not raise a `ParserError` and treats all values in column `c` as `NaN` (as in pandas 1.3.4):

```python
pd.read_csv(StringIO("1,2\n4,5\n"), names=['a', 'b', 'c'])
```
### Expected Behavior
`read_csv` should behave consistently when the length of `names` exceeds the number of parsed columns, regardless of the number of rows being parsed: either fill the missing columns with `NaN`/null consistently (as in pandas 1.3.4), or raise a `ParserError` consistently.
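One version-independent workaround (a sketch, not a pandas API; `read_csv_padded` is a hypothetical helper introduced here) is to avoid passing `names` to `read_csv` entirely, and instead name and pad the columns afterwards with `reindex`:

```python
import pandas as pd
from io import StringIO

def read_csv_padded(buf, names):
    """Read a CSV without forcing `names`, then pad missing trailing
    columns with NaN. Assumes the file has at most len(names) columns."""
    df = pd.read_csv(buf, header=None)
    df.columns = names[: df.shape[1]]   # name only the columns actually parsed
    return df.reindex(columns=names)    # any missing names become NaN columns

# Behaves the same for one row...
one = read_csv_padded(StringIO("1,2\n"), ['a', 'b', 'c'])
# ...and for two rows: column 'c' is all NaN in both cases.
two = read_csv_padded(StringIO("1,2\n4,5\n"), ['a', 'b', 'c'])
```

This gives the pandas 1.3.4 behavior (missing columns filled with `NaN`) regardless of row count or pandas version.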
### Installed Versions
Comment From: vamsi-verma-s
Fixed in https://github.com/pandas-dev/pandas/pull/49182.