I've found a bug in read_sas where, on certain fields, 0.0 (floats) is read in as 5.397605e-79 (the smallest positive IBM float value).
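
For reference, that constant is exactly 16**-65: in IBM's hexadecimal floating-point format (7-bit exponent biased by 64, fraction in [1/16, 1)), the smallest positive normalized value is (1/16) * 16**(0-64). A quick check in plain Python (illustration only, nothing here is pandas-specific):

print(16.0 ** -65)                 # 5.397605346934028e-79
print(16.0 ** -65 == 2.0 ** -260)  # True; an exact power of two, so representable exactly as an IEEE double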

df1 = pd.read_sas('fname.xpt')
df1['column_1']

0         5.397605e-79
1         5.397605e-79
2         5.397605e-79
3         5.397605e-79
4         5.397605e-79
              ...
423663    8.000000e+01
423664    8.000000e+01
423665    8.000000e+01
423666    8.000000e+01
423667    8.000000e+01
Name: column_1, Length: 423668, dtype: float64

Here is the same file read in with the xport library (https://github.com/selik/xport) via to_dataframe:

with open('fname.xpt', 'rb') as f:
    df = xport.to_dataframe(f)
df['column_1']

0          0.0
1          0.0
2          0.0
3          0.0
4          0.0
          ...
423663    80.0
423664    80.0
423665    80.0
423666    80.0
423667    80.0
Name: column_1, Length: 423668, dtype: float64
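
For anyone hitting this on their own data, here is a sketch of how one could locate every mismatched column by diffing the two readers ('fname.xpt' is a placeholder, and this assumes both libraries are installed):

import pandas as pd
import xport

df_pandas = pd.read_sas('fname.xpt')
with open('fname.xpt', 'rb') as f:
    df_xport = xport.to_dataframe(f)

# Columns where the two readers disagree on at least one value.
# Series.equals treats NaNs in matching positions as equal; note that
# character columns may flag spuriously if one reader returns bytes
# while the other returns str.
mismatched = [col for col in df_pandas.columns
              if not df_pandas[col].equals(df_xport[col])]
print(mismatched)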

This only comes up occasionally, with a specific column in a specific file in a type of dataset that I work with. I probably will not be able to provide the example files, since these files always have proprietary information.

Let me know if there is any other information that would be useful to have.

INSTALLED VERSIONS
------------------
commit           : None
python           : 3.6.1.final.0
python-bits      : 64
OS               : Windows
OS-release       : 10
machine          : AMD64
processor        : Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder        : little
LC_ALL           : None
LANG             : None
LOCALE           : English_United States.1252

pandas           : 0.25.0
numpy            : 1.17.0
pytz             : 2019.2
dateutil         : 2.8.0
pip              : 9.0.1
setuptools       : 28.8.0
Cython           : None
pytest           : None
hypothesis       : None
sphinx           : None
blosc            : None
feather          : None
xlsxwriter       : None
lxml.etree       : 4.4.1
html5lib         : None
pymysql          : None
psycopg2         : 2.8.3 (dt dec pq3 ext lo64)
jinja2           : None
IPython          : None
pandas_datareader: None
bs4              : None
bottleneck       : None
fastparquet      : None
gcsfs            : None
matplotlib       : 3.1.1
numexpr          : None
odfpy            : None
openpyxl         : 2.6.2
pandas_gbq       : None
pyarrow          : None
pytables         : None
s3fs             : None
scipy            : 1.3.1
sqlalchemy       : None
tables           : None
xarray           : None
xlrd             : 1.2.0
xlwt             : None

Comment From: mroeschke

Sorry for the slow reply to this issue. We would need a minimal example file in order to sufficiently test this locally in our continuous integration, so we can confirm that this issue won't reappear once fixed (dummy data is fine).

Going to close in the meantime, but happy to reopen if we can get a minimal example file that reproduces this behavior.

Comment From: davidlorenzana

Hello, I found the same issue analyzing NHANES data. It happens, for example, when I open this file: https://wwwn.cdc.gov/Nchs/Nhanes/2017-2018/ALQ_J.XPT

Check column ALQ121 by running:

df = pd.read_sas("ALQ_J.XPT", format='xport')
df[df['ALQ121'] < 0.01]

Possible values are 0 to 10, 77, 99, or NaN, as specified here: https://wwwn.cdc.gov/Nchs/Nhanes/2017-2018/ALQ_J.htm#ALQ121, so 5.397605e-79 is anomalous.
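
A quick sanity check (a sketch, assuming ALQ_J.XPT has been downloaded next to the script) shows the out-of-range entries are all exactly the IBM constant:

import pandas as pd

df = pd.read_sas("ALQ_J.XPT", format='xport')
bad = df.loc[df['ALQ121'] < 0.01, 'ALQ121']
print(len(bad))                    # number of affected rows
print((bad == 16.0 ** -65).all())  # True if every near-zero entry is the misparsed constant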

Comment From: davidlorenzana

For now I have worked around it with this dirty patch:

df = df.replace(5.397605346934028e-79, 0)
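
If hard-coding the literal feels brittle, a slightly safer variant of the same idea (a sketch; the rtol value and the numeric-columns restriction are my own choices) matches within a relative tolerance instead of exact equality, and only touches numeric columns:

import numpy as np

IBM_MIN = 16.0 ** -65  # 5.397605346934028e-79, smallest positive normalized IBM hex float
num_cols = df.select_dtypes('number').columns
# atol=0.0 so that legitimate small values (and real zeros) are not swept up
# by the default absolute tolerance of np.isclose
df[num_cols] = df[num_cols].mask(np.isclose(df[num_cols], IBM_MIN, rtol=1e-9, atol=0.0), 0.0)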

Comment From: briandamaged

This issue can also be observed in the 2019 Survey of Consumer Finances when using pandas==1.5.2:

https://www.federalreserve.gov/econres/files/scf2019x.zip

After you unzip the file, you can recreate the issue as follows:

import pandas as pd

everything = pd.read_sas("./scf2019x", format='xport')
df = everything[['X3509']]

df[df['X3509'] < 0.0001]

This produces the following output:

>>> df[df['X3509'] < 0.0001]
              X3509
0      5.397605e-79
1      5.397605e-79
2      5.397605e-79
3      5.397605e-79
4      5.397605e-79
...             ...
28865  5.397605e-79
28866  5.397605e-79
28867  5.397605e-79
28868  5.397605e-79
28869  5.397605e-79

[14252 rows x 1 columns]

@mroeschke : Should this ticket be reopened, or would it be better to create a new ticket?

Comment From: mroeschke

This would be better as a new ticket.