I am encountering an issue where I am unable to retrieve the reasoning ("thinking") output of the DeepSeek-R1 model during inference. Despite following the standard procedure, the system fails to fetch or display the expected results. I have verified both the setup and the model, but the issue persists. Could you please help investigate the cause and suggest possible solutions?
Comment From: kevintsai1202
Recently, the R1 API has frequently timed out and ultimately returned an empty string. You might be running into that.
Comment From: ZYMCao
I encountered the same issue 2lianna mentioned. I think it boils down to the fact that, since the release of the DeepSeek reasoning models, responses include the inference process ("thinking") from the model in addition to the usual "content".
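Concretely, with DeepSeek's OpenAI-compatible chat API the reasoning text arrives in a field separate from the final answer. Here is a minimal sketch of pulling both out of a response payload; the `reasoning_content` field name follows DeepSeek's R1 API documentation, and the sample payload here is fabricated for illustration — verify against the actual response you receive:

```python
import json

# Sample chat-completion payload shaped like DeepSeek's OpenAI-compatible
# API for the reasoning model (illustrative only, not a real response).
raw = json.dumps({
    "choices": [{
        "message": {
            "role": "assistant",
            "reasoning_content": "First, consider what the user is asking...",
            "content": "Here is the final answer."
        }
    }]
})

response = json.loads(raw)
message = response["choices"][0]["message"]

# The "thinking" text is separate from the final answer, so a client that
# only reads `content` will silently drop the inference process.
thinking = message.get("reasoning_content", "")
answer = message["content"]

print("THINKING:", thinking)
print("ANSWER:", answer)
```

A client that only maps the `content` field (as most pre-R1 integrations do) will therefore appear to "lose" the inference process even though the API returned it.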
The same is true even for ChatGPT if we click the Reason button: