Expected Behavior
I would like to be able to extract SafetyRatings from the ChatResponse returned by VertexAiGeminiChatClient.call(Prompt prompt). GenerateContentResponse contains a list of SafetyRatings on each Candidate, so the data is already obtained by the library.
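For reference, this is where the data already lives in the underlying Vertex AI Java SDK: each Candidate in the GenerateContentResponse carries its own list of SafetyRatings. A minimal sketch (the project ID, location, and model name below are placeholders; only the response accessors matter here):

```java
import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Candidate;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.SafetyRating;
import com.google.cloud.vertexai.generativeai.GenerativeModel;

public class SafetyRatingsExample {

    public static void main(String[] args) throws Exception {
        try (VertexAI vertexAi = new VertexAI("my-project", "us-central1")) {
            GenerativeModel model = new GenerativeModel("gemini-pro", vertexAi);

            GenerateContentResponse response = model.generateContent("Tell me a joke");

            // Each candidate carries its own list of safety ratings.
            for (Candidate candidate : response.getCandidatesList()) {
                for (SafetyRating rating : candidate.getSafetyRatingsList()) {
                    System.out.printf("%s -> %s%n",
                            rating.getCategory(), rating.getProbability());
                }
            }
        }
    }
}
```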
Current Behavior
VertexAiGeminiChatClient.call(Prompt prompt) returns a ChatResponse that does not carry the SafetyRatings returned by Gemini.
Context
I'm not sure how many LLM providers return similar information, but it would be a useful feature to store it in a generic format in the Generation objects of ChatResponse, instead of losing it in this abstraction layer. A sketch of one possible mapping follows.
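One hypothetical way to keep the data generic would be to attach each candidate's safety ratings to the Generation's metadata when the client maps the Gemini response. The class and method names below are from the Spring AI 0.8.x-era API and the Vertex AI Java SDK as I understand them; this is not the current behavior of VertexAiGeminiChatClient, just a sketch of what the mapping could look like:

```java
import java.util.List;
import java.util.stream.Collectors;

import com.google.cloud.vertexai.api.Candidate;
import com.google.cloud.vertexai.api.GenerateContentResponse;

import org.springframework.ai.chat.ChatResponse;
import org.springframework.ai.chat.Generation;
import org.springframework.ai.chat.metadata.ChatGenerationMetadata;

class SafetyRatingMapping {

    // Hypothetical mapping: convert a Gemini response into a ChatResponse
    // without dropping the per-candidate safety ratings.
    static ChatResponse toChatResponse(GenerateContentResponse response) {
        List<Generation> generations = response.getCandidatesList().stream()
                .map(SafetyRatingMapping::toGeneration)
                .collect(Collectors.toList());
        return new ChatResponse(generations);
    }

    private static Generation toGeneration(Candidate candidate) {
        String text = candidate.getContent().getParts(0).getText();
        // Keep the ratings in the generic metadata slot so callers can read
        // them back without depending on Gemini-specific types at call sites.
        return new Generation(text)
                .withGenerationMetadata(ChatGenerationMetadata.from(
                        candidate.getFinishReason().name(),
                        candidate.getSafetyRatingsList()));
    }
}
```

Whether the ratings belong in the generation metadata, the response metadata, or a dedicated field is exactly the design question this issue raises.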
Comment From: markpollack
Super important! The responses need a review and also tests for serialization.