When tool calling is used, the Chat Model's ChatResponse metadata does not accumulate the usage metrics (prompt, completion, and total tokens) across all the chat responses involved in the exchange, including the intermediate tool-calling round trips.
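
Below is a minimal sketch of the expected behaviour from the caller's side. It assumes the Spring AI `ChatModel` / `ChatResponse` / `Usage` API as of the 1.0.0 milestone releases (accessor names such as `getGenerationTokens()` may differ in later versions), and it elides the provider-specific tool/function registration; the class and method names in the sketch are illustrative only.

```java
import org.springframework.ai.chat.metadata.Usage;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;

class UsageAccumulationCheck {

    // chatModel is assumed to be one of the implementations listed below
    // (e.g. OpenAiChatModel), configured with at least one registered
    // tool/function so the call triggers a tool-calling round trip.
    static void printAggregatedUsage(ChatModel chatModel) {
        ChatResponse response = chatModel.call(new Prompt("What is the weather in Paris?"));

        Usage usage = response.getMetadata().getUsage();
        // With the fix, these figures should cover every request made during
        // the tool-calling flow (the initial request plus the follow-up request
        // that carries the tool result), not just the final response.
        System.out.println("prompt tokens: " + usage.getPromptTokens());
        System.out.println("generation tokens: " + usage.getGenerationTokens());
        System.out.println("total tokens: " + usage.getTotalTokens());
    }
}
```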

This epic tracks the fix needed in each of the supported chat models.

  • [x] AnthropicChatModel : https://github.com/spring-projects/spring-ai/pull/1918
  • [x] AzureOpenAiChatModel : https://github.com/spring-projects/spring-ai/pull/1916
  • [x] BedrockProxyChatModel : https://github.com/spring-projects/spring-ai/issues/1743
  • [x] OpenAiChatModel : https://github.com/spring-projects/spring-ai/pull/1872
  • [ ] MiniMaxChatModel
  • [x] MistralAiChatModel : https://github.com/spring-projects/spring-ai/pull/1905
  • [ ] MoonshotChatModel
  • [x] OllamaChatModel
  • [ ] VertexAiGeminiChatModel
  • [ ] ZhiPuAiChatModel

Comment From: ilayaperumalg

This is a duplicate of https://github.com/spring-projects/spring-ai/issues/1800, hence closing this issue as invalid.