Bug description I have two functions that I have specified in Chat Options. I can see from the logs that these functions are being invoked by the LLM, but when I try to retrieve their details via the getToolCalls() method, an empty list is returned.

Code snippet -

    Prompt prompt = new Prompt(List.of(systemMessage, userMessage),
            OpenAiChatOptions.builder()
                    .withTemperature(0.7f)
                    .withModel("gpt-4o")
                    .withFunction("findPapers")
                    .withFunction("summarizePaper")
                    .withParallelToolCalls(false)
                    .build());

    Flux<ChatResponse> chatResponseStream = chatModel.stream(prompt);

    chatResponseStream
            .map(response -> response.getResult().getOutput().getToolCalls())
            .doOnNext(toolCalls -> {
                // Returns an empty list even when the function has actually been invoked.
                logger.info("Tool calls: {}", toolCalls);
            })
            .onErrorContinue((e, o) ->
                    logger.error("Error occurred while processing chat response", e))
            .subscribe();

Environment Spring AI version - 1.0.0-M2

Steps to reproduce Please see the sample code above.

Expected behavior Expecting the getToolCalls() method to return the list of functions invoked by the LLM.

Comment From: JogoShugh

How did you find logs that show you that? I am having the same problem. A few months ago I was auto-generating JSON schemas for some Kotlin data classes and populating them into my system message, but I figured I would try this Function support that auto-generates the schemas too and is baked into Spring AI formally... but I'm seeing the same issue as you and am troubleshooting now.
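For reference, a minimal sketch of that hand-rolled approach, assuming the victools jsonschema-generator library (which, as far as I can tell, Spring AI also uses internally for function schemas) and a hypothetical FindPapersRequest record standing in for the data classes; shown in Java to match the original report:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.github.victools.jsonschema.generator.OptionPreset;
    import com.github.victools.jsonschema.generator.SchemaGenerator;
    import com.github.victools.jsonschema.generator.SchemaGeneratorConfigBuilder;
    import com.github.victools.jsonschema.generator.SchemaVersion;

    public class SchemaInSystemMessage {

        // Hypothetical request type standing in for the data classes mentioned above.
        record FindPapersRequest(String topic, int maxResults) {}

        public static void main(String[] args) {
            SchemaGenerator generator = new SchemaGenerator(
                    new SchemaGeneratorConfigBuilder(SchemaVersion.DRAFT_2020_12, OptionPreset.PLAIN_JSON)
                            .build());

            JsonNode schema = generator.generateSchema(FindPapersRequest.class);

            // Embed the generated schema in the system message so the model
            // knows the exact shape of the arguments to produce.
            String systemText = """
                    When the user asks for papers, reply ONLY with JSON matching this schema:
                    %s
                    """.formatted(schema.toPrettyString());
            System.out.println(systemText);
        }
    }

The trade-off is that you own the prompt contract (and the parsing of the model's reply) yourself, which is exactly the visibility the built-in Function support takes away.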

Comment From: majian159

+1

Comment From: markpollack

The misconception is that the call response.getResult().getOutput().getToolCalls() would return the conversation that happened back and forth with the model for the tool requests; there can be multiple tool calls.

The only way now to gain access to that conversation is to take over control of the function calling yourself via the so-called 'proxy' feature in Spring AI. An example of getting close to the conversation is here.
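For anyone landing here, a minimal sketch of that proxy approach against the original report's setup, assuming the 1.0.0-M2-era API (withProxyToolCalls on OpenAiChatOptions.Builder and the AssistantMessage.ToolCall record); builder method names may differ slightly between milestones:

    Prompt prompt = new Prompt(List.of(systemMessage, userMessage),
            OpenAiChatOptions.builder()
                    .withModel("gpt-4o")
                    .withFunction("findPapers")
                    .withFunction("summarizePaper")
                    .withProxyToolCalls(true) // hand the tool-call round trip back to the caller
                    .build());

    ChatResponse response = chatModel.call(prompt);
    AssistantMessage assistantMessage = response.getResult().getOutput();

    // With proxying enabled, the assistant message carries the raw tool requests.
    for (AssistantMessage.ToolCall toolCall : assistantMessage.getToolCalls()) {
        logger.info("Model requested tool '{}' with arguments {}",
                toolCall.name(), toolCall.arguments());
    }

This trades the automatic tool execution Spring AI normally performs for full visibility into the tool-call conversation.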

That said, we would like to collect this information for you so you don't have to drop down so low into the code. To achieve that, we need to improve the ChatGenerationMetadata class to contain a hashmap. I've created an issue for this: #1722

Comment From: gaplo917

Similar to what @markpollack said, I experienced the same with Ollama in the sync case; you need to add this line to get control back.

        val prompt = Prompt(
            "what's news in kotlin 2.1",
            OllamaOptions.builder()
                .withFunctionCallbacks(listOf(getExternalKnowledge))
                .withToolContext(mapOf("userId" to "user123"))
+                .withProxyToolCalls(true)
                .build()
        )

When withProxyToolCalls(false), the Spring AI functions specified in Chat Options are not returned by the getToolCalls() method.

When withProxyToolCalls(true), you get control back, but you need to trigger the function on your own. The Spring AI functions specified in Chat Options are still not returned by the getToolCalls() method.
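In Java terms, continuing the proxy sketch from the earlier comment (reusing its prompt and assistantMessage variables), the manual trigger would look roughly like this; runFindPapers is a hypothetical local dispatch method, and ToolResponseMessage/ToolResponse are the message types this Spring AI generation uses for feeding tool output back:

    // Execute the requested function ourselves and return the result
    // to the model so it can produce the final answer.
    List<Message> history = new ArrayList<>(prompt.getInstructions());
    history.add(assistantMessage);

    List<ToolResponseMessage.ToolResponse> toolResponses = new ArrayList<>();
    for (AssistantMessage.ToolCall toolCall : assistantMessage.getToolCalls()) {
        String result = runFindPapers(toolCall.arguments()); // your own dispatch logic (hypothetical)
        toolResponses.add(new ToolResponseMessage.ToolResponse(
                toolCall.id(), toolCall.name(), result));
    }
    history.add(new ToolResponseMessage(toolResponses));

    // Second round trip: the model now sees the tool output and answers normally.
    ChatResponse finalResponse = chatModel.call(new Prompt(history, prompt.getOptions()));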

@markpollack There is also a bug in the token usage calculation when withProxyToolCalls(false): it doesn't count the tool-call usage. I'm not sure whether other platforms have the same behaviour.

Suggestion

Capture all the generations into the results list.