This commit introduces a new proxyToolCalls option for various chat models in the Spring AI project. When enabled, it allows the client to handle function calls externally instead of having them processed internally by Spring AI.

The change affects multiple chat model implementations, including:

- AnthropicChatModel
- AzureOpenAiChatModel
- MiniMaxChatModel
- MistralAiChatModel
- MoonshotChatModel
- OllamaChatModel
- OpenAiChatModel
- VertexAiGeminiChatModel
- ZhiPuAiChatModel

The proxyToolCalls option is added to the respective chat options classes and integrated into the AbstractToolCallSupport class for consistent handling across different implementations.

The proxyToolCalls option can be set either programmatically, via the ChatOptions.builder().withProxyToolCalls() method, or through the spring.ai.<model>.chat.options.proxy-tool-calls application property, where <model> is the provider-specific segment.
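As a concrete sketch of the property form (an assumption on my part: this uses the OpenAI starter, whose options live under `spring.ai.openai.chat.options`; substitute the segment for your provider, e.g. `anthropic` or `ollama`), the setting in `application.properties` would look like:

```properties
# Assumption: OpenAI starter; replace "openai" with your provider's
# property segment (e.g. anthropic, ollama, mistralai).
spring.ai.openai.chat.options.proxy-tool-calls=true
```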

Documentation for the new option is also updated in the relevant Antora pages.

Resolves #1367

Comment From: tzolov

Rebased, squashed and merged at 501774925c809fb47e02c73688092e46cdb78099

Comment From: JogoShugh

This is awesome. Thank you!

A few months ago, when I was just getting started with calling OpenAI, I was auto-generating JSON schemas by hand with Jackson's JsonSchemaGenerator. I read over all of this in Spring AI this morning, and after finding this issue, it now does it all for me perfectly.

The ability to get the arguments as JSON is critical for me: I want a "preview" flow where users can confirm a spoken command's AI-assisted translation to JSON, or hand-edit it, before I call my own "confirm" step with the payload.

If anyone needs this for Kotlin, here is how I got it built and tested:

Update build.gradle.kts

Set springAiVersion to the snapshot build found in the repo:

```kotlin
// extra["springAiVersion"] = "1.0.0-M2"
extra["springAiVersion"] = "1.0.0-20241008.115115-715"
```

Register a function

```kotlin
data class PrepareBed(
    val bedId: UUID,
    val name: String,
    val dimensions: Dimensions,
    val cellBlockSize: Int = 1
)

@Configuration
class BedFunctionsConfig(val objectMapper: ObjectMapper) {

    @Bean
    fun prepareBedCallback(repository: BedRepository): FunctionCallback =
        FunctionCallbackWrapper.builder { command: PrepareBed ->
                val bed = BedAggregate.of(command.bedId, command.name, command.dimensions, command.cellBlockSize)
                repository.addBed(bed)
                val resource = BedResourceWithCurrentState.from(bed)
                // resource.includeSchemas() // <-- This was my hand-rolled Jackson schema functionality
                resource
            }
            .withName("prepareBed")
            .withDescription("Prepare a garden bed")
            .withObjectMapper(objectMapper)
            .withInputType(PrepareBed::class.java)
            .build()
}
```
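The Dimensions type referenced above isn't shown in this thread. Inferred from the serialized payload later in this comment (columns/rows/height), a hypothetical reconstruction might look like this — the author's real class may well differ:

```kotlin
// Hypothetical reconstruction of the Dimensions type used by PrepareBed.
// Field names are inferred from the serialized tool-call payload shown
// further down in this comment; this is not the author's actual class.
data class Dimensions(
    val columns: Int,
    val rows: Int,
    val height: Int
)
```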

Implement a controller that calls Spring AI

```kotlin
@RestController
class BedCommandHandler(
    chatClientBuilder: ChatClient.Builder,
    private val mapper: ObjectMapper
) {
    private val client: ChatClient =
        chatClientBuilder
            .defaultAdvisors(
                SimpleLoggerAdvisor()
            )
            .build()

    @PostMapping("/api/beds/{bedId}/action")
    suspend fun action(
        @PathVariable bedId: UUID,
        @RequestParam("prompt") prompt: String
    ): Flux<String> {
        val systemMessage = SystemMessage(getSystemPrompt(bedId))
        val userMessage = UserMessage(prompt)

        val response = client.prompt()
            .options(
                FunctionCallingOptionsBuilder().withProxyToolCalls(true).build()
            )
            .functions("prepareBed")
            .messages(systemMessage, userMessage)
            .call()
            .chatResponse()

        val args = response.result.output.toolCalls[0].arguments

        val tree = mapper.readTree(args)
        val pretty = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(tree)
        return Flux.just("Does this look correct to you?\n\n$pretty")
    }
}
```

Observe the serialized payload in toolCalls

I issue a command to the API with `./function.sh "Prepare Venus bed 4 by 8, 1 foot high"`

Then, I can see it properly serializing the generated payload into toolCalls:

(screenshot: serialized payload in toolCalls)

As stated above, this will be absolutely outstanding for me because I want to be able to take these results as "previews" and present them to the user for Confirmation that the AI-interpreted translation of their spoken or written intention is accurate before calling the actual function to invoke the behavior.
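Since the confirm step needs the original arguments JSON back later, one way to wire such a preview/confirm flow is to stash the proxied tool-call arguments server-side under a token, return the token with the preview, and have the confirm endpoint retrieve the (possibly hand-edited) JSON before invoking the real function. A minimal sketch — the class and method names here are my own illustration, not Spring AI API:

```kotlin
import java.util.UUID
import java.util.concurrent.ConcurrentHashMap

// Illustrative in-memory store for a preview/confirm flow. Names here
// (PendingCommandStore, stage, confirm) are assumptions, not Spring AI API.
class PendingCommandStore {
    private val pending = ConcurrentHashMap<UUID, String>()

    // After proxyToolCalls hands back the raw arguments JSON, stash it
    // and return a token to show alongside the pretty-printed preview.
    fun stage(argumentsJson: String): UUID {
        val token = UUID.randomUUID()
        pending[token] = argumentsJson
        return token
    }

    // The "confirm" endpoint retrieves (and removes) the staged JSON,
    // then the caller invokes the real function with that payload.
    fun confirm(token: UUID): String? = pending.remove(token)
}
```

A production version would add expiry and per-user scoping, but the shape of the flow is the same.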

See the pretty printed API response preview (in my use case)

Of course, I'll format this for UI when I get there, but for now:

```
./function.sh "Prepare Venus bed 4 by 8, 1 foot high"

* Connected to localhost (::1) port 8080
> POST /api/beds/2fbda883-d49d-4067-8e16-2b04cc523111/action HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 200
< Content-Type: text/plain
< Transfer-Encoding: chunked
<

Does this look correct to you?

{
  "bedId" : "2fbda883-d49d-4067-8e16-2b04cc523111",
  "name" : "Venus",
  "dimensions" : {
    "columns" : 4,
    "rows" : 8,
    "height" : 1
  }
* Connection #0 to host localhost left intact
}
```

Thanks again, this is excellent stuff!