I added an OllamaChatRequestOptions class and wired it into call() and stream() in OllamaChatClient. It's optional and won't break existing code, but it provides a couple of new options that I find very convenient.

The keep-alive option keeps your model loaded in Ollama for the duration you specify. The format option was partially present already, but not fully wired up, so there was no way to actually use it from an application. It tells Ollama to return JSON output instead of free-form chat text.
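For context, in the underlying Ollama `/api/chat` REST API, `keep_alive` and `format` are top-level request fields, separate from the model `options` block. A minimal sketch of assembling such a payload in plain Java (the helper and payload shape here are illustrative, not the Spring AI code):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OllamaRequestSketch {

    // Build a minimal /api/chat payload; keep_alive and format are
    // top-level fields in the Ollama REST API, not model options.
    static Map<String, Object> chatRequest(String model, String prompt,
                                           String keepAlive, String format) {
        Map<String, Object> request = new LinkedHashMap<>();
        request.put("model", model);
        request.put("messages", List.of(Map.of("role", "user", "content", prompt)));
        if (keepAlive != null) {
            // e.g. "5m" (Ollama's default) or "1h"
            request.put("keep_alive", keepAlive);
        }
        if (format != null) {
            // "json" asks Ollama to emit valid JSON only
            request.put("format", format);
        }
        return request;
    }

    public static void main(String[] args) {
        Map<String, Object> req = chatRequest("llama2", "Say hi as JSON", "1h", "json");
        System.out.println(req.get("keep_alive")); // 1h
        System.out.println(req.get("format"));     // json
    }
}
```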

Comment From: tzolov

Hey @scionaltera , Thanks for adding support for the missing, additional properties.

I believe though that we can reuse the existing OllamaOptions class rather than creating a new one. This has multiple benefits, as it allows auto-configuration property setting as well as runtime options.

So, IMO we can add the additional Ollama properties to the existing OllamaOptions.java (updating the builder) and use it in the ChatClient to initialize the ChatRequest (feel free to extend the ChatRequest with the missing properties). But then we have to make sure that those new properties added to OllamaOptions are not passed as model options. For this you need to extend filterNonSupportedFields to filter them out before the options are passed to the request.
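The filtering idea above can be sketched as follows (the field set and method shape are hypothetical, mirroring only the intent of filterNonSupportedFields, not the actual Spring AI implementation): request-level keys are dropped from the options map before it is handed to the model options of the request.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class OptionsFilterSketch {

    // Hypothetical set of request-level fields that must NOT be
    // forwarded to Ollama as model options.
    static final Set<String> NON_MODEL_FIELDS = Set.of("model", "keep_alive", "format");

    // Return a copy of the options with request-level fields removed.
    static Map<String, Object> filterNonSupportedFields(Map<String, Object> options) {
        Map<String, Object> filtered = new HashMap<>(options);
        filtered.keySet().removeAll(NON_MODEL_FIELDS);
        return filtered;
    }

    public static void main(String[] args) {
        Map<String, Object> all = new HashMap<>();
        all.put("temperature", 0.7);
        all.put("keep_alive", "1h"); // request-level, filtered out
        all.put("format", "json");   // request-level, filtered out
        System.out.println(filterNonSupportedFields(all).keySet()); // [temperature]
    }
}
```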

I hope this makes sense. Let me know if you would be interested to give it a try. Otherwise I will do it myself.

Comment From: scionaltera

No problem, I thought that might be the desired approach but I hesitated because the parameters I was working with seemed to have very little to do with the ones in OllamaOptions. I'm happy to refactor it.

Comment From: tzolov

Thanks @scionaltera, I agree that filterNonSupportedFields reveals a design flaw; there are similar issues with some of the other model options. We are exploring the possibility of splitting the ModelOptions into sub-categories (e.g. model-related and non-model-related, such as request params and function calling configs).
But for now we have to cope with the existing limitations.

Comment From: tzolov

@scionaltera, can you please pull/rebase your code on top of upstream/main without merging (e.g. git pull -r upstream/main)? Otherwise, you can open a new PR with your latest code on top of upstream/main.

Comment From: tzolov

Squashed and merged at: aca8ffd400fd3e3851aec8dc6705c6613d456073

Additionally:
- add support for the template advanced parameter
- update the Ollama docs with the new config parameters

Comment From: tzolov

Thank you again @scionaltera . For consistency I added the template parameter as well and updated the docs. What value for the keep-alive are you commonly using? I see that it is 5 min by default.

Comment From: scionaltera

I've been setting it to 1 hour. That's long enough that I can go make changes in my code and the model is still there when I'm ready to run it again.