This chain implementation is roughly equivalent to Langchain's LLMChain, although it does not (yet) have asynchronous implementations of its apply() methods.

The test shows a basic use of this chain, but note that it can also be declared as a bean...

    @Bean
    AiChain llmChain(AiClient aiClient, @Value("classpath:/chat-prompt.st") Resource prompt) {
        PromptTemplate promptTemplate = new PromptTemplate(prompt);
        return new AiChain(
                aiClient,
                promptTemplate,
                "data",
                new StringOutputParser());
    }

...and then injected into any bean where it's needed.

Also note that although the Chain implementation extends Function<AiInput, AiOutput>, which means any implementation must implement AiOutput apply(AiInput), I chose to also overload apply() with a variant that takes the input variables as a Map and returns a String for simpler usage. This is not only a convenience (much like the overloaded generate() method in AiClient), but is also consistent with how Langchain chain implementations expose two methods, one that returns an LLMResult and one that returns a simple string.
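
For illustration, a minimal sketch of such an injection might look like the following, assuming the chat-prompt.st template declares a {subject} placeholder (the service and variable names here are hypothetical):

    @Service
    class ChatService {

        private final AiChain llmChain;

        ChatService(AiChain llmChain) {
            this.llmChain = llmChain;
        }

        String chat(String subject) {
            // Hypothetical template variable; the Map-based apply() overload fills the
            // prompt template, calls the AiClient, and returns the parsed output as a String.
            return llmChain.apply(Map.of("subject", subject));
        }
    }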

Comment From: markpollack

This PR and the other one for ChatConversationChain made me revisit the whole 'chain' idea, and I'd like to abandon it. I'd like to follow an approach of 'helper classes' that take care of a specific use case rather than getting involved with workflow. The first of the two or three helper classes I can see coming out of this is a GenerationTemplate, which has a structure similar to JdbcTemplate: it makes the use of the three collaborating objects (AiClient, Prompt, OutputParser) a one-liner in many cases. As an example:

    String generate(String message);

    String generate(String message, Map<String, Object> model);

    String generate(PromptTemplate promptTemplate, Map<String, Object> model);

    <T> T generate(String message, Class<T> elementType, Map<String, Object> model);

    <T> T generate(String message, OutputParser<T> parser, Map<String, Object> model);

    <T> T generate(PromptTemplate promptTemplate, OutputParser<T> parser, Map<String, Object> model);

and other methods that return List<T>.
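
Since Java cannot overload on return type alone, those List-returning variants would presumably need either a distinct name or an extra type token; a hypothetical sketch (generateList is not part of the proposal above):

    <T> List<T> generateList(String message, Class<T> elementType, Map<String, Object> model);

    <T> List<T> generateList(PromptTemplate promptTemplate, Class<T> elementType, Map<String, Object> model);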

Example usage

    ActorsFilms actorsFilms = generateTemplate.generate("Generate the filmography for the actor {actor}",
            ActorsFilms.class, Map.of("actor", "Tom Hanks"));
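
For the example to compile, ActorsFilms would just be a simple data holder that the parsed output is bound to; a hypothetical shape:

    record ActorsFilms(String actor, List<String> movies) {}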

Simple "chains" for flows can be done with standard Java functional programming, and we can see how our needs for a "chain" or a "flow" evolve over time. e.g. the example of a chain from langchain using functional programming

    @Test
    void functionalChains() {
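        // Compose the two steps: the generated synopsis is fed into the review prompt.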
        Function<String, String> combinedFunction = generateSynopsis.andThen(generateReview);
        System.out.println(combinedFunction.apply("Tragedy at sunset on the beach"));
    }

    private Function<String, String> generateSynopsis = title -> {
        String synopsisInput = """
                You are a playwright. Given the title of play, it is your job to write a synopsis for that title.

                Title: {title}
                Playwright: This is a synopsis for the above play:""";

        return generateTemplate.generate(synopsisInput, Map.of("title", title));
    };

    private Function<String, String> generateReview = synopsis -> {
        String synopsisInput = """
                You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

                Play Synopsis:
                {synopsis}
                Review from a New York Times play critic of the above play:""";

        return generateTemplate.generate(synopsisInput, Map.of("synopsis", synopsis));
    };

Comment From: markpollack

The other 'helper' classes would be a ChatEngine and a SummaryIndex inspired by llamaindex; the new Pinecone Canopy project also created a similar ChatEngine for use with RAG.

See:

* https://ts.llamaindex.ai/end_to_end
* https://docs.llamaindex.ai/en/stable/examples/index_structs/doc_summary/DocSummary.html
* https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/root.html
* https://github.com/pinecone-io/canopy/blob/main/src/canopy_server/app.py

Comment From: markpollack

Closing in favor of https://github.com/spring-projects-experimental/spring-ai/issues/108