This may or may not be considered a bug, but I do wonder what the expected behavior should be.

Put simply, when using SafeGuardAroundAdvisor, if the advisor detects a sensitive word in the prompt, it returns a ChatResponse with an empty generations list. Because the list is empty, ChatResponse's getResult() returns null (see https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/model/ChatResponse.java#L77-L79 ). And because getResult() returns null, the content() method throws a NullPointerException at https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/client/DefaultChatClient.java#L393 .
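To make the failure path concrete, here is a minimal, self-contained sketch. These are simplified stand-ins, not the real `org.springframework.ai` classes, but they model the two linked behaviors: getResult() returning null on an empty generations list, and content() dereferencing that null result.

```java
import java.util.List;

// Simplified stand-in for org.springframework.ai.chat.model.Generation
class Generation {
    private final String output;
    Generation(String output) { this.output = output; }
    String getOutput() { return output; }
}

// Simplified stand-in for ChatResponse: getResult() returns null
// when the generations list is empty, mirroring the linked code.
class ChatResponse {
    private final List<Generation> generations;
    ChatResponse(List<Generation> generations) { this.generations = generations; }

    Generation getResult() {
        if (generations == null || generations.isEmpty()) {
            return null;
        }
        return generations.get(0);
    }
}

public class SafeGuardNpeDemo {
    public static void main(String[] args) {
        // SafeGuardAroundAdvisor returns a response with no generations
        ChatResponse blocked = new ChatResponse(List.of());

        // content() effectively does getResult().getOutput(), so a null
        // result from getResult() produces a NullPointerException here.
        try {
            String content = blocked.getResult().getOutput();
            System.out.println(content);
        } catch (NullPointerException e) {
            System.out.println("NPE: content() dereferenced a null result");
        }
    }
}
```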

Is this the expected behavior? It seems wrong to me. Perhaps, instead of returning an empty list, SafeGuardAroundAdvisor could return a default response stating that a response can't be generated because of safeguards. That default could be overridden by providing an alternate message when creating the SafeGuardAroundAdvisor.

Alternatively (and arguably better), instead of returning an empty list or a canned response, have it throw a more meaningful exception, say something like SafeGuardException.
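A rough sketch of that alternative, with entirely hypothetical names (SafeGuardException and SafeGuardCheck are this proposal's suggestions, not existing Spring AI types): the advisor would throw before the call ever reaches the model, so the caller can catch and handle the block explicitly instead of hitting an NPE later.

```java
import java.util.List;

// Hypothetical exception type suggested above; not an existing Spring AI class.
class SafeGuardException extends RuntimeException {
    SafeGuardException(String message) { super(message); }
}

// Minimal sketch of the proposed behavior: throw on a sensitive word
// instead of returning a ChatResponse with an empty generations list.
class SafeGuardCheck {
    private final List<String> sensitiveWords;
    SafeGuardCheck(List<String> sensitiveWords) { this.sensitiveWords = sensitiveWords; }

    String advise(String prompt) {
        for (String word : sensitiveWords) {
            if (prompt.contains(word)) {
                throw new SafeGuardException(
                        "Prompt blocked: contains sensitive word \"" + word + "\"");
            }
        }
        return prompt; // pass the prompt through to the model unchanged
    }
}

public class SafeGuardExceptionDemo {
    public static void main(String[] args) {
        SafeGuardCheck check = new SafeGuardCheck(List.of("password"));
        try {
            check.advise("What is the admin password?");
        } catch (SafeGuardException e) {
            // The caller decides how to surface the block to the user
            System.out.println(e.getMessage());
        }
    }
}
```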

I'll be happy to submit a pull request for such a change, but before I do, I want to make sure that I understand what the best behavior should be when encountering a sensitive word.

Comment From: tzolov

Thank you @habuma

This https://github.com/spring-projects/spring-ai/pull/1505 addresses your suggestions. It also lets you configure your own response message and control the advisor order. There is a companion Builder to help configure those if needed.
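A self-contained sketch of the builder-style configuration the comment describes. All names here (SafeGuardConfig, sensitiveWords, failureResponse, order) are illustrative assumptions; the actual API shipped in PR #1505 may differ, so consult the PR for the real class and method names.

```java
import java.util.List;

// Hypothetical configuration object with a companion Builder, modeling the
// configurable failure response and advisor order the comment mentions.
class SafeGuardConfig {
    final List<String> sensitiveWords;
    final String failureResponse;
    final int order;

    private SafeGuardConfig(Builder b) {
        this.sensitiveWords = b.sensitiveWords;
        this.failureResponse = b.failureResponse;
        this.order = b.order;
    }

    static Builder builder() { return new Builder(); }

    static class Builder {
        private List<String> sensitiveWords = List.of();
        private String failureResponse = "Unable to respond because of safeguards.";
        private int order = 0;

        Builder sensitiveWords(List<String> words) { this.sensitiveWords = words; return this; }
        Builder failureResponse(String response) { this.failureResponse = response; return this; }
        Builder order(int order) { this.order = order; return this; }
        SafeGuardConfig build() { return new SafeGuardConfig(this); }
    }
}

public class SafeGuardBuilderDemo {
    public static void main(String[] args) {
        SafeGuardConfig config = SafeGuardConfig.builder()
                .sensitiveWords(List.of("password", "ssn"))
                .failureResponse("Blocked by safeguards.")
                .order(10)
                .build();
        System.out.println(config.failureResponse);
    }
}
```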