The OpenAI Moderation API is a tool you can use to check whether text is potentially harmful.
This PR contains:
- Moderation API
- Moderation Client
- Moderation properties
- Tests
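
For context, the underlying endpoint this client wraps is OpenAI's `POST /v1/moderations`. A minimal sketch of a raw call using `java.net.http` (the Spring abstraction proposed in this PR would sit on top of a request like this):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RawModerationCall {

    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY");

        // The moderation endpoint accepts the text to classify and returns
        // per-category flags (hate, violence, self-harm, etc.).
        String body = "{\"input\": \"Sample text to check for harmful content\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/moderations"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON response contains a "results" array with "flagged",
        // "categories", and "category_scores" fields.
        System.out.println(response.body());
    }
}
```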
Comment From: tzolov
@ricken07 if I'm not mistaken, there is already a similar PR: https://github.com/spring-projects/spring-ai/pull/333 by @hemeda3. Moderation has high priority on our todo list, but we need to figure out what high-level abstraction to provide that can be used across the various models.
Comment From: ThomasVitale
The OpenAI implementation of the ModerationModel interface has been delivered in https://github.com/spring-projects/spring-ai/commit/189468127c234ced9256f951023fc8fb29b08da2
Documentation: https://docs.spring.io/spring-ai/reference/api/moderation/openai-moderation.html
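
For reference, a usage sketch of the delivered abstraction based on the documentation linked above (class and method names follow those docs and may differ slightly in the current release):

```java
import org.springframework.ai.moderation.Moderation;
import org.springframework.ai.moderation.ModerationPrompt;
import org.springframework.ai.moderation.ModerationResponse;
import org.springframework.ai.openai.OpenAiModerationModel;
import org.springframework.ai.openai.api.OpenAiModerationApi;

public class ModerationExample {

    public static void main(String[] args) {
        // Low-level OpenAI moderation API client and the ModerationModel implementation on top of it.
        OpenAiModerationApi moderationApi = new OpenAiModerationApi(System.getenv("OPENAI_API_KEY"));
        OpenAiModerationModel moderationModel = new OpenAiModerationModel(moderationApi);

        // Wrap the text to check in a ModerationPrompt and call the model.
        ModerationResponse response = moderationModel.call(new ModerationPrompt("Text to be checked"));

        // The output carries per-category flags and scores for the moderated input.
        Moderation moderation = response.getResult().getOutput();
        moderation.getResults().forEach(result ->
                System.out.println("Flagged: " + result.isFlagged()));
    }
}
```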
I think this issue can be closed now.
Comment From: markpollack
Closing, sorry for the cross talk/overlap between PRs. Hope to avoid this in the future by staying more on top of the PRs.