Added support for OpenAI's content policy moderation API #333
Conversation
Force-pushed from a471c0e to 06070e7
Force-pushed from 06070e7 to dd9e217
Hi @hemeda3, thank you for your contribution. Safety moderation is something we've been considering for a while but have not yet tackled. Since Spring AI supports multiple LLM providers and their APIs, it is important for us to come up with a design that generalises and unifies (as much as possible) features such as this one. Most LLM providers offer some sort of safety policy support, for example:

- Google's Vertex AI Gemini and PaLM2
- Microsoft Azure OpenAI
- Amazon Bedrock Guardrails
- OpenAI

@markpollack what do you think?
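To make the design question above concrete, here is a hypothetical sketch (not the actual Spring AI API) of how a provider-neutral moderation abstraction could mirror the generic Model API's request/response pattern. All type names below (`ModerationModel`, `ModerationPrompt`, and so on) are assumptions for illustration, not real Spring AI types.

```java
import java.util.List;

public class ModerationSketch {

    // Generic request carrying the text to classify.
    record ModerationPrompt(String text) {}

    // One per-category verdict, e.g. ("violence", false, 0.01).
    record CategoryResult(String category, boolean flagged, double score) {}

    // Generic response: overall verdict plus per-category detail.
    record ModerationResponse(boolean flagged, List<CategoryResult> categories) {}

    // Each provider (OpenAI moderation, Bedrock Guardrails, Azure content
    // filtering, Vertex AI safety settings) would implement this once.
    interface ModerationModel {
        ModerationResponse call(ModerationPrompt prompt);
    }

    public static void main(String[] args) {
        // Stub provider standing in for a real HTTP-backed client.
        ModerationModel stub = prompt -> new ModerationResponse(
                false, List.of(new CategoryResult("violence", false, 0.01)));

        ModerationResponse response = stub.call(new ModerationPrompt("hello"));
        System.out.println(response.flagged()); // the stub always returns false
    }
}
```

The point of this shape is that client code depends only on the interface, so swapping OpenAI moderation for another provider's safety check would not ripple through callers.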
@tzolov thanks for taking the time to check my PR. I reviewed the generic Model API at https://docs.spring.io/spring-ai/reference/api/index.html#generic-model-api, but it sounds like you're talking about a new or different approach: something more generic than that structure, shareable across models, which I believe would be a new structure/design. If I understood your comment correctly, I think we can close this PR since it's too early, and I can create a new PR with a more generic approach, assuming something different from OpenAiImageApi.
@hemeda3, I'm truly impressed with your effort and understanding of our somewhat poorly documented concepts. |
I cannot wait to see this feature in an upcoming release. Please prioritize it.
Hi. I understand the concern about putting classes in the generic model package before having a second implementation, but I figure we can adjust as we go, considering this is a very clean PR. Thanks @hemeda3, very much appreciated. I added docs and merged, aligning with the current naming conventions and the new retry/error-handler approach introduced since this PR was authored. Merged in 1894681.

Please review
Given an input text, outputs whether the model classifies it as violating OpenAI's content policy.
Related guide: Moderations
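As a rough illustration of what the endpoint consumes, here is a minimal sketch of the request body sent to OpenAI's moderation endpoint (POST https://api.openai.com/v1/moderations). The `input` field name comes from OpenAI's API reference; the helper method and the naive JSON escaping are our own simplification, not the PR's implementation.

```java
public class ModerationRequestSketch {

    // Build the JSON body for a moderation call (naive string escaping;
    // a real client would use a JSON library such as Jackson).
    static String buildBody(String input) {
        String escaped = input.replace("\\", "\\\\").replace("\"", "\\\"");
        return "{\"input\": \"" + escaped + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(buildBody("Sample text to classify"));
        // A real call would POST this body with java.net.http.HttpClient,
        // adding "Authorization: Bearer <api-key>" and
        // "Content-Type: application/json" headers.
    }
}
```

The response reports an overall flagged verdict plus per-category scores, which is what a Spring AI wrapper would map back into its own response type.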