Moderate agent traffic using OpenAI's moderation endpoint.
OpenAIModerationMiddleware(
    self,
    *,
    model: ModerationModel = 'omni-moderation-latest',
    check_input: bool = True,
    check_output: bool = True,
    check_tool_results: bool = False,
    exit_behavior: Literal['error', 'end', 'replace'] = 'end',
    violation_message: str | None = None,
    client: OpenAI | None = None,
    async_client: AsyncOpenAI | None = None,
)

| Name | Type | Default | Description |
|---|---|---|---|
| model | ModerationModel | 'omni-moderation-latest' | OpenAI moderation model to use. |
| check_input | bool | True | Whether to check user input messages. |
| check_output | bool | True | Whether to check model output messages. |
| check_tool_results | bool | False | Whether to check tool result messages. |
| exit_behavior | Literal['error', 'end', 'replace'] | 'end' | How to handle violations. |
| violation_message | str \| None | None | Custom template for violation messages. |
| client | OpenAI \| None | None | Optional pre-configured OpenAI client to reuse. If not provided, a new client will be created. |
| async_client | AsyncOpenAI \| None | None | Optional pre-configured AsyncOpenAI client to reuse. If not provided, a new async client will be created. |
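
A minimal construction sketch based on the parameters above. The import path, the reuse of a pre-configured `OpenAI` client, and the comments describing what each `exit_behavior` value does are assumptions inferred from the names in this reference, not confirmed behavior.

```python
from openai import OpenAI

# Assumed import path for illustration only.
from langchain.agents.middleware import OpenAIModerationMiddleware

# Reuse an existing OpenAI client instead of letting the middleware create one.
client = OpenAI()

moderation = OpenAIModerationMiddleware(
    model="omni-moderation-latest",
    check_input=True,          # screen user input messages
    check_output=True,         # screen model output messages
    check_tool_results=False,  # skip tool result messages
    exit_behavior="end",       # presumably ends the run on a violation,
                               # vs. 'error' (raise) or 'replace' (substitute content)
    violation_message="I can't continue with that request.",
    client=client,
)
```

The constructed middleware would then be passed to whatever agent or pipeline accepts middleware in your setup; that wiring is outside the scope of this reference.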