Options for configuring the OpenAI Moderation middleware.

checkInput (optional): Whether to check user input messages.
checkOutput (optional): Whether to check model output messages.
checkToolResults (optional): Whether to check tool result messages.
exitBehavior (optional): How to handle violations.
  "error": Throw an error when content is flagged.
  "end": End the agent execution and return a violation message.
  "replace": Replace the flagged content with a violation message.
moderationModel (optional): Moderation model to use. Can be either an OpenAI model name or a BaseChatModel instance.
violationMessage (optional): Custom template for violation messages. Available placeholders: {categories}, {category_scores}, {original_content}.
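For illustration, a custom template can interpolate any of the three placeholders; the wording below is only an example, not a default supplied by the library:

```ts
// Example only: a violation message template using all three placeholders.
const violationMessage =
  "Your message was flagged for: {categories}. " +
  "Scores: {category_scores}. Original content: {original_content}";
```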
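Putting the options together, a configuration might look like the sketch below. The factory name (`openAIModerationMiddleware`), its export location, and the `createAgent` wiring are assumptions for illustration rather than confirmed API; the option values simply exercise the fields described above.

```ts
// Sketch only: export names and paths are assumed, not confirmed API.
import { createAgent, openAIModerationMiddleware } from "langchain";

const moderation = openAIModerationMiddleware({
  checkInput: true,          // moderate user messages
  checkOutput: true,         // moderate model responses
  checkToolResults: false,   // skip tool result messages
  exitBehavior: "replace",   // swap flagged content for the violation message
  moderationModel: "omni-moderation-latest", // illustrative model name
  violationMessage: "Content removed. Flagged categories: {categories}",
});

const agent = createAgent({
  model: "openai:gpt-4o-mini", // string model identifier assumed here
  tools: [],
  middleware: [moderation],
});
```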