langchain.js

    Interface OpenAIModerationMiddlewareOptions

    Options for configuring the OpenAI Moderation middleware.

    interface OpenAIModerationMiddlewareOptions {
        checkInput?: boolean;
        checkOutput?: boolean;
        checkToolResults?: boolean;
        exitBehavior?: "end" | "error" | "replace";
        model: any;
        moderationModel?: ModerationModel;
        violationMessage?: string;
    }

    Properties

    checkInput?: boolean

    Whether to check user input messages.

    Default: true
    
    checkOutput?: boolean

    Whether to check model output messages.

    Default: true
    
    checkToolResults?: boolean

    Whether to check tool result messages.

    Default: false
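For example, moderation of tool results could be enabled alongside the default input/output checks (a configuration sketch; the import of openAIModerationMiddleware is assumed and not shown on this page):

```typescript
// Sketch: enabling tool-result moderation. checkInput and checkOutput
// default to true, so only checkToolResults needs to be set.
// Assumes openAIModerationMiddleware is imported from langchain.
const middleware = openAIModerationMiddleware({
    model: "gpt-4o-mini",
    checkToolResults: true,
});
```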
    
    exitBehavior?: "end" | "error" | "replace"

    How to handle violations.

    • "error": Throw an error when content is flagged
    • "end": End the agent execution and return a violation message
    • "replace": Replace the flagged content with a violation message
    Default: "end"
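The three behaviors can be sketched in isolation (the handleViolation helper and ModerationResult shape below are illustrative, not the middleware's actual internals):

```typescript
// Hypothetical sketch of the three exit behaviors. Names here
// (handleViolation, ModerationResult) are illustrative only.
type ExitBehavior = "end" | "error" | "replace";

interface ModerationResult {
  flagged: boolean;
  content: string;
  violationMessage: string;
}

function handleViolation(
  behavior: ExitBehavior,
  result: ModerationResult,
): { done: boolean; content: string } {
  if (!result.flagged) return { done: false, content: result.content };
  switch (behavior) {
    case "error":
      // Throw an error when content is flagged.
      throw new Error(result.violationMessage);
    case "end":
      // End the agent run and return the violation message.
      return { done: true, content: result.violationMessage };
    case "replace":
      // Replace the flagged content and let the run continue.
      return { done: false, content: result.violationMessage };
  }
}
```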
    
    model: any

    OpenAI model to use for moderation. Can be either a model name or a BaseChatModel instance.

    // Passing a BaseChatModel instance:
    const model = new ChatOpenAI({ model: "gpt-4o-mini" });
    const middleware = openAIModerationMiddleware({ model });
    const agent = createAgent({
        model,
        middleware: [middleware],
    });

    // Or passing a model name string:
    const middleware = openAIModerationMiddleware({ model: "gpt-4o-mini" });
    const agent = createAgent({
        model: "gpt-5",
        middleware: [middleware],
    });
    moderationModel?: ModerationModel

    Moderation model to use.

    Default: "omni-moderation-latest"
    
    violationMessage?: string

    Custom template for violation messages. Available placeholders: {categories}, {category_scores}, {original_content}.
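One plausible way the documented placeholders could be filled into a custom template (a sketch; the fillTemplate helper is illustrative and not part of the library's API):

```typescript
// Sketch: substituting the documented placeholders into a custom
// violation-message template. fillTemplate is a hypothetical helper,
// not an export of langchain.
function fillTemplate(
  template: string,
  values: {
    categories: string;
    category_scores: string;
    original_content: string;
  },
): string {
  return template
    .replace("{categories}", values.categories)
    .replace("{category_scores}", values.category_scores)
    .replace("{original_content}", values.original_content);
}

const message = fillTemplate(
  "Blocked ({categories}): {original_content}",
  { categories: "harassment", category_scores: "0.92", original_content: "hello" },
);
// message is "Blocked (harassment): hello"
```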