langchain.js

    Function openAIModerationMiddleware

    Provider-specific middleware

    • Middleware that moderates agent traffic using OpenAI's moderation endpoint.

      This middleware checks messages for content policy violations at three stages (see the configuration sketch after this list):

      • Input: User messages before they reach the model
      • Output: AI model responses
      • Tool results: Results returned from tool executions
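
      A minimal configuration sketch that enables all three checks; the option names come from OpenAIModerationMiddlewareOptions documented below, and "gpt-4o-mini" is only a placeholder model name.

      import { openAIModerationMiddleware } from "langchain";

      // Enable moderation at all three stages.
      const middleware = openAIModerationMiddleware({
        model: "gpt-4o-mini",    // placeholder; a model name or BaseChatModel instance works
        checkInput: true,        // user messages before they reach the model
        checkOutput: true,       // AI model responses
        checkToolResults: true,  // results returned from tool executions
      });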

      Parameters

      • options: OpenAIModerationMiddlewareOptions

        Configuration options for the OpenAI moderation middleware.

        • Optional checkInput?: boolean

          Whether to check user input messages.

          Default: true
          
        • Optional checkOutput?: boolean

          Whether to check model output messages.

          Default: true
          
        • Optional checkToolResults?: boolean

          Whether to check tool result messages.

          Default: false
          
        • Optional exitBehavior?: "end" | "error" | "replace"

          How to handle violations.

          • "error": Throw an error when content is flagged
          • "end": End the agent execution and return a violation message
          • "replace": Replace the flagged content with a violation message

          Default: "end"
          
        • model: any

          OpenAI model to use for moderation. Can be either a model name or a BaseChatModel instance.

          // Example 1: pass a BaseChatModel instance as the model.
          import { ChatOpenAI } from "@langchain/openai";
          import { createAgent, openAIModerationMiddleware } from "langchain";

          const model = new ChatOpenAI({ model: "gpt-4o-mini" });
          const middleware = openAIModerationMiddleware({ model });
          const agent = createAgent({
            model,
            middleware: [middleware],
          });

          // Example 2: pass a model name as a string.
          const middleware = openAIModerationMiddleware({ model: "gpt-4o-mini" });
          const agent = createAgent({
            model: "gpt-5",
            middleware: [middleware],
          });
        • Optional moderationModel?: ModerationModel

          Moderation model to use.

          Default: "omni-moderation-latest"
          
        • Optional violationMessage?: string

          Custom template for violation messages. Available placeholders: {categories}, {category_scores}, {original_content}

      Returns AgentMiddleware

      Middleware function that can be used to moderate agent traffic.

      // Example 1: moderate user input and model output, ending the run on a violation.
      import { createAgent, openAIModerationMiddleware } from "langchain";

      const middleware = openAIModerationMiddleware({
        checkInput: true,
        checkOutput: true,
        exitBehavior: "end",
      });

      const agent = createAgent({
        model: "openai:gpt-4o",
        tools: [...],
        middleware: [middleware],
      });

      // Example 2: specify the model used by the moderation middleware directly.
      import { createAgent, openAIModerationMiddleware } from "langchain";

      const middleware = openAIModerationMiddleware({
        model: "gpt-4o-mini",
        checkInput: true,
        checkOutput: true,
        exitBehavior: "end",
      });

      const agent = createAgent({
        model: "openai:gpt-4o",
        tools: [...],
        middleware: [middleware],
      });

      // Example 3: customize the violation message template.
      const middleware = openAIModerationMiddleware({
        violationMessage: "Content flagged: {categories}. Scores: {category_scores}",
      });