Middleware

Reference docs

This page contains reference documentation for OpenAI Middleware. See the docs for conceptual guides, tutorials, and examples on using OpenAI Middleware.

Middleware classes

Provider-specific middleware for OpenAI models:

CLASS                        DESCRIPTION
OpenAIModerationMiddleware   Moderate agent traffic using OpenAI's moderation endpoint

langchain_openai.middleware.OpenAIModerationMiddleware

Moderate agent traffic using OpenAI's moderation endpoint.
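A minimal usage sketch follows. It assumes LangChain's create_agent entry point and a configured OPENAI_API_KEY; the model string and agent wiring are illustrative, not prescribed by this class.

    from langchain.agents import create_agent
    from langchain_openai.middleware import OpenAIModerationMiddleware

    # Default configuration: moderate user input and model output,
    # and end the run with a violation message when content is flagged.
    moderation = OpenAIModerationMiddleware()

    # Illustrative agent wiring (assumes an agent factory that accepts
    # a `middleware` list, e.g. langchain.agents.create_agent).
    agent = create_agent(
        model="openai:gpt-4o-mini",
        tools=[],
        middleware=[moderation],
    )

    result = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})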

METHOD     DESCRIPTION
__init__   Create the middleware instance.

__init__

__init__(
    *,
    model: ModerationModel = "omni-moderation-latest",
    check_input: bool = True,
    check_output: bool = True,
    check_tool_results: bool = False,
    exit_behavior: Literal["error", "end", "replace"] = "end",
    violation_message: str | None = None,
    client: OpenAI | None = None,
    async_client: AsyncOpenAI | None = None,
) -> None

Create the middleware instance.

PARAMETER DESCRIPTION
model

OpenAI moderation model to use.

TYPE: ModerationModel DEFAULT: 'omni-moderation-latest'

check_input

Whether to check user input messages.

TYPE: bool DEFAULT: True

check_output

Whether to check model output messages.

TYPE: bool DEFAULT: True

check_tool_results

Whether to check tool result messages.

TYPE: bool DEFAULT: False

exit_behavior

How to handle violations ('error', 'end', or 'replace').

TYPE: Literal['error', 'end', 'replace'] DEFAULT: 'end'

violation_message

Custom template for violation messages.

TYPE: str | None DEFAULT: None

client

Optional pre-configured OpenAI client to reuse. If not provided, a new client will be created.

TYPE: OpenAI | None DEFAULT: None

async_client

Optional pre-configured AsyncOpenAI client to reuse. If not provided, a new async client will be created.

TYPE: AsyncOpenAI | None DEFAULT: None
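
A sketch of a customized instance is shown below: it reuses a pre-configured OpenAI client and replaces flagged content rather than ending the run. The violation_message string is a placeholder assumption; consult the class docstring for the template variables it supports.

    from openai import OpenAI
    from langchain_openai.middleware import OpenAIModerationMiddleware

    # Reuse an existing client (custom timeouts, base URL, etc.) instead of
    # letting the middleware construct its own.
    client = OpenAI(timeout=30.0)

    moderation = OpenAIModerationMiddleware(
        model="omni-moderation-latest",   # default moderation model
        check_input=True,                 # moderate user input messages
        check_output=True,                # moderate model output messages
        check_tool_results=True,          # also moderate tool result messages
        exit_behavior="replace",          # replace flagged content instead of ending
        violation_message="This content was flagged by moderation.",  # assumed plain-string template
        client=client,                    # pre-configured sync client to reuse
    )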