# OpenAIModerationMiddleware

> **Class** in `langchain_openai`

📖 [View in docs](https://reference.langchain.com/python/langchain-openai/middleware/openai_moderation/OpenAIModerationMiddleware)

Middleware that moderates agent traffic — user input, model output, and (optionally) tool results — using OpenAI's moderation endpoint.

## Signature

```python
OpenAIModerationMiddleware(
    self,
    *,
    model: ModerationModel = 'omni-moderation-latest',
    check_input: bool = True,
    check_output: bool = True,
    check_tool_results: bool = False,
    exit_behavior: Literal['error', 'end', 'replace'] = 'end',
    violation_message: str | None = None,
    client: OpenAI | None = None,
    async_client: AsyncOpenAI | None = None,
)
```
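
A minimal usage sketch. The `create_agent` import path and the `"openai:gpt-4o-mini"` model string are assumptions based on current LangChain conventions; only the `OpenAIModerationMiddleware` arguments come from the signature above.

```python
from langchain.agents import create_agent  # assumed import path
from langchain_openai import OpenAIModerationMiddleware

# Screen user input and model output; halt the run when content is flagged.
moderation = OpenAIModerationMiddleware(
    model="omni-moderation-latest",
    check_input=True,
    check_output=True,
    exit_behavior="end",
)

agent = create_agent(
    model="openai:gpt-4o-mini",  # assumed model identifier
    tools=[],
    middleware=[moderation],
)
```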

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `model` | `ModerationModel` | No | OpenAI moderation model to use. (default: `'omni-moderation-latest'`) |
| `check_input` | `bool` | No | Whether to check user input messages. (default: `True`) |
| `check_output` | `bool` | No | Whether to check model output messages. (default: `True`) |
| `check_tool_results` | `bool` | No | Whether to check tool result messages. (default: `False`) |
| `exit_behavior` | `Literal['error', 'end', 'replace']` | No | How to handle flagged content: `'error'` raises an exception, `'end'` stops the agent run with the violation message, `'replace'` substitutes the violation message for the flagged content. (default: `'end'`) |
| `violation_message` | `str \| None` | No | Custom template for the message emitted when content is flagged. If `None`, a built-in default message is used. (default: `None`) |
| `client` | `OpenAI \| None` | No | Optional pre-configured OpenAI client to reuse. If not provided, a new client will be created. (default: `None`) |
| `async_client` | `AsyncOpenAI \| None` | No | Optional pre-configured AsyncOpenAI client to reuse. If not provided, a new async client will be created. (default: `None`) |
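
The `client`/`async_client` parameters let the middleware reuse clients you already configure elsewhere (custom timeout, proxy, organization, and so on). A sketch under those assumptions — the dummy API key is a placeholder, and all middleware arguments are taken from the table above:

```python
from openai import OpenAI, AsyncOpenAI
from langchain_openai import OpenAIModerationMiddleware

# Pre-configured clients; the dummy api_key is a placeholder for illustration.
client = OpenAI(api_key="sk-placeholder", timeout=10.0)
async_client = AsyncOpenAI(api_key="sk-placeholder", timeout=10.0)

moderation = OpenAIModerationMiddleware(
    check_tool_results=True,   # also screen tool result messages
    exit_behavior="replace",   # swap flagged content for the violation message
    violation_message="Content blocked by moderation policy.",
    client=client,
    async_client=async_client,
)
```

Passing both clients ensures the sync (`before_model`/`after_model`) and async (`abefore_model`/`aafter_model`) hooks each reuse your configuration instead of constructing fresh defaults.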

## Extends

- `AgentMiddleware[AgentState[Any], Any]`

## Constructors

```python
__init__(
    self,
    *,
    model: ModerationModel = 'omni-moderation-latest',
    check_input: bool = True,
    check_output: bool = True,
    check_tool_results: bool = False,
    exit_behavior: Literal['error', 'end', 'replace'] = 'end',
    violation_message: str | None = None,
    client: OpenAI | None = None,
    async_client: AsyncOpenAI | None = None,
) -> None
```

| Name | Type |
|------|------|
| `model` | `ModerationModel` |
| `check_input` | `bool` |
| `check_output` | `bool` |
| `check_tool_results` | `bool` |
| `exit_behavior` | `Literal['error', 'end', 'replace']` |
| `violation_message` | `str \| None` |
| `client` | `OpenAI \| None` |
| `async_client` | `AsyncOpenAI \| None` |


## Properties

- `model`
- `check_input`
- `check_output`
- `check_tool_results`
- `exit_behavior`
- `violation_message`

## Methods

- [`before_model()`](https://reference.langchain.com/python/langchain-openai/middleware/openai_moderation/OpenAIModerationMiddleware/before_model)
- [`after_model()`](https://reference.langchain.com/python/langchain-openai/middleware/openai_moderation/OpenAIModerationMiddleware/after_model)
- [`abefore_model()`](https://reference.langchain.com/python/langchain-openai/middleware/openai_moderation/OpenAIModerationMiddleware/abefore_model)
- [`aafter_model()`](https://reference.langchain.com/python/langchain-openai/middleware/openai_moderation/OpenAIModerationMiddleware/aafter_model)

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/9f232caa7a8fe1ca042a401942d5d90d54ceb1a6/libs/partners/openai/langchain_openai/middleware/openai_moderation.py#L49)