# ModelArmorMiddleware

> **Class** in `langchain_google_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-google-community/model_armor/middleware/ModelArmorMiddleware)

Middleware to integrate Model Armor sanitization into agent execution.

This middleware provides hooks that sanitize user prompts before they reach
the model and sanitize model responses before they're returned to the user.

Sanitization is enabled by providing the corresponding runnable:
- Provide `prompt_sanitizer` to enable user prompt sanitization
- Provide `response_sanitizer` to enable model response sanitization
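
This opt-in pattern can be illustrated with a toy sketch. The classes below (`ToySanitizer`, `ToyMiddleware`) are hypothetical stand-ins, not the real Model Armor API: each hook is a pass-through unless its corresponding sanitizer was provided.

```python
class ToySanitizer:
    """Hypothetical stand-in for a Model Armor sanitizer runnable."""

    def invoke(self, text: str) -> str:
        # A trivial "sanitization": redact a sensitive token.
        return text.replace("SECRET", "[REDACTED]")


class ToyMiddleware:
    """Toy illustration of the opt-in pattern (not the real implementation)."""

    def __init__(self, *, prompt_sanitizer=None, response_sanitizer=None):
        self.prompt_sanitizer = prompt_sanitizer
        self.response_sanitizer = response_sanitizer

    def before_model(self, prompt: str) -> str:
        if self.prompt_sanitizer is None:
            return prompt  # prompt sanitization disabled
        return self.prompt_sanitizer.invoke(prompt)

    def after_model(self, response: str) -> str:
        if self.response_sanitizer is None:
            return response  # response sanitization disabled
        return self.response_sanitizer.invoke(response)


# Only prompt sanitization is enabled here, so responses pass through untouched.
mw = ToyMiddleware(prompt_sanitizer=ToySanitizer())
```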

## Signature

```python
ModelArmorMiddleware(
    self,
    *,
    prompt_sanitizer: ModelArmorSanitizePromptRunnable | None = None,
    response_sanitizer: ModelArmorSanitizeResponseRunnable | None = None,
)
```

## Example

```python
from langchain.agents import create_agent
from langchain_google_vertexai import ChatVertexAI
from langchain_google_community.model_armor import (
    ModelArmorMiddleware,
    ModelArmorSanitizePromptRunnable,
    ModelArmorSanitizeResponseRunnable,
)

# Create sanitizer runnables
prompt_sanitizer = ModelArmorSanitizePromptRunnable(
    project="my-project",
    location="us-central1",
    template_id="my-template",
    fail_open=False,
)
response_sanitizer = ModelArmorSanitizeResponseRunnable(
    project="my-project",
    location="us-central1",
    template_id="my-template",
    fail_open=False,
)

# Create middleware with both sanitizers
middleware = ModelArmorMiddleware(
    prompt_sanitizer=prompt_sanitizer,
    response_sanitizer=response_sanitizer,
)

# Create agent with Model Armor protection
agent = create_agent(
    model=ChatVertexAI(model_name="gemini-2.0-flash-001"),
    tools=[...],
    middleware=[middleware],
)

# Or create middleware with only prompt sanitization
middleware = ModelArmorMiddleware(prompt_sanitizer=prompt_sanitizer)
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `prompt_sanitizer` | `ModelArmorSanitizePromptRunnable \| None` | No | Runnable for sanitizing user prompts before model calls. Prompt sanitization is enabled only when provided; if `None`, prompts are not sanitized. (default: `None`) |
| `response_sanitizer` | `ModelArmorSanitizeResponseRunnable \| None` | No | Runnable for sanitizing model responses. Response sanitization is enabled only when provided; if `None`, responses are not sanitized. (default: `None`) |

## Extends

- `lc_agents_middleware.AgentMiddleware`

## Constructors

```python
__init__(
    self,
    *,
    prompt_sanitizer: ModelArmorSanitizePromptRunnable | None = None,
    response_sanitizer: ModelArmorSanitizeResponseRunnable | None = None,
)
```

| Name | Type |
|------|------|
| `prompt_sanitizer` | `ModelArmorSanitizePromptRunnable \| None` |
| `response_sanitizer` | `ModelArmorSanitizeResponseRunnable \| None` |


## Properties

- `prompt_sanitizer`
- `response_sanitizer`

## Methods

- [`before_model()`](https://reference.langchain.com/python/langchain-google-community/model_armor/middleware/ModelArmorMiddleware/before_model)
- [`after_model()`](https://reference.langchain.com/python/langchain-google-community/model_armor/middleware/ModelArmorMiddleware/after_model)
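
The two hooks bracket the model call: `before_model()` runs ahead of the model invocation and `after_model()` runs on its output. The following is a toy sketch of that ordering with an assumed, simplified agent step; `LoggingMiddleware` and `toy_agent_step` are illustrative names, not part of the actual library.

```python
class LoggingMiddleware:
    """Hypothetical middleware that records the order its hooks fire in."""

    def __init__(self):
        self.calls = []

    def before_model(self, state):
        self.calls.append("before_model")  # runs before the model call
        return state

    def after_model(self, state):
        self.calls.append("after_model")  # runs on the model's output
        return state


def toy_agent_step(middleware, model, state):
    """Simplified single agent step: sanitize in, call model, sanitize out."""
    state = middleware.before_model(state)
    state = model(state)
    state = middleware.after_model(state)
    return state


mw = LoggingMiddleware()
toy_agent_step(mw, lambda s: s, {"messages": []})
```

After one step, `mw.calls` records `before_model` first and `after_model` second, mirroring how prompt sanitization precedes the model call and response sanitization follows it.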

---

[View source on GitHub](https://github.com/langchain-ai/langchain-google/blob/982e4015b249de8b9ba1e787746d8cc1f6d6b790/libs/community/langchain_google_community/model_armor/middleware.py#L25)