Middleware to integrate Model Armor sanitization into agent execution.
This middleware provides hooks that sanitize user prompts before they reach the model and sanitize model responses before they're returned to the user.
Sanitization is enabled by providing the corresponding runnable:

- `prompt_sanitizer` to enable user prompt sanitization
- `response_sanitizer` to enable model response sanitization

```python
ModelArmorMiddleware(
    self,
    *,
    prompt_sanitizer: ModelArmorSanitizePromptRunnable | None = None,
    response_sanitizer: ModelArmorSanitizeResponseRunnable | None = None,
)
```

Bases: `lc_agents_middleware.AgentMiddleware`

Example:
```python
from langchain.agents import create_agent
from langchain_google_vertexai import ChatVertexAI
from langchain_google_community.model_armor import (
    ModelArmorMiddleware,
    ModelArmorSanitizePromptRunnable,
    ModelArmorSanitizeResponseRunnable,
)

# Create sanitizer runnables
prompt_sanitizer = ModelArmorSanitizePromptRunnable(
    project="my-project",
    location="us-central1",
    template_id="my-template",
    fail_open=False,
)
response_sanitizer = ModelArmorSanitizeResponseRunnable(
    project="my-project",
    location="us-central1",
    template_id="my-template",
    fail_open=False,
)

# Create middleware with both sanitizers
middleware = ModelArmorMiddleware(
    prompt_sanitizer=prompt_sanitizer,
    response_sanitizer=response_sanitizer,
)

# Create agent with Model Armor protection
agent = create_agent(
    model=ChatVertexAI(model_name="gemini-2.0-flash-001"),
    tools=[...],
    middleware=[middleware],
)

# Or create middleware with only prompt sanitization
middleware = ModelArmorMiddleware(prompt_sanitizer=prompt_sanitizer)
```
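Once constructed, the agent is invoked like any other LangChain agent. The sketch below is illustrative only; the prompt text and the `messages`-based result access pattern are assumptions, not part of this API:

```python
# Hypothetical invocation: the user prompt is sanitized by Model Armor before it
# reaches Gemini, and the model's reply is sanitized before it is returned.
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Summarize our refund policy."}]}
)
print(result["messages"][-1].content)
```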
Parameters:

| Name | Type | Description |
|---|---|---|
| prompt_sanitizer | ModelArmorSanitizePromptRunnable \| None | Runnable for sanitizing user prompts before model calls. If provided, prompt sanitization is enabled. If `None`, prompts are not sanitized. Default: `None` |
| response_sanitizer | ModelArmorSanitizeResponseRunnable \| None | Runnable for sanitizing model responses. If provided, response sanitization is enabled. If `None`, responses are not sanitized. Default: `None` |
Attributes:

| Name | Type |
|---|---|
| prompt_sanitizer | ModelArmorSanitizePromptRunnable \| None |
| response_sanitizer | ModelArmorSanitizeResponseRunnable \| None |
Sanitize user prompts before they are sent to the model.
This hook is called before the model processes the input. It extracts the latest user message and sanitizes it using Model Armor.
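Conceptually, the prompt hook behaves like the sketch below. The helper name, the `messages`-based state shape, and the sanitizer's `invoke` input are assumptions for illustration, not the middleware's actual code:

```python
from langchain_core.messages import HumanMessage


def sanitize_latest_user_prompt(state: dict, prompt_sanitizer) -> None:
    """Illustrative sketch: find the most recent user message in the agent
    state and run it through the Model Armor prompt sanitizer."""
    last_user = next(
        (m for m in reversed(state["messages"]) if isinstance(m, HumanMessage)),
        None,
    )
    if last_user is not None:
        # Assumed contract: the sanitizer blocks or raises when the prompt
        # violates the configured Model Armor template.
        prompt_sanitizer.invoke(last_user.content)
```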
Sanitize model responses before they are returned to the user.
This hook is called after the model generates a response. The AI message is sanitized to ensure it doesn't contain harmful content.
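The response hook mirrors that logic. Again, the helper name, state shape, and sanitizer input below are illustrative assumptions rather than the actual implementation:

```python
from langchain_core.messages import AIMessage


def sanitize_latest_model_response(state: dict, response_sanitizer) -> None:
    """Illustrative sketch: find the most recent AI message in the agent
    state and run it through the Model Armor response sanitizer."""
    last_ai = next(
        (m for m in reversed(state["messages"]) if isinstance(m, AIMessage)),
        None,
    )
    if last_ai is not None:
        # Assumed contract: the sanitizer blocks or raises when the response
        # contains content disallowed by the configured template.
        response_sanitizer.invoke(last_ai.content)
```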