Model Armor Middleware for LangChain Agents.
This module provides middleware for integrating Google Cloud Model Armor sanitization into LangChain agents created with the create_agent API.
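For orientation, a minimal usage sketch. The class names (ModelArmorMiddleware, ModelArmorSanitizePromptRunnable, ModelArmorSanitizeResponseRunnable), the import path, and the template value are illustrative assumptions; prompt_sanitizer and response_sanitizer are the parameters described below, and create_agent is LangChain's agent constructor, which accepts a list of middleware.

```python
from langchain.agents import create_agent

# Hypothetical import path and class names for the components this module
# describes; adjust to the actual package layout.
from model_armor_middleware import (
    ModelArmorMiddleware,
    ModelArmorSanitizePromptRunnable,
    ModelArmorSanitizeResponseRunnable,
)

# Model Armor templates are addressed by their full resource path.
TEMPLATE = "projects/my-project/locations/us-central1/templates/my-template"

middleware = ModelArmorMiddleware(
    prompt_sanitizer=ModelArmorSanitizePromptRunnable(template=TEMPLATE),
    response_sanitizer=ModelArmorSanitizeResponseRunnable(template=TEMPLATE),
)

agent = create_agent(
    model="google_genai:gemini-2.5-flash",  # any chat model create_agent supports
    tools=[],
    middleware=[middleware],
)

result = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
```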
Runnable to sanitize user prompts using Model Armor.
Runnable to sanitize LLM responses using Model Armor.
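To illustrate what such a runnable does under the hood, here is a sketch of the prompt-side check built directly on the google-cloud-modelarmor client. The wrapper function, its raise-on-block policy, and the template path are assumptions, while modelarmor_v1.ModelArmorClient, SanitizeUserPromptRequest, and DataItem come from that client library.

```python
from google.cloud import modelarmor_v1
from langchain_core.runnables import RunnableLambda


def _sanitize_prompt(text: str) -> str:
    """Send a user prompt to Model Armor; raise if a filter flags it."""
    # Model Armor is served from regional endpoints.
    client = modelarmor_v1.ModelArmorClient(
        client_options={
            "api_endpoint": "modelarmor.us-central1.rep.googleapis.com"
        }
    )
    request = modelarmor_v1.SanitizeUserPromptRequest(
        name="projects/my-project/locations/us-central1/templates/my-template",
        user_prompt_data=modelarmor_v1.DataItem(text=text),
    )
    response = client.sanitize_user_prompt(request=request)
    # MATCH_FOUND means at least one filter in the template flagged the text.
    if (
        response.sanitization_result.filter_match_state
        == modelarmor_v1.FilterMatchState.MATCH_FOUND
    ):
        raise ValueError("Prompt blocked by Model Armor.")
    return text


# Wrapping the check as a Runnable lets it compose with other LangChain pieces.
prompt_sanitizer = RunnableLambda(_sanitize_prompt)
```

The response-side runnable would mirror this using the client's sanitize_model_response call with model_response_data.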
Middleware to integrate Model Armor sanitization into agent execution.
This middleware provides hooks that sanitize user prompts before they reach the model and sanitize model responses before they're returned to the user.
Sanitization is enabled by providing the corresponding runnable, as sketched below:

- prompt_sanitizer: to enable user prompt sanitization
- response_sanitizer: to enable model response sanitization
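To make the hook flow concrete, here is a sketch of how the middleware could implement those hooks on LangChain's AgentMiddleware base class. The hook signatures and the message-based state access are assumptions about LangChain's middleware API, and raising on a blocked message is one possible policy.

```python
from typing import Optional

from langchain.agents.middleware import AgentMiddleware
from langchain_core.runnables import Runnable


class ModelArmorMiddleware(AgentMiddleware):
    """Sketch: run the configured sanitizers around each model call."""

    def __init__(
        self,
        prompt_sanitizer: Optional[Runnable] = None,
        response_sanitizer: Optional[Runnable] = None,
    ) -> None:
        super().__init__()
        self.prompt_sanitizer = prompt_sanitizer
        self.response_sanitizer = response_sanitizer

    def before_model(self, state, runtime):
        # Runs before the model call: sanitize the latest user prompt.
        if self.prompt_sanitizer is not None:
            self.prompt_sanitizer.invoke(state["messages"][-1].content)
        return None  # no state update; the sanitizer raises if blocked

    def after_model(self, state, runtime):
        # Runs after the model call: sanitize the model's response.
        if self.response_sanitizer is not None:
            self.response_sanitizer.invoke(state["messages"][-1].content)
        return None
```

Providing only one of the two runnables enables just that hook, matching the list above.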