Base runnable for user prompt or model response sanitization using Model Armor.
Middleware to integrate Model Armor sanitization into agent execution.
This middleware provides hooks that sanitize user prompts before they reach the model and sanitize model responses before they're returned to the user.
Sanitization is enabled by providing the corresponding runnable:
prompt_sanitizer to enable user prompt sanitizationresponse_sanitizer to enable model response sanitizationRunnable to sanitize user prompts using Model Armor.
Runnable to sanitize LLM responses using Model Armor.
Model Armor Middleware for LangChain Agents.
This module provides middleware for integrating Model Armor sanitization into LangChain agents created with the create_agent API.
Langchain Runnables to screen user prompt and/or model response using Google Cloud Model Armor.
Before using Model Armor Runnables, ensure the following steps are completed:
Select or create a Google Cloud Platform project.
Enable billing for your project.
Enable the Model Armor API in your GCP project.
Grant the modelarmor.user IAM role to any user or service account that will use
the Model Armor runnables.
Authentication:
Create Model Armor Template:
modelarmor.admin IAM role is required.This module provides a base class for creating runnables that sanitize user prompts and model responses using Google Cloud Model Armor. Ref: https://cloud.google.com/security-command-center/docs/model-armor-overview