Sanitize an LLM response using Model Armor.
invoke(
    self,
    input: T,
    config: Optional[RunnableConfig] = None,
    fail_open: Optional[bool] = None,
    **kwargs: Any,
) -> T

| Name | Type | Description |
|---|---|---|
| input* | T | The model response to sanitize. Can be a string or a message object. |
| config | Optional[RunnableConfig] | A config to use when invoking the Runnable. Default: None |
| fail_open | Optional[bool] | If True, the original response is returned instead of an error being raised when Model Armor flags the content; overrides the instance-level setting. Default: None |
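
A minimal usage sketch follows, assuming this `invoke` belongs to a Model Armor response-sanitizing runnable in `langchain_google_community`; the import path, class name, and constructor argument shown are illustrative assumptions, not part of this reference.

```python
# Sketch only: the import path, class name, and constructor argument are
# assumptions for illustration; only invoke()'s signature is documented above.
from langchain_google_community.model_armor import (  # assumed import path
    ModelArmorSanitizeResponseRunnable,
)

# Hypothetical constructor argument: a Model Armor template resource name.
sanitizer = ModelArmorSanitizeResponseRunnable(
    template_id="projects/my-project/locations/us-central1/templates/my-template",
)

raw_response = "The model's raw answer, possibly containing unsafe content."

# With fail_open=True, the original response is passed through when Model
# Armor flags it, rather than an error being raised.
safe_response = sanitizer.invoke(raw_response, fail_open=True)
print(safe_response)
```

Because it is a Runnable, the sanitizer can also be composed after a chat model (for example, `llm | sanitizer`) so that every response is screened before it reaches the caller.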