# sanitize

> **Function** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/utilities/opaqueprompts/sanitize)

Sanitize an input string or dict of strings by replacing sensitive data with
placeholders.

Returns the sanitized input string or dict of strings together with the secure
context, as a dict of the form:

```python
{
    "sanitized_input": <sanitized input string or dict of strings>,
    "secure_context": <secure context>,
}
```

The secure context is a bytes object that is needed to de-sanitize the response
from the LLM.
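The sanitize/restore round trip can be illustrated with a self-contained toy stand-in. The real function routes text through the OpaquePrompts library, so everything below is a simplified assumption: `toy_sanitize` is a hypothetical helper, it only scrubs email addresses, and it uses a JSON-encoded mapping as its "secure context" where the real function returns an opaque bytes object.

```python
import json
import re
from typing import Dict, Union

def toy_sanitize(
    input: Union[str, Dict[str, str]],
) -> Dict[str, Union[str, Dict[str, str], bytes]]:
    """Toy stand-in for sanitize(): replace email addresses with placeholders."""
    mapping: Dict[str, str] = {}  # placeholder -> original sensitive value

    def _scrub(text: str) -> str:
        def repl(m: "re.Match[str]") -> str:
            token = f"EMAIL_{len(mapping) + 1}"
            mapping[token] = m.group(0)
            return token

        return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", repl, text)

    if isinstance(input, str):
        sanitized: Union[str, Dict[str, str]] = _scrub(input)
    else:
        # Dict input: sanitize each value, preserving the keys.
        sanitized = {k: _scrub(v) for k, v in input.items()}

    # The real secure_context is opaque bytes; here we just serialize the mapping.
    secure_context = json.dumps(mapping).encode()
    return {"sanitized_input": sanitized, "secure_context": secure_context}

result = toy_sanitize("Contact alice@example.com for access.")
print(result["sanitized_input"])  # Contact EMAIL_1 for access.
```

The placeholders survive the LLM call unchanged, so the response can later be de-sanitized by looking each placeholder up in the secure context, which is why the caller must hold on to both values in the returned dict.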

## Signature

```python
sanitize(
    input: Union[str, Dict[str, str]],
) -> Dict[str, Union[str, Dict[str, str]]]
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `input` | `Union[str, Dict[str, str]]` | Yes | Input string or dict of strings. |

## Returns

`Dict[str, Union[str, Dict[str, str]]]`

The sanitized input string or dict of strings, together with the secure context.

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/4b280287bd55b99b44db2dd849f02d66c89534d5/libs/community/langchain_community/utilities/opaqueprompts.py#L4)