# create_context_cache

> **Function** in `langchain_google_vertexai`

📖 [View in docs](https://reference.langchain.com/python/langchain-google-vertexai/utils/create_context_cache)

Creates a context cache for the given model from a list of messages and returns the cache identifier.

## Signature

```python
create_context_cache(
    model: ChatVertexAI,
    messages: list[BaseMessage],
    expire_time: datetime | None = None,
    time_to_live: timedelta | None = None,
    tools: _ToolsType | None = None,
    tool_config: _ToolConfigDict | None = None,
) -> str
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `model` | `ChatVertexAI` | Yes | The `ChatVertexAI` model to cache content for. Context caching requires a supported model, e.g. `gemini-2.5-pro` or `gemini-2.0-flash`. |
| `messages` | `list[BaseMessage]` | Yes | List of messages to cache. |
| `expire_time` | `datetime \| None` | No | Timestamp at which this resource is considered expired. At most one of `expire_time` and `time_to_live` can be set; if neither is set, the API's default TTL (currently 1 hour) is used. (default: `None`) |
| `time_to_live` | `timedelta \| None` | No | The TTL for this resource. If provided, the expiration time is computed as `created_time` + TTL. At most one of `expire_time` and `time_to_live` can be set; if neither is set, the API's default TTL (currently 1 hour) is used. (default: `None`) |
| `tools` | `_ToolsType \| None` | No | A list of tool definitions to bind to this chat model. Each may be a Pydantic model, `Callable`, or `BaseTool`; all are automatically converted to their schema dictionary representation. (default: `None`) |
| `tool_config` | `_ToolConfigDict \| None` | No | Tool configuration shared across all tools. Immutable once the cache is created. (default: `None`) |

## Returns

`str`

Identifier of the created cache.
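A minimal usage sketch follows. It requires Google Cloud credentials to actually run, and the model name, message contents, and the reuse of the identifier via the `cached_content` field are assumptions for illustration, not guaranteed by this page:

```python
from datetime import timedelta

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_vertexai import ChatVertexAI, create_context_cache

model = ChatVertexAI(model_name="gemini-2.5-pro")

# Cache a long, reusable prefix (e.g. a large document) for two hours.
cache_id = create_context_cache(
    model,
    messages=[
        SystemMessage(content="You are an expert on the attached document."),
        HumanMessage(content="<several thousand tokens of document text>"),
    ],
    time_to_live=timedelta(hours=2),  # mutually exclusive with expire_time
)

# The returned identifier can then be attached to a model instance so
# subsequent calls reuse the cached content instead of resending it.
cached_model = ChatVertexAI(model_name="gemini-2.5-pro", cached_content=cache_id)
```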

---

[View source on GitHub](https://github.com/langchain-ai/langchain-google/blob/a3f016b2a6c4af535df275545f76fa7424aa39e5/libs/vertexai/langchain_google_vertexai/utils.py#L20)