# perform_offload

> **Function** in `deepagents_cli`

📖 [View in docs](https://reference.langchain.com/python/deepagents-cli/offload/perform_offload)

Execute the offload workflow: summarize old messages and free context.

## Signature

```python
perform_offload(
    *,
    messages: list[Any],
    prior_event: SummarizationEvent | None,
    thread_id: str,
    model_spec: str,
    profile_overrides: dict[str, Any] | None,
    context_limit: int | None,
    total_context_tokens: int,
    backend: BackendProtocol | None,
) -> OffloadResult | OffloadThresholdNotMet
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `messages` | `list[Any]` | Yes | Current conversation messages from agent state. |
| `prior_event` | `SummarizationEvent \| None` | Yes | Existing `_summarization_event` if any. |
| `thread_id` | `str` | Yes | Thread identifier for backend storage. |
| `model_spec` | `str` | Yes | Model specification string (e.g. "openai:gpt-4"). |
| `profile_overrides` | `dict[str, Any] \| None` | Yes | Optional profile overrides from CLI flags. |
| `context_limit` | `int \| None` | Yes | Model context limit from settings. |
| `total_context_tokens` | `int` | Yes | Current total context token count, or `0` when no token tracker is available. |
| `backend` | `BackendProtocol \| None` | Yes | Backend for persisting offloaded history. |

## Returns

`OffloadResult | OffloadThresholdNotMet`

Returns an `OffloadResult` on success, or `OffloadThresholdNotMet` when the
conversation is still within the retention budget and no offload is performed.
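Because the return type is a union, a caller typically dispatches on the result's class. The sketch below illustrates that pattern with empty stand-in classes: only the names `OffloadResult` and `OffloadThresholdNotMet` come from this page, while the real types (with their actual fields) live in `deepagents_cli.offload`.

```python
from dataclasses import dataclass


# Stand-ins for the real deepagents_cli result types, which are not
# defined here; only the class names are taken from this page.
@dataclass
class OffloadResult:
    pass


@dataclass
class OffloadThresholdNotMet:
    pass


def handle_offload(result: "OffloadResult | OffloadThresholdNotMet") -> str:
    """Dispatch on the union return type of perform_offload."""
    if isinstance(result, OffloadThresholdNotMet):
        # Conversation is within the retention budget; nothing was offloaded.
        return "skipped: within retention budget"
    # Offload succeeded: old messages were summarized and context freed.
    return "offloaded"


print(handle_offload(OffloadThresholdNotMet()))  # → skipped: within retention budget
print(handle_offload(OffloadResult()))           # → offloaded
```

In a real caller, the `result` would come from `perform_offload(...)` itself, and the success branch would read fields off the actual `OffloadResult` (not documented on this page).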

---

[View source on GitHub](https://github.com/langchain-ai/deepagents/blob/a9e6e4f7ad7fe161dd9affc3d74bb19784aca70b/libs/cli/deepagents_cli/offload.py#L210)