```python
perform_offload(
    *,
    messages: list[Any],
    prior_event: SummarizationEvent | None,
    thread_id: str,
    model_spec: str,
    profile_overrides: dict[str, Any] | None,
    context_limit: int | None,
    total_context_tokens: int,
    backend: BackendProtocol | None,
)
```

Execute the offload workflow: summarize old messages and free context.

| Name | Type | Description |
|---|---|---|
| messages* | list[Any] | Current conversation messages from agent state. |
| prior_event* | SummarizationEvent \| None | Existing summarization event, if any. |
| thread_id* | str | Thread identifier for backend storage. |
| model_spec* | str | Model specification string (e.g. "openai:gpt-4"). |
| profile_overrides* | dict[str, Any] \| None | Optional profile overrides from CLI flags. |
| context_limit* | int \| None | Model context limit from settings. |
| total_context_tokens* | int | Current total context token count, or 0 when no token tracker is available. |
| backend* | BackendProtocol \| None | Backend for persisting offloaded history. |