Business logic for the /offload command.
Extracts the core offload workflow from the UI layer so it can be tested independently of the Textual app.
Create a chat model.
Uses init_chat_model for standard providers, or imports a custom
BaseChatModel subclass when the provider has a class_path in config.
Supports provider:model format (e.g., 'anthropic:claude-sonnet-4-5')
for explicit provider selection, or bare model names for auto-detection.
Format a token count into a human-readable short string.
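A short-string formatter along these lines would do; the exact thresholds and suffixes here are assumptions, not the module's actual choices.

```python
def format_token_count(n: int) -> str:
    """Render a token count compactly, e.g. 1500 -> '1.5k', 2000000 -> '2.0M'."""
    if n >= 1_000_000:
        return f"{n / 1_000_000:.1f}M"
    if n >= 1_000:
        return f"{n / 1_000:.1f}k"
    return str(n)
```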
Format offload retention settings into a human-readable limit string.
Write messages to backend storage before offloading.
Appends messages as a timestamped markdown section to the conversation
history file, matching the SummarizationMiddleware offload pattern.
Filters out prior summary messages using the middleware's
_filter_summary_messages to avoid storing summaries-of-summaries.
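The append step could look like the sketch below. It assumes messages arrive as `(role, content)` pairs and invents the section heading format; the real code mirrors whatever markdown layout SummarizationMiddleware writes, and would call `_filter_summary_messages` on the input first.

```python
from datetime import datetime, timezone
from pathlib import Path


def append_offload_section(history_path, messages):
    """Append messages to the history file as one timestamped markdown section."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    section = [f"\n## Offloaded {ts}\n"]
    for role, content in messages:
        section.append(f"**{role}**: {content}\n")
    with Path(history_path).open("a", encoding="utf-8") as f:
        f.write("".join(section))
```

Opening in append mode keeps prior sections intact, so the history file accumulates one dated section per offload.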
Execute the offload workflow: summarize old messages and free context.
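The overall shape of that workflow, stripped of model and storage details, is: split off the old messages, summarize them, and splice the summary in front of the retained tail. A minimal sketch, with `summarize` injected as a callable and the summary-message wording invented for illustration:

```python
def offload(messages, summarize, keep_last=4):
    """Replace all but the last `keep_last` messages with a single summary.

    `messages` is a list of (role, content) pairs; `summarize` maps the
    old messages to a summary string (in the real workflow, a model call).
    """
    if len(messages) <= keep_last:
        return messages  # nothing old enough to offload
    old, recent = messages[:-keep_last], messages[-keep_last:]
    summary = summarize(old)
    return [("system", f"Summary of earlier conversation: {summary}")] + recent
```

In the real command, the old messages would also be written to backend storage before being dropped, so the full text stays recoverable from the history file.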