Uses an LLM to select relevant tools before calling the main model.
When an agent has many tools available, this middleware filters them down to only the most relevant ones for the user's query. This reduces token usage and helps the main model focus on the right tools.
```python
LLMToolSelectorMiddleware(
    *,
    model: str | BaseChatModel | None = None,
    system_prompt: str = DEFAULT_SYSTEM_PROMPT,
    max_tools: int | None = None,
    always_include: list[str] | None = None,
)
```

| Name | Type | Description |
|---|---|---|
| `model` | `str \| BaseChatModel \| None` | Model to use for selection. Can be a model identifier string or a `BaseChatModel` instance. Default: `None`, in which case the agent's main model is used. |
| `system_prompt` | `str` | Instructions for the selection model. Default: `DEFAULT_SYSTEM_PROMPT`. |
| `max_tools` | `int \| None` | Maximum number of tools to select. If the model selects more, only the first `max_tools` are kept. Default: `None` (no limit). |
| `always_include` | `list[str] \| None` | Tool names to always include regardless of selection. These do not count against the `max_tools` limit. Default: `None`. |
Methods:

- Logic to run before the agent execution starts.
- Async logic to run before the agent execution starts.
- Logic to run before the model is called.
- Async logic to run before the model is called.
- Logic to run after the model is called.
- Async logic to run after the model is called.
- Logic to run after the agent execution completes.
- Async logic to run after the agent execution completes.
- Intercept tool execution for retries, monitoring, or modification.
- Intercept and control async tool execution via handler callback.
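The tool-interception hooks follow a wrapper pattern: the hook receives the tool request and a handler that performs the actual execution, and can retry, log, or modify around it. The sketch below shows the retry idea generically; the names `wrap_tool_call`, `request`, and `handler` are illustrative and not the library's exact API.

```python
def wrap_tool_call(request, handler, max_retries: int = 2):
    """Run handler(request), retrying on exceptions up to max_retries times."""
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return handler(request)
        except Exception as exc:
            # A monitoring hook could log the failure here.
            last_exc = exc
    raise last_exc

# Simulated flaky tool: fails once, then succeeds.
calls = []
def flaky_handler(request):
    calls.append(request)
    if len(calls) < 2:
        raise RuntimeError("transient failure")
    return {"result": "ok"}

print(wrap_tool_call({"tool": "search"}, flaky_handler))  # -> {'result': 'ok'}
```

The async variant follows the same shape with an awaited handler.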