An optional checkpoint saver to persist the agent's state.
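For illustration, a minimal sketch of creating a saver to pass through this option (MemorySaver is the in-memory implementation from @langchain/langgraph; the agent factory and the exact option name are assumed from context and may differ by version):

```ts
import { MemorySaver } from "@langchain/langgraph";

// In-memory checkpointing, suitable for local development; use a persistent
// saver (e.g. a database-backed implementation) in production.
const checkpointer = new MemorySaver();

// Hypothetical wiring: the option name on the agent factory may be
// `checkpointer` or `checkpointSaver` depending on the version you use.
// const agent = createAgent({ model, tools, checkpointer });

// With a saver attached, state is persisted per thread:
// await agent.invoke(
//   { messages: [{ role: "user", content: "Hi!" }] },
//   { configurable: { thread_id: "thread-1" } }
// );
```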
The schema of the middleware context. Middleware context is read-only and not persisted between multiple invocations. It can be either:
A description of the tool.
Use to specify how to expose the agent name to the underlying supervisor LLM. Can be one of:

- undefined: Relies on the LLM provider's AIMessage#name field. Currently, only OpenAI supports this.
- "inline": Adds the agent name directly into the content field of the AIMessage using XML-style tags.
  Example: "How can I help you?" -> "<name>agent_name</name><content>How can I help you?</content>"

Middleware instances to run during agent execution. Each middleware can define its own state schema and hook into the agent lifecycle.
The name of the tool being called.
The tool response format.
If "content" then the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact" then the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.
Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
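For example, tying the call to a timeout (a sketch; `agent` stands in for whatever runnable you are invoking):

```ts
// Abort the call if it takes longer than 30 seconds.
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 30_000);

try {
  // `signal` is passed in the call options; the run is cancelled when it fires.
  const result = await agent.invoke(
    { messages: [{ role: "user", content: "Summarize the report." }] },
    { signal: controller.signal }
  );
  console.log(result);
} finally {
  clearTimeout(timeout);
}
```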
The schema of the middleware state. Middleware state is persisted between multiple invocations. It can be either:
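As a sketch of the distinction between the two (using Zod; the field names are made up), state carries values that should survive across invocations, while the context schema described earlier carries read-only, per-invocation values:

```ts
import { z } from "zod";

// Middleware state: persisted with the rest of the agent state, so values
// like counters or accumulated summaries survive across invocations.
const stateSchema = z.object({
  toolCallCount: z.number().default(0),
  runningSummary: z.string().default(""),
});

// Middleware context: supplied per invocation and read-only, e.g. caller
// identity or request-scoped settings.
const contextSchema = z.object({
  userId: z.string(),
  locale: z.string().default("en"),
});
```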
An optional store to persist the agent's state.
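For example (a sketch using the in-memory store from @langchain/langgraph; how it is passed to the agent is assumed from context):

```ts
import { InMemoryStore } from "@langchain/langgraph";

// Long-term, cross-thread storage; use a persistent BaseStore implementation
// in production.
const store = new InMemoryStore();

// Items are organized by namespace and key.
await store.put(["users", "user-1"], "preferences", { theme: "dark" });
const item = await store.get(["users", "user-1"], "preferences");

// Hypothetical wiring into the agent factory:
// const agent = createAgent({ model, tools, store });
```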
Determines the version of the graph to create.
Can be one of:

- "v1": The tool node processes a single message. All tool calls in the message are executed in parallel within the tool node.
- "v2": The tool node processes a single tool call. Tool calls are distributed across multiple instances of the tool node using the Send API (see the sketch after this entry).

The system message string for this step.
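Returning to the version option: for example, opting into per-tool-call dispatch (a sketch assuming the prebuilt ReAct agent factory; whether `version` is accepted there, and the other option names, depend on your library version):

```ts
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

// "v1" runs all tool calls from a message in parallel inside one tool node;
// "v2" dispatches each tool call to its own tool-node instance via Send.
const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [],
  version: "v2",
});
```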