CreateAgentParams

The schema of the middleware context. Middleware context is read-only and is not persisted between invocations. It can be either:
A description of the tool.
Use to specify how to expose the agent name to the underlying supervisor LLM.
undefined: Relies on the LLM provider's AIMessage#name field. Currently, only OpenAI supports this.
"inline": Adds the agent name directly into the content field of the AIMessage using XML-style tags.
Example: "How can I help you?" -> "<name>agent_name</name><content>How can I help you?</content>"

Middleware instances to run during agent execution. Each middleware can define its own state schema and hook into the agent lifecycle.
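The "inline" naming strategy described above can be sketched as a simple string transformation. The helper below (`formatInline` is a hypothetical name, not part of the library API) shows how the agent name is embedded in the message content so any provider can see it:

```typescript
// Hypothetical helper illustrating the "inline" strategy: the agent's name is
// wrapped into the message content with XML-style tags.
function formatInline(agentName: string, content: string): string {
  return `<name>${agentName}</name><content>${content}</content>`;
}

const wrapped = formatInline("weather_agent", "How can I help you?");
// "<name>weather_agent</name><content>How can I help you?</content>"
```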
The name of the tool being called.
The tool response format.
If "content", the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact", the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.
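The "content_and_artifact" shape above can be sketched as follows. The tool body and its names are illustrative stand-ins, not real library calls; the point is that the first tuple element becomes the ToolMessage content shown to the model, while the second travels out-of-band as the artifact:

```typescript
// (content, artifact) two-tuple, as expected by "content_and_artifact".
type ContentAndArtifact = [string, unknown];

// Hypothetical tool: returns a short summary for the model plus the raw
// results as an artifact the application can use directly.
function searchTool(query: string): ContentAndArtifact {
  const rawResults = [{ url: "https://example.com", score: 0.92 }];
  const summary = `Found ${rawResults.length} result(s) for "${query}"`;
  return [summary, rawResults];
}

const [content, artifact] = searchTool("langgraph");
// content -> 'Found 1 result(s) for "langgraph"'
```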
Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
The schema of the middleware state. Middleware state is persisted between multiple invocations. It can be either:
Determines the version of the graph to create. Can be one of:

"v1": The tool node processes the full AIMessage containing all tool calls. All tool calls are executed concurrently via Promise.all inside a single graph node. Choose v1 when your tools invoke sub-graphs or other long-running async work and you need true parallelism; the Promise.all approach is unaffected by LangGraph's per-task checkpoint serialisation.

"v2": Each tool call is dispatched as an independent graph task using the Send API. Tasks are scheduled in parallel by LangGraph, but when tools invoke sub-graphs the underlying checkpoint writes can cause effective serialisation, making concurrent tool calls execute sequentially. v2 is the better choice when you need per-tool-call checkpointing, independent fault isolation, or interrupt() support inside individual tool calls.
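The v1 execution model can be sketched in isolation. This is a simplified stand-in, not the library's actual tool node: the tool map and call shapes are invented for illustration, but it shows the key property that one node awaits every tool call together via Promise.all, so the calls run concurrently regardless of how the surrounding graph checkpoints tasks:

```typescript
// Simplified stand-in types for a tool-call payload and a tool registry.
type ToolCall = { name: string; args: Record<string, unknown> };

const tools: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  add: async (a) => String(Number(a.x) + Number(a.y)),
  echo: async (a) => String(a.text),
};

// v1-style node: a single node executes all tool calls from one AIMessage
// concurrently with Promise.all.
async function toolNodeV1(calls: ToolCall[]): Promise<string[]> {
  return Promise.all(calls.map((c) => tools[c.name](c.args)));
}

toolNodeV1([
  { name: "add", args: { x: 2, y: 3 } },
  { name: "echo", args: { text: "hi" } },
]).then((results) => console.log(results)); // [ "5", "hi" ]
```

Under v2, each of those calls would instead be dispatched as its own graph task, which is what enables per-call checkpointing and interrupt() at the cost of possible serialisation around sub-graph checkpoint writes.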