Tool execution node for LangGraph workflows.
This module provides prebuilt functionality for executing tools in LangGraph.
Tools are functions that models can call to interact with external systems, APIs, databases, or perform computations.
The module implements common design patterns for tool execution in LangGraph workflows.

Key Components:

- ToolNode: Main class for executing tools in LangGraph workflows
- InjectedState: Annotation for injecting graph state into tools
- InjectedStore: Annotation for injecting persistent store into tools
- ToolRuntime: Runtime information for tools, bundling together state, context,
  config, stream_writer, tool_call_id, and store
- tools_condition: Utility function for conditional routing based on tool calls

Typical Usage:
```python
from langchain_core.tools import tool
from langchain.tools import ToolNode

@tool
def my_tool(x: int) -> str:
    return f"Result: {x}"

tool_node = ToolNode([my_tool])
```

Wrapper for tool call execution with multi-call support.
Async wrapper for tool call execution with multi-call support.
Convert tool output to ToolMessage content format.
Handles str, list[dict] (content blocks), and arbitrary objects by attempting
JSON serialization with fallback to str().
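The coercion rules described above can be sketched in plain Python. This is a simplified illustration of the behavior, not the module's actual implementation; the function name `to_tool_message_content` is chosen here for clarity.

```python
import json


def to_tool_message_content(output):
    """Coerce arbitrary tool output into ToolMessage-compatible content.

    Sketch of the rules: strings and lists of content-block dicts pass
    through unchanged; anything else is JSON-serialized, falling back
    to str() for objects json cannot handle.
    """
    if isinstance(output, str):
        return output
    if isinstance(output, list) and all(isinstance(b, dict) for b in output):
        # A list of content blocks is already in message-content format.
        return output
    try:
        return json.dumps(output, ensure_ascii=False)
    except TypeError:
        # Non-serializable objects (sets, custom classes, ...) fall back to str().
        return str(output)
```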
Conditional routing function for tool-calling workflows.
This utility function implements the standard conditional logic for ReAct-style
agents: if the last AIMessage contains tool calls, route to the tool execution
node; otherwise, end the workflow. This pattern is fundamental to most tool-calling
agent architectures.
The function handles multiple state formats commonly used in LangGraph applications, making it flexible for different graph designs while maintaining consistent behavior.
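The routing logic reduces to a small decision: inspect the last message and branch on whether it carries tool calls. Below is a simplified sketch of that logic using dict-based messages; the real `tools_condition` works with LangChain message objects and may recognize additional state formats.

```python
def tools_condition(state):
    """Route to "tools" if the last AI message has pending tool calls, else end.

    Sketch assuming state is either a list of messages or a dict with a
    "messages" key, where each message is a dict with an optional
    "tool_calls" list.
    """
    messages = state["messages"] if isinstance(state, dict) else state
    if not messages:
        return "__end__"
    last = messages[-1]
    if last.get("tool_calls"):
        return "tools"  # route to the tool execution node
    return "__end__"    # no pending tool calls: finish the workflow
```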
Tool execution request passed to tool call interceptors.
ToolCall with additional context for graph state.
This is an internal data structure meant to help the ToolNode accept
tool calls with additional context (e.g. state) when dispatched using the
Send API.
The Send API is used in create_agent to distribute tool calls in parallel and support human-in-the-loop workflows where graph execution may be paused for an indefinite time.
An error occurred while invoking a tool due to invalid arguments.
This exception is raised only when a tool is invoked through the ToolNode.
A node for executing tools in LangGraph workflows.
Handles tool execution patterns including function calls, state injection, persistent storage, and control flow, and manages parallel execution and error handling.
Use ToolNode when building custom workflows that require fine-grained control over
tool execution—for example, custom routing logic, specialized error handling, or
non-standard agent architectures.
For standard ReAct-style agents, use create_agent
instead. It uses ToolNode internally with sensible defaults for the agent loop,
conditional routing, and error handling.
Runtime context automatically injected into tools.
This is distinct from Runtime (from langgraph.runtime), which is injected
into graph nodes and middleware. ToolRuntime includes additional tool-specific
attributes like config, state, and tool_call_id that Runtime does not
have.
When a tool function has a parameter named runtime with type hint
ToolRuntime, the tool execution system will automatically inject an instance
containing:
- state: The current graph state
- tool_call_id: The ID of the current tool call
- config: RunnableConfig for the current execution
- context: Runtime context (shared with Runtime)
- store: BaseStore instance for persistent storage (shared with Runtime)
- stream_writer: StreamWriter for streaming output (shared with Runtime)

No Annotated wrapper is needed - just use runtime: ToolRuntime as a parameter.
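The injection mechanism can be illustrated in plain Python: match a parameter by its type hint and supply it at call time, so the model never sees or fills that argument. The `ToolRuntime` dataclass and `call_with_runtime` helper below are stand-ins for illustration, not the library's actual classes.

```python
import inspect
from dataclasses import dataclass, field


@dataclass
class ToolRuntime:
    """Stand-in for the real ToolRuntime: bundles per-call execution info."""
    state: dict
    tool_call_id: str
    config: dict = field(default_factory=dict)
    context: object = None
    store: object = None
    stream_writer: object = None


def call_with_runtime(fn, args, runtime):
    """Call fn, injecting `runtime` into any ToolRuntime-annotated parameter.

    Sketch of the mechanism: the parameter is matched by its type hint,
    so it never appears in the model-facing tool schema.
    """
    kwargs = dict(args)
    for name, param in inspect.signature(fn).parameters.items():
        if param.annotation is ToolRuntime:
            kwargs[name] = runtime
    return fn(**kwargs)


def greet(name: str, runtime: ToolRuntime) -> str:
    # The tool reads graph state and the call id from the injected runtime.
    user = runtime.state.get("user", "unknown")
    return f"[{runtime.tool_call_id}] Hello {name}, from {user}"
```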
Annotation for injecting graph state into tool arguments.
This annotation enables tools to access graph state without exposing state
management details to the language model. Tools annotated with InjectedState
receive state data automatically during execution while remaining invisible
to the model's tool-calling interface.
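The "invisible to the model" behavior comes from stripping annotated parameters out of the model-facing schema while the executor fills them in. The sketch below shows that idea with `typing.Annotated` and a marker class; `model_facing_params` is a hypothetical helper for illustration, not the library's schema logic.

```python
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints


class InjectedState:
    """Marker: this parameter is filled from graph state, hidden from the model."""


def model_facing_params(fn):
    """Return the parameter names the model should see.

    Sketch: parameters annotated with InjectedState are excluded from the
    model-facing schema and supplied by the executor instead.
    """
    hints = get_type_hints(fn, include_extras=True)
    visible = []
    for name in inspect.signature(fn).parameters:
        hint = hints.get(name)
        metadata = get_args(hint)[1:] if get_origin(hint) is Annotated else ()
        if not any(m is InjectedState or isinstance(m, InjectedState) for m in metadata):
            visible.append(name)
    return visible


def summarize(query: str, state: Annotated[dict, InjectedState]) -> str:
    # `state` is injected by the executor; the model only supplies `query`.
    return f"{query}: {len(state.get('messages', []))} messages so far"
```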
Annotation for injecting persistent store into tool arguments.
This annotation enables tools to access LangGraph's persistent storage system
without exposing storage details to the language model. Tools annotated with
InjectedStore receive the store instance automatically during execution while
remaining invisible to the model's tool-calling interface.
The store provides persistent, cross-session data storage that tools can use to maintain context, user preferences, or any other data that needs to persist beyond individual workflow executions.
The InjectedStore annotation requires langchain-core >= 0.3.8.
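The cross-session persistence pattern can be sketched without the library: a tool receives a store via an annotated parameter and reads/writes namespaced keys that outlive any single run. The `InMemoryStore` class, the `save_preference` tool, and the `("users", "u1")` namespace below are illustrative stand-ins, not LangGraph's actual BaseStore API.

```python
from typing import Annotated


class InjectedStore:
    """Marker: this parameter receives the persistent store, hidden from the model."""


class InMemoryStore:
    """Minimal dict-backed stand-in for a persistent store."""

    def __init__(self):
        self._data = {}

    def put(self, namespace, key, value):
        self._data[(namespace, key)] = value

    def get(self, namespace, key):
        return self._data.get((namespace, key))


def save_preference(pref: str, store: Annotated[InMemoryStore, InjectedStore]) -> str:
    # Persist the value under a per-user namespace so later runs can read it.
    store.put(("users", "u1"), "preference", pref)
    return "saved"
```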