Agents¶
langgraph.prebuilt exposes a higher-level API for creating and executing agents and tools.
Modules:

Name | Description |
---|---|
middleware | Middleware plugins for agents. |
middleware_agent | Middleware agent implementation. |
react_agent | ReAct agent implementation. |
structured_output | Types for setting agent response formats. |
Classes:

Name | Description |
---|---|
AgentState | The state of the agent. |
ToolNode | A node for executing tools in LangGraph workflows. |
Functions:

Name | Description |
---|---|
create_agent | Creates an agent graph that calls tools in a loop until a stopping condition is met. |
AgentState ¶
Bases: TypedDict
The state of the agent.
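Because AgentState is a TypedDict, custom agent state is typically defined by extending it. A minimal sketch, assuming AgentState is importable from langchain.agents (matching the import style in the examples below) and using a hypothetical user_name field:

```python
from langchain.agents import AgentState  # import path assumed


class CustomState(AgentState):
    """Agent state extended with a custom field (hypothetical)."""

    user_name: str  # hypothetical extra field carried alongside "messages"
```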
ToolNode ¶
Bases: RunnableCallable
A node for executing tools in LangGraph workflows.
Handles tool execution patterns including function calls, state injection, persistent storage, and control flow, and manages parallel execution and error handling.
Input Formats

- Graph state with a messages key containing a list of messages:
  - Common representation for agentic workflows
  - Supports a custom messages key via the messages_key parameter
- Message list: [AIMessage(..., tool_calls=[...])]
  - A list of messages with tool calls in the last AIMessage
- Direct tool calls: [{"name": "tool", "args": {...}, "id": "1", "type": "tool_call"}]
  - Bypasses message parsing for direct tool execution
  - For programmatic tool invocation and testing
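A minimal sketch of the three input formats, reusing the calculator tool_node built in the examples below (message content and tool-call ids are illustrative); since ToolNode is a Runnable, each format can be passed directly to invoke:

```python
from langchain_core.messages import AIMessage

# 1. Graph state with a "messages" key
state_input = {
    "messages": [
        AIMessage(
            content="",
            tool_calls=[{"name": "calculator", "args": {"a": 1, "b": 2}, "id": "1"}],
        )
    ]
}
tool_node.invoke(state_input)  # -> {"messages": [ToolMessage(...)]}

# 2. Bare message list; tool calls are read from the last AIMessage
tool_node.invoke(state_input["messages"])  # -> [ToolMessage(...)]

# 3. Direct tool calls, bypassing message parsing entirely
tool_node.invoke(
    [{"name": "calculator", "args": {"a": 1, "b": 2}, "id": "1", "type": "tool_call"}]
)
```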
Output Formats
Output format depends on input type and tool behavior:
For regular tools:

- Dict input → {"messages": [ToolMessage(...)]}
- List input → [ToolMessage(...)]

For Command tools:

- Returns [Command(...)] or a mixed list with regular tool outputs
- Commands can update state, trigger navigation, or send messages
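A sketch of a Command-returning tool, assuming Command and InjectedToolCallId are importable from langgraph.types and langchain_core.tools respectively, and that user_name is a hypothetical custom state key:

```python
from typing import Annotated

from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command


@tool
def record_user_name(
    name: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> Command:
    """Record the user's name in graph state."""
    # Returning a Command lets the tool update state directly; a ToolMessage
    # is included in the update so the model still sees a tool response.
    return Command(
        update={
            "user_name": name,  # hypothetical custom state key
            "messages": [ToolMessage("Recorded.", tool_call_id=tool_call_id)],
        }
    )
```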
Parameters:

Name | Type | Description | Default |
---|---|---|---|
tools | Sequence[BaseTool \| Callable] | A sequence of tools that can be invoked by this node. Supports BaseTool instances (tools with schemas and metadata) and plain functions (automatically converted to tools with inferred schemas). | required |
name | str | The name identifier for this node in the graph. Used for debugging and visualization. | 'tools' |
tags | list[str] \| None | Optional metadata tags to associate with the node for filtering and organization. | None |
handle_tool_errors | bool \| str \| Callable[..., str] \| type[Exception] \| tuple[type[Exception], ...] | Configuration for error handling during tool execution. Supports multiple strategies (bool, error-message string, callable, or exception type(s)). The default is a callable that catches tool invocation errors (caused by invalid arguments from the model) and returns a descriptive error message, while tool execution errors are re-raised. | _default_handle_tool_errors |
messages_key | str | The key in the state dictionary that contains the message list; the same key is used for the output ToolMessages. Allows custom state schemas with different message field names. | 'messages' |
Examples:

Basic usage:

```python
from langchain.tools import ToolNode
from langchain_core.tools import tool


@tool
def calculator(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


tool_node = ToolNode([calculator])
```

State injection:

```python
from typing_extensions import Annotated

from langchain.tools import InjectedState


@tool
def context_tool(query: str, state: Annotated[dict, InjectedState]) -> str:
    """Some tool that uses state."""
    return f"Query: {query}, Messages: {len(state['messages'])}"


tool_node = ToolNode([context_tool])
```

Error handling:

```python
def handle_errors(e: ValueError) -> str:
    return "Invalid input provided"


tool_node = ToolNode([my_tool], handle_tool_errors=handle_errors)
```
Methods:

Name | Description |
---|---|
__init__ | Initialize the ToolNode with the provided tools and configuration. |
inject_tool_args | Inject graph state and store into tool call arguments. |
Attributes:

Name | Type | Description |
---|---|---|
tools_by_name | dict[str, BaseTool] | Mapping from tool name to BaseTool instance. |
tools_by_name property ¶
Mapping from tool name to BaseTool instance.
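For instance, the mapping can be used to call a registered tool directly (reusing the calculator tool from the examples above):

```python
calc = tool_node.tools_by_name["calculator"]
print(calc.invoke({"a": 1, "b": 2}))  # 3
```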
__init__ ¶

```python
__init__(
    tools: Sequence[BaseTool | Callable],
    *,
    name: str = "tools",
    tags: list[str] | None = None,
    handle_tool_errors: bool
    | str
    | Callable[..., str]
    | type[Exception]
    | tuple[type[Exception], ...] = _default_handle_tool_errors,
    messages_key: str = "messages",
) -> None
```
Initialize the ToolNode with the provided tools and configuration.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
tools | Sequence[BaseTool \| Callable] | Sequence of tools to make available for execution. | required |
name | str | Node name for graph identification. | 'tools' |
tags | list[str] \| None | Optional metadata tags. | None |
handle_tool_errors | bool \| str \| Callable[..., str] \| type[Exception] \| tuple[type[Exception], ...] | Error handling configuration. | _default_handle_tool_errors |
messages_key | str | State key containing messages. | 'messages' |
inject_tool_args ¶

```python
inject_tool_args(
    tool_call: ToolCall,
    input: list[AnyMessage] | dict[str, Any] | BaseModel,
    store: BaseStore | None,
) -> ToolCall
```
Inject graph state and store into tool call arguments.
This method enables tools to access graph context that should not be controlled by the model. Tools can declare dependencies on graph state or persistent storage using InjectedState and InjectedStore annotations. This method automatically identifies these dependencies and injects the appropriate values.
The injection process preserves the original tool call structure while adding the necessary context arguments. This allows tools to be both model-callable and context-aware without exposing internal state management to the model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
tool_call | ToolCall | The tool call dictionary to augment with injected arguments. Must contain 'name', 'args', 'id', and 'type' fields. | required |
input | list[AnyMessage] \| dict[str, Any] \| BaseModel | The current graph state to inject into tools requiring state access. Can be a message list, state dictionary, or BaseModel instance. | required |
store | BaseStore \| None | The persistent store instance to inject into tools requiring storage. Will be None if no store is configured for the graph. | required |
Returns:

Type | Description |
---|---|
ToolCall | A new ToolCall dictionary with the same structure as the input but with additional arguments injected based on the tool's annotation requirements. |
Raises:

Type | Description |
---|---|
ValueError | If a tool requires store injection but no store is provided, or if state injection requirements cannot be satisfied. |
Note
This method is automatically called during tool execution but can also be used manually when working with the Send API or custom routing logic. The injection is performed on a copy of the tool call to avoid mutating the original.
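A sketch of such manual use, reusing the context_tool and tool_node from the state-injection example above (the tool-call id is illustrative):

```python
tool_call = {
    "name": "context_tool",
    "args": {"query": "hello"},  # only the model-controlled arguments
    "id": "1",
    "type": "tool_call",
}
state = {"messages": []}

# Returns a copy with the InjectedState argument added under the tool's
# parameter name; the original tool_call dict is left unmodified.
injected = tool_node.inject_tool_args(tool_call, state, store=None)
print(sorted(injected["args"]))  # ['query', 'state']
```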
create_agent ¶
```python
create_agent(
    model: str
    | BaseChatModel
    | SyncOrAsync[[StateT, Runtime[ContextT]], BaseChatModel],
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | ToolNode,
    *,
    middleware: Sequence[AgentMiddleware] = (),
    prompt: Prompt | None = None,
    response_format: ToolStrategy[StructuredResponseT]
    | ProviderStrategy[StructuredResponseT]
    | type[StructuredResponseT]
    | None = None,
    pre_model_hook: RunnableLike | None = None,
    post_model_hook: RunnableLike | None = None,
    state_schema: type[StateT] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    version: Literal["v1", "v2"] = "v2",
    name: str | None = None,
    **deprecated_kwargs: Any,
) -> CompiledStateGraph[StateT, ContextT]
```
Creates an agent graph that calls tools in a loop until a stopping condition is met.
For more details on using create_agent, see the Agents documentation.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
model | str \| BaseChatModel \| SyncOrAsync[[StateT, Runtime[ContextT]], BaseChatModel] | The language model for the agent. Supports static and dynamic model selection: dynamic functions receive graph state and runtime, enabling context-dependent model selection, and must return a BaseChatModel. Ensure dynamically returned models have the agent's tools bound (see the dynamic selection sketch below this table). | required |
tools | Sequence[BaseTool \| Callable \| dict[str, Any]] \| ToolNode | A list of tools or a ToolNode instance. If an empty list is provided, the agent will consist of a single LLM node without tool calling. | required |
middleware | Sequence[AgentMiddleware] | A sequence of middleware plugins to apply to the agent. | () |
prompt | Prompt \| None | An optional prompt for the LLM. Can be a string or SystemMessage (prepended to the messages list), or a callable or Runnable that takes graph state and returns the model input. | None |
response_format | ToolStrategy[StructuredResponseT] \| ProviderStrategy[StructuredResponseT] \| type[StructuredResponseT] \| None | An optional configuration for structured responses: a ToolStrategy, a ProviderStrategy, or a schema type. If provided, the agent handles structured output via tool calls during the normal conversation flow; when the model calls a structured output tool, the response is captured and returned in the 'structured_response' state key. If not provided, structured output is disabled. Structured responses are handled directly in the model call node, eliminating the need for separate structured response nodes. | None |
pre_model_hook | RunnableLike \| None | An optional node to add before the model-calling ("agent") node, useful for managing long message histories (for example, trimming or summarization). | None |
post_model_hook | RunnableLike \| None | An optional node to add after the model-calling ("agent") node, useful for validation, guardrails, or other post-processing. Only available with version="v2". | None |
state_schema | type[StateT] \| None | An optional state schema that defines graph state. Must include messages and remaining_steps keys. Defaults to AgentState, which defines those keys. | None |
context_schema | type[ContextT] \| None | An optional schema for runtime context. | None |
checkpointer | Checkpointer \| None | An optional checkpoint saver object. This is used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation). | None |
store | BaseStore \| None | An optional store object. This is used for persisting data across multiple threads (e.g., multiple conversations / users). | None |
interrupt_before | list[str] \| None | An optional list of node names to interrupt before. Should be one of "agent" or "tools". This is useful if you want to add a user confirmation or other interrupt before taking an action. | None |
interrupt_after | list[str] \| None | An optional list of node names to interrupt after. Should be one of "agent" or "tools". This is useful if you want to return directly or run additional processing on an output. | None |
debug | bool | A flag indicating whether to enable debug mode. | False |
version | Literal['v1', 'v2'] | Determines the version of the graph to create. With "v1", all tool calls in an AI message are executed in parallel within a single tool node invocation; with "v2", tool calls are distributed across tool node invocations via the Send API. | 'v2' |
name | str \| None | An optional name for the CompiledStateGraph. This name will be automatically used when adding the agent graph to another graph as a subgraph node, which is particularly useful for building multi-agent systems. | None |
Warning: The config_schema parameter is deprecated in v0.6.0 and support will be removed in v2.0.0. Please use context_schema instead to specify the schema for run-scoped context.
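A sketch of dynamic model selection, as referenced by the model parameter above. The model names and the message-count heuristic are illustrative; init_chat_model and bind_tools are assumed to be available as in current LangChain releases:

```python
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain_core.language_models import BaseChatModel
from langchain_core.tools import tool


@tool
def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"


def select_model(state, runtime) -> BaseChatModel:
    """Pick a model from graph state (message count is an illustrative heuristic)."""
    name = "openai:gpt-4o" if len(state["messages"]) > 10 else "openai:gpt-4o-mini"
    # Dynamically returned models should have the agent's tools bound.
    return init_chat_model(name).bind_tools([check_weather])


graph = create_agent(select_model, tools=[check_weather])
```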
Returns:

Type | Description |
---|---|
CompiledStateGraph[StateT, ContextT] | A compiled LangChain runnable that can be used for chat interactions. |
The "agent" node calls the language model with the messages list (after applying the prompt).
If the resulting AIMessage contains tool_calls
,
the graph will then call the "tools".
The "tools" node executes the tools (1 tool per tool_call
)
and adds the responses to the messages list as ToolMessage
objects.
The agent node then calls the language model again.
The process repeats until no more tool_calls
are present in the response.
The agent then returns the full list of messages as a dictionary containing the key "messages".
```mermaid
sequenceDiagram
    participant U as User
    participant A as LLM
    participant T as Tools
    U->>A: Initial input
    Note over A: Prompt + LLM
    loop while tool_calls present
        A->>T: Execute tools
        T-->>A: ToolMessage for each tool_call
    end
    A->>U: Return final state
```
Example

```python
from langchain.agents import create_agent


def check_weather(location: str) -> str:
    '''Return the weather forecast for the specified location.'''
    return f"It's always sunny in {location}"


graph = create_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```
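Since response_format also accepts a plain schema type per the signature above, a minimal structured-output sketch might look like the following; the WeatherReport schema is hypothetical, check_weather is reused from the example above, and the result is read from the structured_response state key described under the response_format parameter:

```python
from pydantic import BaseModel


class WeatherReport(BaseModel):
    """Hypothetical schema for the agent's final answer."""

    location: str
    forecast: str


structured_graph = create_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    response_format=WeatherReport,
)
result = structured_graph.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
print(result["structured_response"])  # -> WeatherReport(location=..., forecast=...)
```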