# Agents

## langgraph.prebuilt.chat_agent_executor
| FUNCTION | DESCRIPTION |
|---|---|
| `create_react_agent` | Creates an agent graph that calls tools in a loop until a stopping condition is met. |
### AgentState (deprecated)

Bases: `TypedDict`

> **Deprecated:** `AgentState` has been moved to `langchain.agents`. Please update your import to `from langchain.agents import AgentState`.

The state of the agent.
### create_react_agent (deprecated)

```python
create_react_agent(
    model: str
    | LanguageModelLike
    | Callable[[StateSchema, Runtime[ContextT]], BaseChatModel]
    | Callable[[StateSchema, Runtime[ContextT]], Awaitable[BaseChatModel]]
    | Callable[
        [StateSchema, Runtime[ContextT]], Runnable[LanguageModelInput, BaseMessage]
    ]
    | Callable[
        [StateSchema, Runtime[ContextT]],
        Awaitable[Runnable[LanguageModelInput, BaseMessage]],
    ],
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | ToolNode,
    *,
    prompt: Prompt | None = None,
    response_format: StructuredResponseSchema
    | tuple[str, StructuredResponseSchema]
    | None = None,
    pre_model_hook: RunnableLike | None = None,
    post_model_hook: RunnableLike | None = None,
    state_schema: StateSchemaType | None = None,
    context_schema: type[Any] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    version: Literal["v1", "v2"] = "v2",
    name: str | None = None,
    **deprecated_kwargs: Any,
) -> CompiledStateGraph
```
> **Deprecated:** `create_react_agent` has been moved to `langchain.agents`. Please update your import to `from langchain.agents import create_agent`.

Creates an agent graph that calls tools in a loop until a stopping condition is met.

For more details on using `create_react_agent`, see the Agents documentation.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The language model for the agent. Supports static models and dynamic model selection: a callable receives the graph state and runtime, enabling context-dependent model choice (see the sketch following this table). Dynamic callables must return a chat model with the appropriate tools already bound (e.g. via `.bind_tools()`). TYPE: `str \| LanguageModelLike \| Callable` |
| `tools` | A list of tools (`BaseTool` instances, plain callables, or tool dicts) or a `ToolNode` instance. TYPE: `Sequence[BaseTool \| Callable \| dict[str, Any]] \| ToolNode` |
| `prompt` | An optional prompt for the LLM. Can take a few different forms: a `str` (converted to a system message), a `SystemMessage`, or a callable or `Runnable` that takes the graph state and returns the model input. TYPE: `Prompt \| None` |
| `response_format` | An optional schema for the final agent output. If provided, the output is formatted to match the schema and returned in the `structured_response` state key; if not provided, `structured_response` is absent from the output state. Can be passed in as a schema type or as a `(prompt, schema)` tuple. **Important:** the graph makes a separate call to the LLM to generate the structured response after the agent loop is finished; this is not the only strategy for structured responses (see the structured-output guide for alternatives, and the short sketch after the example below). TYPE: `StructuredResponseSchema \| tuple[str, StructuredResponseSchema] \| None` |
| `pre_model_hook` | An optional node to add before the `agent` node (the node that calls the LLM), useful for message trimming, summarization, and similar preprocessing. **Important:** the hook must return a state update containing `messages` or `llm_input_messages`; at least one of the two must be provided and is used as the input to the `agent` node. TYPE: `RunnableLike \| None` |
| `post_model_hook` | An optional node to add after the `agent` node, useful for human-in-the-loop checks, guardrails, validation, and other post-processing. **Note:** only available with `version="v2"`. TYPE: `RunnableLike \| None` |
| `state_schema` | An optional state schema that defines the graph state. Must have `messages` and `remaining_steps` keys. Defaults to `AgentState`, which defines those two keys. TYPE: `StateSchemaType \| None` |
| `context_schema` | An optional schema for runtime context. TYPE: `type[Any] \| None` |
| `checkpointer` | An optional checkpoint saver object, used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation). TYPE: `Checkpointer \| None` |
| `store` | An optional store object, used for persisting data across multiple threads (e.g., multiple conversations / users). TYPE: `BaseStore \| None` |
| `interrupt_before` | An optional list of node names (`"agent"` or `"tools"`) to interrupt before. Useful if you want to add a user confirmation or other interrupt before an action is taken. TYPE: `list[str] \| None` |
| `interrupt_after` | An optional list of node names (`"agent"` or `"tools"`) to interrupt after. Useful if you want to return directly or run additional processing on an output. TYPE: `list[str] \| None` |
| `debug` | A flag indicating whether to enable debug mode. TYPE: `bool` |
| `version` | Determines the version of the graph to create. `"v1"`: all tool calls in a message are executed in parallel within a single tool node invocation. `"v2"`: each tool call is dispatched to its own instance of the tool node via the `Send` API. TYPE: `Literal["v1", "v2"]` |
| `name` | An optional name for the `CompiledStateGraph`, used automatically when adding the agent as a subgraph node of another graph. TYPE: `str \| None` |

> **Deprecated:** the `config_schema` parameter is deprecated in v0.6.0 and support will be removed in v2.0.0. Please use `context_schema` instead to specify the schema for run-scoped context.
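As referenced in the `model` row above, here is a minimal sketch of dynamic model selection. The model names, the message-count threshold, and the `check_weather` tool are illustrative assumptions, not part of the API:

```python
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool

from langgraph.prebuilt import create_react_agent
from langgraph.runtime import Runtime


@tool
def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"


# Hypothetical model choices; any chat model identifiers would do.
fast_model = init_chat_model("openai:gpt-4o-mini")
strong_model = init_chat_model("anthropic:claude-3-7-sonnet-latest")


def select_model(state: dict, runtime: Runtime):
    """Pick a model from graph state; tools must be bound before returning."""
    model = strong_model if len(state["messages"]) > 10 else fast_model
    return model.bind_tools([check_weather])


agent = create_react_agent(select_model, tools=[check_weather])
```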
| RETURNS | DESCRIPTION |
|---|---|
| `CompiledStateGraph` | A compiled LangChain runnable that can be used for chat interactions. |
The "agent" node calls the language model with the messages list (after applying the prompt).
If the resulting AIMessage contains tool_calls, the graph will then call the "tools".
The "tools" node executes the tools (1 tool per tool_call) and adds the responses to the messages list
as ToolMessage objects. The agent node then calls the language model again.
The process repeats until no more tool_calls are present in the response.
The agent then returns the full list of messages as a dictionary containing the key 'messages'.
```mermaid
sequenceDiagram
    participant U as User
    participant A as LLM
    participant T as Tools
    U->>A: Initial input
    Note over A: Prompt + LLM
    loop while tool_calls present
        A->>T: Execute tools
        T-->>A: ToolMessage for each tool_call
    end
    A->>U: Return final state
```
**Example**

```python
from langgraph.prebuilt import create_react_agent


def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"


graph = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```
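For the `response_format` parameter described above, a hedged sketch of structured output; the `WeatherReport` schema is an illustrative assumption:

```python
from pydantic import BaseModel

from langgraph.prebuilt import create_react_agent


class WeatherReport(BaseModel):
    """Illustrative schema for the final structured answer."""

    location: str
    forecast: str


def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"


graph = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    response_format=WeatherReport,
)
result = graph.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
# Populated by the extra LLM call that runs after the agent loop finishes.
print(result["structured_response"])
```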
## langgraph.prebuilt.tool_node.ToolNode

Bases: `RunnableCallable`

A node for executing tools in LangGraph workflows.

Handles tool execution patterns including function calls, state injection, persistent storage, and control flow, and manages parallel execution and error handling.
**Input Formats**

- Graph state with a `messages` key containing a list of messages: the common representation for agentic workflows. A custom messages key is supported via the `messages_key` parameter.
- Message list: `[AIMessage(..., tool_calls=[...])]`, a list of messages with tool calls in the last `AIMessage`.
- Direct tool calls: `[{"name": "tool", "args": {...}, "id": "1", "type": "tool_call"}]`, which bypasses message parsing for direct tool execution; useful for programmatic tool invocation and testing.

**Output Formats**

The output format depends on the input type and tool behavior (see the sketch after this list).

For regular tools:

- Dict input → `{"messages": [ToolMessage(...)]}`
- List input → `[ToolMessage(...)]`

For `Command` tools:

- Returns `[Command(...)]` or a mixed list with regular tool outputs. A `Command` can update state, trigger navigation, or send messages.
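A minimal sketch of the input/output pairing above; the `add` tool and the hard-coded tool call are illustrative:

```python
from langchain_core.messages import AIMessage
from langchain_core.tools import tool

from langgraph.prebuilt import ToolNode


@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


node = ToolNode([add])
call = {"name": "add", "args": {"a": 1, "b": 2}, "id": "1", "type": "tool_call"}

# Dict input with a "messages" key -> {"messages": [ToolMessage(...)]}
print(node.invoke({"messages": [AIMessage("", tool_calls=[call])]}))

# Direct tool-call list -> [ToolMessage(...)], bypassing message parsing
print(node.invoke([call]))
```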
| PARAMETER | DESCRIPTION |
|---|---|
| `tools` | A sequence of tools (`BaseTool` instances or plain callables) that can be invoked by this node. TYPE: `Sequence[BaseTool \| Callable]` |
| `name` | The name identifier for this node in the graph. Used for debugging and visualization. TYPE: `str` |
| `tags` | Optional metadata tags to associate with the node for filtering and organization. |
| `handle_tool_errors` | Configuration for error handling during tool execution. Supports multiple strategies: a `bool` (catch all errors or none), a custom error-message string, a callable that maps an exception to a message, or a tuple of exception types to catch. The default handler converts errors from invalid, model-generated tool calls into error `ToolMessage`s so the model can retry. TYPE: `bool \| str \| Callable[..., str] \| tuple[type[Exception], ...]` |
| `messages_key` | The key in the state dictionary that contains the message list. The same key is used for the output `ToolMessage` objects, allowing custom state schemas with different message field names. TYPE: `str` |
**Examples**

Basic usage:

```python
from langchain.tools import ToolNode
from langchain_core.tools import tool


@tool
def calculator(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


tool_node = ToolNode([calculator])
```

State injection:

```python
from typing_extensions import Annotated

from langchain.tools import InjectedState


@tool
def context_tool(query: str, state: Annotated[dict, InjectedState]) -> str:
    """Some tool that uses state."""
    return f"Query: {query}, Messages: {len(state['messages'])}"


tool_node = ToolNode([context_tool])
```

Error handling:

```python
def handle_errors(e: ValueError) -> str:
    return "Invalid input provided"


tool_node = ToolNode([my_tool], handle_tool_errors=handle_errors)
```
## langgraph.prebuilt.tool_node

Tool execution node for LangGraph workflows.

This module provides prebuilt functionality for executing tools in LangGraph. Tools are functions that models can call to interact with external systems, APIs, or databases, or to perform computations.

The module implements design patterns for:

- Parallel execution of multiple tool calls for efficiency
- Robust error handling with customizable error messages
- State injection for tools that need access to graph state
- Store injection for tools that need persistent storage
- Command-based state updates for advanced control flow

Key components:

- `ToolNode`: Main class for executing tools in LangGraph workflows
- `InjectedState`: Annotation for injecting graph state into tools
- `InjectedStore`: Annotation for injecting persistent store into tools
- `ToolRuntime`: Runtime information for tools, bundling together `state`, `context`, `config`, `stream_writer`, `tool_call_id`, and `store`
- `tools_condition`: Utility function for conditional routing based on tool calls

**Typical Usage**
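The snippet that belongs under this heading was lost in extraction; what follows is a hedged reconstruction of the usual wiring, with the `get_time` tool and the model choice as illustrative assumptions:

```python
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool

from langgraph.graph import START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode, tools_condition


@tool
def get_time() -> str:
    """Return the current time (illustrative tool)."""
    return "12:00"


model = init_chat_model("anthropic:claude-3-7-sonnet-latest").bind_tools([get_time])


def call_model(state: MessagesState):
    return {"messages": [model.invoke(state["messages"])]}


builder = StateGraph(MessagesState)
builder.add_node("model", call_model)
builder.add_node("tools", ToolNode([get_time]))
builder.add_edge(START, "model")
# Route to "tools" when the last AIMessage has tool calls, else end.
builder.add_conditional_edges("model", tools_condition)
builder.add_edge("tools", "model")
graph = builder.compile()
```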
| FUNCTION | DESCRIPTION |
|---|---|
| `tools_condition` | Conditional routing function for tool-calling workflows. |
### InjectedState

Bases: `InjectedToolArg`

Annotation for injecting graph state into tool arguments.

This annotation enables tools to access graph state without exposing state management details to the language model. Tools annotated with `InjectedState` receive state data automatically during execution while remaining invisible to the model's tool-calling interface.
| PARAMETER | DESCRIPTION |
|---|---|
| `field` | Optional key to extract from the state dictionary. If `None`, the entire state is injected. TYPE: `str \| None` |
**Example**

```python
from typing import List

from langchain.tools import InjectedState, ToolNode, tool
from langchain_core.messages import AIMessage, BaseMessage
from typing_extensions import Annotated, TypedDict


class AgentState(TypedDict):
    messages: List[BaseMessage]
    foo: str


@tool
def state_tool(x: int, state: Annotated[dict, InjectedState]) -> str:
    """Do something with state."""
    if len(state["messages"]) > 2:
        return state["foo"] + str(x)
    else:
        return "not enough messages"


@tool
def foo_tool(x: int, foo: Annotated[str, InjectedState("foo")]) -> str:
    """Do something else with state."""
    return foo + str(x + 1)


node = ToolNode([state_tool, foo_tool])

tool_call1 = {"name": "state_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
tool_call2 = {"name": "foo_tool", "args": {"x": 1}, "id": "2", "type": "tool_call"}
state = {
    "messages": [AIMessage("", tool_calls=[tool_call1, tool_call2])],
    "foo": "bar",
}
node.invoke(state)
```
**Note**

- `InjectedState` arguments are automatically excluded from tool schemas presented to language models
- `ToolNode` handles the injection process during execution
- Tools can mix regular arguments (controlled by the model) with injected arguments (controlled by the system)
- State injection occurs after the model generates tool calls but before tool execution
| METHOD | DESCRIPTION |
|---|---|
| `__init__` | Initialize the `InjectedState` annotation. |
### InjectedStore

Bases: `InjectedToolArg`

Annotation for injecting persistent store into tool arguments.

This annotation enables tools to access LangGraph's persistent storage system without exposing storage details to the language model. Tools annotated with `InjectedStore` receive the store instance automatically during execution while remaining invisible to the model's tool-calling interface.

The store provides persistent, cross-session data storage that tools can use for maintaining context, user preferences, or any other data that needs to persist beyond individual workflow executions.

> **Warning:** the `InjectedStore` annotation requires `langchain-core >= 0.3.8`.
**Example**

```python
from typing import Any

from langchain.tools import InjectedStore, ToolNode, tool
from typing_extensions import Annotated


@tool
def save_preference(
    key: str,
    value: str,
    store: Annotated[Any, InjectedStore()],
) -> str:
    """Save user preference to persistent storage."""
    store.put(("preferences",), key, value)
    return f"Saved {key} = {value}"


@tool
def get_preference(
    key: str,
    store: Annotated[Any, InjectedStore()],
) -> str:
    """Retrieve user preference from persistent storage."""
    result = store.get(("preferences",), key)
    return result.value if result else "Not found"
```
Usage with `ToolNode` and graph compilation:

```python
from langgraph.graph import StateGraph
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
tool_node = ToolNode([save_preference, get_preference])

graph = StateGraph(State)  # `State` defined elsewhere
graph.add_node("tools", tool_node)
compiled_graph = graph.compile(store=store)  # Store is injected automatically
```
Cross-session persistence:
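The snippet under this heading did not survive extraction; a hedged sketch of the idea, reusing the `save_preference` / `get_preference` tools above (the `MessagesState` wiring is an assumption):

```python
from langchain_core.messages import AIMessage

from langgraph.graph import START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
builder = StateGraph(MessagesState)
builder.add_node("tools", ToolNode([save_preference, get_preference]))
builder.add_edge(START, "tools")
graph = builder.compile(store=store)

save_call = {
    "name": "save_preference",
    "args": {"key": "theme", "value": "dark"},
    "id": "1",
    "type": "tool_call",
}
get_call = {"name": "get_preference", "args": {"key": "theme"}, "id": "2", "type": "tool_call"}

# One invocation writes the preference...
graph.invoke({"messages": [AIMessage("", tool_calls=[save_call])]})
# ...and a later, separate invocation reads it back from the shared store.
result = graph.invoke({"messages": [AIMessage("", tool_calls=[get_call])]})
print(result["messages"][-1].content)  # -> "dark"
```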
**Note**

- `InjectedStore` arguments are automatically excluded from tool schemas presented to language models
- The store instance is automatically injected by `ToolNode` during execution
- Tools can access namespaced storage using the store's get/put methods
- Store injection requires the graph to be compiled with a store instance
- Multiple tools can share the same store instance for data consistency
### tools_condition

```python
tools_condition(
    state: list[AnyMessage] | dict[str, Any] | BaseModel, messages_key: str = "messages"
) -> Literal["tools", "__end__"]
```
Conditional routing function for tool-calling workflows.
This utility function implements the standard conditional logic for ReAct-style
agents: if the last AIMessage contains tool calls, route to the tool execution
node; otherwise, end the workflow. This pattern is fundamental to most tool-calling
agent architectures.
The function handles multiple state formats commonly used in LangGraph applications, making it flexible for different graph designs while maintaining consistent behavior.
| PARAMETER | DESCRIPTION |
|---|---|
| `state` | The current graph state to examine for tool calls. Supported formats: a list of messages, a dictionary containing a messages key, or a `BaseModel` with a messages attribute. TYPE: `list[AnyMessage] \| dict[str, Any] \| BaseModel` |
| `messages_key` | The key or attribute name containing the message list in the state. This allows customization for graphs using different state schemas. TYPE: `str` |
| RETURNS | DESCRIPTION |
|---|---|
| `Literal['tools', '__end__']` | Either `"tools"` if the last message contains tool calls, or `"__end__"` otherwise. |
| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If no messages can be found in the provided state format. |
**Example**

Basic usage in a ReAct agent:

```python
from langchain.tools import ToolNode
from langchain.tools.tool_node import tools_condition
from typing_extensions import TypedDict

from langgraph.graph import StateGraph


class State(TypedDict):
    messages: list


graph = StateGraph(State)
graph.add_node("llm", call_model)  # `call_model` defined elsewhere
graph.add_node("tools", ToolNode([my_tool]))  # `my_tool` defined elsewhere
graph.add_conditional_edges(
    "llm",
    tools_condition,  # Routes to "tools" or "__end__"
    {"tools": "tools", "__end__": "__end__"},
)
```
Custom messages key:
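The snippet for this case was also lost; a hedged sketch, assuming a custom state schema whose message list lives under `chat_messages` (the `echo` tool and stub model node are illustrative):

```python
from functools import partial
from typing import Annotated

from langchain_core.tools import tool
from typing_extensions import TypedDict

from langgraph.graph import START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class CustomState(TypedDict):
    chat_messages: Annotated[list, add_messages]  # non-default key name


@tool
def echo(text: str) -> str:
    """Echo the input (illustrative tool)."""
    return text


def call_model(state: CustomState):
    # Stub node; a real implementation would invoke a tool-calling model.
    return {"chat_messages": []}


builder = StateGraph(CustomState)
builder.add_node("llm", call_model)
# ToolNode must be pointed at the same custom key.
builder.add_node("tools", ToolNode([echo], messages_key="chat_messages"))
builder.add_edge(START, "llm")
builder.add_conditional_edges(
    "llm", partial(tools_condition, messages_key="chat_messages")
)
graph = builder.compile()
```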
**Note**

This function is designed to work seamlessly with `ToolNode` and standard LangGraph patterns. It expects the last message to be an `AIMessage` when tool calls are present, which is the standard output format for tool-calling language models.
## langgraph.prebuilt.tool_validator.ValidationNode (deprecated)

Bases: `RunnableCallable`

> **Deprecated:** `ValidationNode` is deprecated. Please use `create_agent` from `langchain.agents` with custom tool error handling.

A node that validates all tool requests from the last `AIMessage`.

It can be used in a `StateGraph` with a `'messages'` key.

**Note**

This node does not actually run the tools; it only validates the tool calls. This is useful for extraction and other use cases where you need to generate structured output that conforms to a complex schema without losing the original messages and tool IDs (for use in multi-turn conversations).
| RETURNS | DESCRIPTION |
|---|---|
| `Union[Dict[str, List[ToolMessage]], Sequence[ToolMessage]]` | A list of `ToolMessage` objects containing either the validated content or error messages. |
**Example**

```python
from typing import Annotated, Literal

from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, field_validator

from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ValidationNode


class SelectNumber(BaseModel):
    a: int

    @field_validator("a")
    def a_must_be_meaningful(cls, v):
        if v != 37:
            raise ValueError("Only 37 is allowed")
        return v


builder = StateGraph(Annotated[list, add_messages])
llm = ChatAnthropic(model="claude-3-5-haiku-latest").bind_tools([SelectNumber])
builder.add_node("model", llm)
builder.add_node("validation", ValidationNode([SelectNumber]))
builder.add_edge(START, "model")


def should_validate(state: list) -> Literal["validation", "__end__"]:
    if state[-1].tool_calls:
        return "validation"
    return END


builder.add_conditional_edges("model", should_validate)


def should_reprompt(state: list) -> Literal["model", "__end__"]:
    for msg in state[::-1]:
        # None of the tool calls were errors
        if msg.type == "ai":
            return END
        if msg.additional_kwargs.get("is_error"):
            return "model"
    return END


builder.add_conditional_edges("validation", should_reprompt)

graph = builder.compile()
res = graph.invoke(("user", "Select a number, any number"))
# Show the retry logic
for msg in res:
    msg.pretty_print()
```
## langgraph.prebuilt.interrupt

### HumanInterruptConfig (deprecated)

Bases: `TypedDict`

> **Deprecated:** `HumanInterruptConfig` has been moved to `langchain.agents.interrupt`. Please update your import to `from langchain.agents.interrupt import HumanInterruptConfig`.

Configuration that defines what actions are allowed for a human interrupt.

This controls the available interaction options when the graph is paused for human input.

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `allow_ignore` | Whether the human can choose to ignore/skip the current step |
| `allow_respond` | Whether the human can provide a text response/feedback |
| `allow_edit` | Whether the human can edit the provided content/state |
| `allow_accept` | Whether the human can accept/approve the current state |
### ActionRequest (deprecated)

Bases: `TypedDict`

> **Deprecated:** `ActionRequest` has been moved to `langchain.agents.interrupt`. Please update your import to `from langchain.agents.interrupt import ActionRequest`.

Represents a request for human action within the graph execution.

Contains the action type and any associated arguments needed for the action.

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `action` | The type or name of action being requested (e.g., `"run_command"`) |
| `args` | Key-value pairs of arguments needed for the action |
### HumanInterrupt (deprecated)

Bases: `TypedDict`

> **Deprecated:** `HumanInterrupt` has been moved to `langchain.agents.interrupt`. Please update your import to `from langchain.agents.interrupt import HumanInterrupt`.

Represents an interrupt triggered by the graph that requires human intervention.

This is passed to the `interrupt` function when execution is paused for human input.

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `action_request` | The specific action being requested from the human |
| `config` | Configuration defining what actions are allowed |
| `description` | Optional detailed description of what input is needed |
**Example**

```python
# Extract a tool call from the state and create an interrupt request
request = HumanInterrupt(
    action_request=ActionRequest(
        action="run_command",  # The action being requested
        args={"command": "ls", "args": ["-l"]},  # Arguments for the action
    ),
    config=HumanInterruptConfig(
        allow_ignore=True,  # Allow skipping this step
        allow_respond=True,  # Allow text feedback
        allow_edit=False,  # Don't allow editing
        allow_accept=True,  # Allow direct acceptance
    ),
    description="Please review the command before execution",
)
# Send the interrupt request and get the response
response = interrupt([request])[0]
```
### HumanResponse

Bases: `TypedDict`

The response provided by a human to an interrupt, which is returned when graph execution resumes.

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `type` | The type of response: `"accept"`, `"ignore"`, `"response"`, or `"edit"` (see the sketch after this table). TYPE: `Literal['accept', 'ignore', 'response', 'edit']` |
| `args` | The response payload: `None` for accept/ignore, a `str` for a text response, or an `ActionRequest` for an edit. TYPE: `None \| str \| ActionRequest` |
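A hedged sketch of consuming a `HumanResponse` inside a graph node, branching on the `type` values listed above; the state keys returned are illustrative:

```python
from langgraph.prebuilt.interrupt import (
    ActionRequest,
    HumanInterrupt,
    HumanInterruptConfig,
    HumanResponse,
)
from langgraph.types import interrupt


def review_node(state):
    request = HumanInterrupt(
        action_request=ActionRequest(action="run_command", args={"command": "ls"}),
        config=HumanInterruptConfig(
            allow_ignore=True, allow_respond=True, allow_edit=True, allow_accept=True
        ),
        description="Please review the command before execution",
    )
    response: HumanResponse = interrupt([request])[0]

    if response["type"] == "accept":
        return {"approved": True}  # proceed with the action unchanged
    if response["type"] == "edit":
        edited = response["args"]  # an edited ActionRequest
        return {"command": edited["args"]["command"]}
    if response["type"] == "response":
        return {"feedback": response["args"]}  # free-form text feedback
    return {}  # "ignore": skip this step
```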