Agents

langgraph.prebuilt exposes a higher-level API for creating and executing agents and tools.

Modules:

Name Description
middleware

Middleware plugins for agents.

middleware_agent

Middleware agent implementation.

react_agent

ReAct agent implementation.

structured_output

Types for setting agent response formats.

Classes:

Name Description
AgentState

The state of the agent.

ToolNode

A node for executing tools in LangGraph workflows.

Functions:

Name Description
create_agent

Creates an agent graph that calls tools in a loop until a stopping condition is met.

AgentState

Bases: TypedDict

The state of the agent.
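
The default schema can be extended when the graph needs to carry extra keys; a minimal sketch (the import path follows this page's module layout, and the user_name key is illustrative):

from langgraph.prebuilt.chat_agent_executor import AgentState

class CustomState(AgentState):
    # Extra key carried through the graph alongside `messages`
    user_name: str

Pass the subclass via create_agent's state_schema parameter (documented below) to use it.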

ToolNode

Bases: RunnableCallable

A node for executing tools in LangGraph workflows.

Handles tool execution patterns including function calls, state injection, persistent storage, and control flow, and manages parallel execution and error handling.

Input Formats
  1. Graph state: a dict with a messages key containing a list of messages
     • The common representation for agentic workflows
     • Supports a custom message key via the messages_key parameter
  2. Message list: [AIMessage(..., tool_calls=[...])]
     • A list of messages with tool calls in the last AIMessage
  3. Direct tool calls: [{"name": "tool", "args": {...}, "id": "1", "type": "tool_call"}]
     • Bypasses message parsing for direct tool execution
     • Useful for programmatic tool invocation and testing
Output Formats

Output format depends on input type and tool behavior:

Regular tools:

  • Dict input → {"messages": [ToolMessage(...)]}
  • List input → [ToolMessage(...)]

Command tools:

  • Return [Command(...)] or a mixed list alongside regular tool outputs
  • Commands can update state, trigger navigation, or send messages
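
For example, invoking a node with two of the input styles might look like this (a minimal sketch; the add tool is illustrative and outputs are abbreviated in comments):

from langchain.tools import ToolNode
from langchain_core.messages import AIMessage
from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

node = ToolNode([add])
call = {"name": "add", "args": {"a": 1, "b": 2}, "id": "1", "type": "tool_call"}

# Graph state input -> dict output keyed by `messages`
node.invoke({"messages": [AIMessage(content="", tool_calls=[call])]})
# {"messages": [ToolMessage(content="3", name="add", tool_call_id="1")]}

# Direct tool-call input -> list of ToolMessages
node.invoke([call])
# [ToolMessage(content="3", name="add", tool_call_id="1")]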

Parameters:

Name Type Description Default
tools Sequence[BaseTool | Callable]

A sequence of tools that can be invoked by this node. Supports:

  • BaseTool instances: Tools with schemas and metadata
  • Plain functions: Automatically converted to tools with inferred schemas

required
name str

The name identifier for this node in the graph. Used for debugging and visualization. Defaults to "tools".

'tools'
tags list[str] | None

Optional metadata tags to associate with the node for filtering and organization. Defaults to None.

None
handle_tool_errors bool | str | Callable[..., str] | type[Exception] | tuple[type[Exception], ...]

Configuration for error handling during tool execution. Supports multiple strategies:

  • True: Catch all errors and return a ToolMessage with the default error template containing the exception details.
  • str: Catch all errors and return a ToolMessage with this custom error message string.
  • type[Exception]: Only catch exceptions with the specified type and return the default error message for it.
  • tuple[type[Exception], ...]: Only catch exceptions with the specified types and return default error messages for them.
  • Callable[..., str]: Catch exceptions matching the callable's signature and return the string result of calling it with the exception.
  • False: Disable error handling entirely, allowing exceptions to propagate.

Defaults to a callable that:

  • catches tool invocation errors (caused by invalid arguments the model provided) and returns a descriptive error message
  • ignores tool execution errors (these are re-raised)
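
For instance, the string strategy returns one fixed message for any error (a sketch; my_tool stands in for any tool):

tool_node = ToolNode([my_tool], handle_tool_errors="Tool failed; check your arguments.")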

_default_handle_tool_errors
messages_key str

The key in the state dictionary that contains the message list. This same key will be used for the output ToolMessages. Defaults to "messages". Allows custom state schemas with different message field names.

'messages'

Examples:

Basic usage:

from langchain.tools import ToolNode
from langchain_core.tools import tool

@tool
def calculator(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

tool_node = ToolNode([calculator])

State injection:

from typing_extensions import Annotated
from langchain_core.tools import tool
from langchain.tools import InjectedState

@tool
def context_tool(query: str, state: Annotated[dict, InjectedState]) -> str:
    """Some tool that uses state."""
    return f"Query: {query}, Messages: {len(state['messages'])}"

tool_node = ToolNode([context_tool])

Error handling:

def handle_errors(e: ValueError) -> str:
    return "Invalid input provided"


# `my_tool` is any tool that may raise ValueError
tool_node = ToolNode([my_tool], handle_tool_errors=handle_errors)

Methods:

Name Description
__init__

Initialize the ToolNode with the provided tools and configuration.

inject_tool_args

Inject graph state and store into tool call arguments.

Attributes:

Name Type Description
tools_by_name dict[str, BaseTool]

Mapping from tool name to BaseTool instance.

tools_by_name property

tools_by_name: dict[str, BaseTool]

Mapping from tool name to BaseTool instance.

__init__

__init__(
    tools: Sequence[BaseTool | Callable],
    *,
    name: str = "tools",
    tags: list[str] | None = None,
    handle_tool_errors: bool
    | str
    | Callable[..., str]
    | type[Exception]
    | tuple[
        type[Exception], ...
    ] = _default_handle_tool_errors,
    messages_key: str = "messages",
) -> None

Initialize the ToolNode with the provided tools and configuration.

Parameters:

Name Type Description Default
tools Sequence[BaseTool | Callable]

Sequence of tools to make available for execution.

required
name str

Node name for graph identification.

'tools'
tags list[str] | None

Optional metadata tags.

None
handle_tool_errors bool | str | Callable[..., str] | type[Exception] | tuple[type[Exception], ...]

Error handling configuration.

_default_handle_tool_errors
messages_key str

State key containing messages.

'messages'

inject_tool_args

inject_tool_args(
    tool_call: ToolCall,
    input: list[AnyMessage] | dict[str, Any] | BaseModel,
    store: BaseStore | None,
) -> ToolCall

Inject graph state and store into tool call arguments.

This method enables tools to access graph context that should not be controlled by the model. Tools can declare dependencies on graph state or persistent storage using InjectedState and InjectedStore annotations. This method automatically identifies these dependencies and injects the appropriate values.

The injection process preserves the original tool call structure while adding the necessary context arguments. This allows tools to be both model-callable and context-aware without exposing internal state management to the model.

Parameters:

Name Type Description Default
tool_call ToolCall

The tool call dictionary to augment with injected arguments. Must contain 'name', 'args', 'id', and 'type' fields.

required
input list[AnyMessage] | dict[str, Any] | BaseModel

The current graph state to inject into tools requiring state access. Can be a message list, state dictionary, or BaseModel instance.

required
store BaseStore | None

The persistent store instance to inject into tools requiring storage. Will be None if no store is configured for the graph.

required

Returns:

Type Description
ToolCall

A new ToolCall dictionary with the same structure as the input, but with additional arguments injected based on the tool's annotation requirements.

Raises:

Type Description
ValueError

If a tool requires store injection but no store is provided, or if state injection requirements cannot be satisfied.

Note

This method is automatically called during tool execution but can also be used manually when working with the Send API or custom routing logic. The injection is performed on a copy of the tool call to avoid mutating the original.
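
For example, manual injection when fanning tool calls out with the Send API might look like this (a minimal sketch; the "tools" node name and routing function are assumptions, and tool_node is a configured ToolNode):

from langgraph.types import Send

def route_tool_calls(state):
    # One Send per tool call, with graph state injected before dispatch
    last_message = state["messages"][-1]
    return [
        Send("tools", tool_node.inject_tool_args(call, state, store=None))
        for call in last_message.tool_calls
    ]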

create_agent

create_agent(
    model: str
    | BaseChatModel
    | SyncOrAsync[
        [StateT, Runtime[ContextT]], BaseChatModel
    ],
    tools: Sequence[BaseTool | Callable | dict[str, Any]]
    | ToolNode,
    *,
    middleware: Sequence[AgentMiddleware] = (),
    prompt: Prompt | None = None,
    response_format: ToolStrategy[StructuredResponseT]
    | ProviderStrategy[StructuredResponseT]
    | type[StructuredResponseT]
    | None = None,
    pre_model_hook: RunnableLike | None = None,
    post_model_hook: RunnableLike | None = None,
    state_schema: type[StateT] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    version: Literal["v1", "v2"] = "v2",
    name: str | None = None,
    **deprecated_kwargs: Any,
) -> CompiledStateGraph[StateT, ContextT]

Creates an agent graph that calls tools in a loop until a stopping condition is met.

For more details on using create_agent, see the Agents documentation.

Parameters:

Name Type Description Default
model str | BaseChatModel | SyncOrAsync[[StateT, Runtime[ContextT]], BaseChatModel]

The language model for the agent. Supports static and dynamic model selection.

  • Static model: A chat model instance (e.g., ChatOpenAI()) or string identifier (e.g., "openai:gpt-4")
  • Dynamic model: A callable with signature (state, runtime) -> BaseChatModel that returns different models based on runtime context. If the returned model has tools bound via .bind_tools() or other configuration, the return type should be Runnable[LanguageModelInput, BaseMessage]. Coroutines are also supported, allowing for asynchronous model selection.

Dynamic functions receive graph state and runtime, enabling context-dependent model selection. Must return a BaseChatModel instance. For tool calling, bind tools using .bind_tools(). Bound tools must be a subset of the tools parameter.

Dynamic model example:

from dataclasses import dataclass

from langchain_openai import ChatOpenAI
from langgraph.runtime import Runtime


@dataclass
class ModelContext:
    model_name: str = "gpt-3.5-turbo"


# Instantiate models globally
gpt4_model = ChatOpenAI(model="gpt-4")
gpt35_model = ChatOpenAI(model="gpt-3.5-turbo")


def select_model(state: AgentState, runtime: Runtime[ModelContext]) -> ChatOpenAI:
    model_name = runtime.context.model_name
    model = gpt4_model if model_name == "gpt-4" else gpt35_model
    # `tools` is the sequence passed to create_agent's tools parameter
    return model.bind_tools(tools)

.. note:: Ensure returned models have appropriate tools bound via .bind_tools() and support required functionality. Bound tools must be a subset of those specified in the tools parameter.

required
tools Sequence[BaseTool | Callable | dict[str, Any]] | ToolNode

A list of tools or a ToolNode instance. If an empty list is provided, the agent will consist of a single LLM node without tool calling.

required
prompt Prompt | None

An optional prompt for the LLM. Can take a few different forms:

  • str: This is converted to a SystemMessage and added to the beginning of the list of messages in state["messages"].
  • SystemMessage: this is added to the beginning of the list of messages in state["messages"].
  • Callable: A function that takes in the full graph state; its output is then passed to the language model (see the sketch below).
  • Runnable: A runnable that takes in the full graph state; its output is then passed to the language model.
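
A callable prompt, for instance, can assemble the model input from state (a minimal sketch; the system text is illustrative, and model and tools stand for the arguments documented on this page):

def prompt(state):
    # Prepend a system message to the conversation held in state
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        *state["messages"],
    ]

agent = create_agent(model, tools, prompt=prompt)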
None
response_format ToolStrategy[StructuredResponseT] | ProviderStrategy[StructuredResponseT] | type[StructuredResponseT] | None

An optional configuration for structured responses.

If provided, the agent will handle structured output via tool calls during the normal conversation flow. When the model calls a structured output tool, the response will be captured and returned in the 'structured_response' state key. If not provided, structured_response will not be present in the output state.

The ToolStrategy should contain:

- schemas: A sequence of ResponseSchema objects that define
  the structured output format
- tool_choice: Either "required" or "auto" to control when structured
  output is used

Each ResponseSchema contains:

- schema: A Pydantic model that defines the structure
- name: Optional custom name for the tool (defaults to model name)
- description: Optional custom description (defaults to model docstring)
- strict: Whether to enforce strict validation

.. important:: response_format requires the model to support tool calling

.. note:: Structured responses are handled directly in the model call node via tool calls, eliminating the need for separate structured response nodes.
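
A minimal sketch passing a Pydantic model directly as the response format (the WeatherReport schema and model string are illustrative):

from pydantic import BaseModel

class WeatherReport(BaseModel):
    city: str
    forecast: str

agent = create_agent("openai:gpt-4o", tools=[], response_format=WeatherReport)
result = agent.invoke({"messages": [{"role": "user", "content": "weather in sf?"}]})
result["structured_response"]  # -> WeatherReport instance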

None
pre_model_hook RunnableLike | None

An optional node to add before the agent node (i.e., the node that calls the LLM). Useful for managing long message histories (e.g., message trimming, summarization, etc.). Pre-model hook must be a callable or a runnable that takes in current graph state and returns a state update in the form of

# At least one of `messages` or `llm_input_messages` MUST be provided
{
    # If provided, will UPDATE the `messages` in the state
    "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES), ...],
    # If provided, will be used as the input to the LLM,
    # and will NOT UPDATE `messages` in the state
    "llm_input_messages": [...],
    # Any other state keys that need to be propagated
    ...
}

.. important:: At least one of messages or llm_input_messages MUST be provided and will be used as an input to the agent node. The rest of the keys will be added to the graph state.

.. warning:: If you are returning messages in the pre-model hook, you should OVERWRITE the messages key by doing the following:

```python
{
    "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES), *new_messages]
    ...
}
```
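
For example, a hook that trims what the model sees while leaving stored history intact might look like this (a minimal sketch; trim_messages is langchain_core's message-trimming utility, the limits are illustrative, and model and tools stand for the arguments documented on this page):

from langchain_core.messages import trim_messages

def trim_hook(state):
    # Trim the LLM input only; `llm_input_messages` does not update state["messages"]
    trimmed = trim_messages(
        state["messages"],
        strategy="last",
        token_counter=len,  # count each message as one token (illustrative)
        max_tokens=10,      # keep at most the last 10 messages
        start_on="human",
    )
    return {"llm_input_messages": trimmed}

agent = create_agent(model, tools, pre_model_hook=trim_hook)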
None
post_model_hook RunnableLike | None

An optional node to add after the agent node (i.e., the node that calls the LLM). Useful for implementing human-in-the-loop, guardrails, validation, or other post-processing. Post-model hook must be a callable or a runnable that takes in current graph state and returns a state update.

.. note:: Only available with version="v2".
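
A sketch of a simple guardrail hook (the redaction policy is hypothetical; returning a message with the same id replaces that message in state):

from langchain_core.messages import AIMessage

def guardrail_hook(state):
    last = state["messages"][-1]
    if isinstance(last, AIMessage) and "password" in (last.content or ""):
        # Same id -> the flagged response is overwritten in state
        return {"messages": [AIMessage(content="[redacted]", id=last.id)]}
    return {}

agent = create_agent(model, tools, post_model_hook=guardrail_hook, version="v2")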

None
state_schema type[StateT] | None

An optional state schema that defines the graph state. Must have messages and remaining_steps keys. Defaults to AgentState, which defines those two keys.

None
context_schema type[ContextT] | None

An optional schema for runtime context.
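
With a context schema, run-scoped values are supplied at invocation time (a sketch reusing the hypothetical ModelContext dataclass and select_model function from the dynamic-model example above):

agent = create_agent(select_model, tools=tools, context_schema=ModelContext)
agent.invoke(
    {"messages": [{"role": "user", "content": "hi"}]},
    context=ModelContext(model_name="gpt-4"),
)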

None
checkpointer Checkpointer | None

An optional checkpoint saver object. This is used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation).
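
A minimal persistence sketch (MemorySaver is langgraph's in-memory checkpointer; the model string and thread_id value are illustrative):

from langgraph.checkpoint.memory import MemorySaver

agent = create_agent("openai:gpt-4o", tools=[], checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "conversation-1"}}
agent.invoke({"messages": [{"role": "user", "content": "hi, I'm Ada"}]}, config)
# A later call with the same thread_id resumes from the saved state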

None
store BaseStore | None

An optional store object. This is used for persisting data across multiple threads (e.g., multiple conversations / users).
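
A sketch with langgraph's in-memory store (the namespace and values are illustrative; tools can read the store via an InjectedStore annotation):

from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
store.put(("users",), "user-1", {"name": "Ada"})
agent = create_agent("openai:gpt-4o", tools=[], store=store)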

None
interrupt_before list[str] | None

An optional list of node names to interrupt before. Should be one of the following: "agent", "tools". This is useful if you want to add a user confirmation or other interrupt before taking an action.

None
interrupt_after list[str] | None

An optional list of node names to interrupt after. Should be one of the following: "agent", "tools". This is useful if you want to return directly or run additional processing on an output.

None
debug bool

A flag indicating whether to enable debug mode.

False
version Literal['v1', 'v2']

Determines the version of the graph to create. Can be one of:

  • "v1": The tool node processes a single message. All tool calls in the message are executed in parallel within the tool node.
  • "v2": The tool node processes a tool call. Tool calls are distributed across multiple instances of the tool node using the Send API.
'v2'
name str | None

An optional name for the CompiledStateGraph. This name is used automatically when adding the ReAct agent graph to another graph as a subgraph node, which is particularly useful for building multi-agent systems.

None

.. warning:: The config_schema parameter is deprecated in v0.6.0 and support will be removed in v2.0.0. Please use context_schema instead to specify the schema for run-scoped context.

Returns:

Type Description
CompiledStateGraph[StateT, ContextT]

A compiled LangChain runnable that can be used for chat interactions.

The "agent" node calls the language model with the messages list (after applying the prompt). If the resulting AIMessage contains tool_calls, the graph will then call the "tools". The "tools" node executes the tools (1 tool per tool_call) and adds the responses to the messages list as ToolMessage objects. The agent node then calls the language model again. The process repeats until no more tool_calls are present in the response. The agent then returns the full list of messages as a dictionary containing the key "messages".

```mermaid
sequenceDiagram
    participant U as User
    participant A as LLM
    participant T as Tools
    U->>A: Initial input
    Note over A: Prompt + LLM
    loop while tool_calls present
        A->>T: Execute tools
        T-->>A: One ToolMessage per tool call
    end
    A->>U: Return final state
```
Example
from langchain.agents import create_agent

def check_weather(location: str) -> str:
    '''Return the weather forecast for the specified location.'''
    return f"It's always sunny in {location}"

graph = create_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)