Creates an agent graph that calls tools in a loop until a stopping condition is met.

This function is deprecated in favor of `create_agent` from the `langchain` package, which provides an equivalent agent factory with a flexible middleware system. For migration guidance, see Migrating from LangGraph v0.
```python
create_react_agent(
    model: str
        | LanguageModelLike
        | Callable[[StateSchema, Runtime[ContextT]], BaseChatModel]
        | Callable[[StateSchema, Runtime[ContextT]], Awaitable[BaseChatModel]]
        | Callable[[StateSchema, Runtime[ContextT]], Runnable[LanguageModelInput, BaseMessage]]
        | Callable[[StateSchema, Runtime[ContextT]], Awaitable[Runnable[LanguageModelInput, BaseMessage]]],
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | ToolNode,
    *,
    prompt: Prompt | None = None,
    response_format: StructuredResponseSchema | tuple[str, StructuredResponseSchema] | None = None,
    pre_model_hook: RunnableLike | None = None,
    post_model_hook: RunnableLike | None = None,
    state_schema: StateSchemaType | None = None,
    context_schema: type[Any] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    version: Literal['v1', 'v2'] = 'v2',
    name: str | None = None,
    **deprecated_kwargs: Any,
) -> CompiledStateGraph
```

Deprecated: the `config_schema` parameter is deprecated in v0.6.0 and support will be removed in v2.0.0. Use `context_schema` instead to specify the schema for run-scoped context.
The "agent" node calls the language model with the messages list (after applying the prompt). If the resulting `AIMessage` contains `tool_calls`, the graph then calls the "tools" node. The "tools" node executes the tools (one tool per `tool_call`) and appends the responses to the messages list as `ToolMessage` objects. The agent node then calls the language model again. The process repeats until the response contains no more `tool_calls`. The agent then returns the full list of messages as a dictionary under the key `'messages'`.
```mermaid
sequenceDiagram
    participant U as User
    participant A as LLM
    participant T as Tools
    U->>A: Initial input
    Note over A: Prompt + LLM
    loop while tool_calls present
        A->>T: Execute tools
        T-->>A: ToolMessage for each tool_call
    end
    A->>U: Return final state
```
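The control flow in the diagram can be sketched in plain Python. This is a simplified, self-contained illustration only; the stub model and tool registry below are illustrative stand-ins, not LangGraph APIs:

```python
# Simplified sketch of the agent loop: call the model, execute any tool
# calls (one execution per tool_call), append tool results to the message
# list, and repeat until the model response contains no tool calls.

def stub_model(messages):
    # Stand-in for the LLM: request a tool call on the first turn,
    # then produce a final answer once a tool result is present.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai", "content": "", "tool_calls": [
            {"id": "1", "name": "check_weather", "args": {"location": "sf"}}
        ]}
    return {"role": "ai", "content": "It's always sunny in sf", "tool_calls": []}

TOOLS = {"check_weather": lambda location: f"It's always sunny in {location}"}

def run_agent(messages):
    while True:
        ai = stub_model(messages)
        messages.append(ai)
        if not ai["tool_calls"]:
            # Stopping condition: no tool calls left in the response.
            return {"messages": messages}
        for call in ai["tool_calls"]:  # one tool execution per tool_call
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": result})

state = run_agent([{"role": "user", "content": "what is the weather in sf"}])
print(state["messages"][-1]["content"])
```

The returned dictionary mirrors the real agent's output shape: the full conversation, including tool messages, under the `'messages'` key.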
Example:

```python
from langgraph.prebuilt import create_react_agent

def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"

graph = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```

| Name | Type | Description |
|---|---|---|
| model* | str \| LanguageModelLike \| Callable[[StateSchema, Runtime[ContextT]], BaseChatModel] \| Callable[[StateSchema, Runtime[ContextT]], Awaitable[BaseChatModel]] \| Callable[[StateSchema, Runtime[ContextT]], Runnable[LanguageModelInput, BaseMessage]] \| Callable[[StateSchema, Runtime[ContextT]], Awaitable[Runnable[LanguageModelInput, BaseMessage]]] | The language model for the agent. Supports static model instances and dynamic model selection: a callable receives the graph state and runtime, enabling context-dependent model selection, and must return a chat model (or an awaitable of one). Ensure dynamically returned models have the appropriate tools bound via `.bind_tools()` and support the required functionality. |
| tools* | Sequence[BaseTool \| Callable \| dict[str, Any]] \| ToolNode | A list of tools or a `ToolNode` instance. If an empty list is provided, the agent will consist of a single LLM node without tool calling. |
| prompt | Prompt \| None | Default: None. An optional prompt for the LLM. Can take a few different forms: a `str` (converted to a `SystemMessage` and added to the beginning of the messages list), a `SystemMessage` (added to the beginning of the messages list), a `Callable` (takes the full graph state; the output is passed to the language model), or a `Runnable` (takes the full graph state; the output is passed to the language model). |
| response_format | StructuredResponseSchema \| tuple[str, StructuredResponseSchema] \| None | Default: None. An optional schema for the final agent output. If provided, output will be formatted to match the given schema and returned in the `'structured_response'` state key; if not provided, `'structured_response'` will not be present in the output state. Can be passed in as an OpenAI function/tool schema, a JSON Schema, a TypedDict class, a Pydantic class, or a tuple `(prompt, schema)`, where the prompt is used together with the schema. Important: the schema requires the model to support `.with_structured_output`. Note: the graph will make a separate call to the LLM to generate the structured response after the agent loop is finished. This is not the only strategy to get structured responses; see more options in this guide. |
| pre_model_hook | RunnableLike \| None | Default: None. An optional node to add before the `agent` node (i.e. the node that calls the LLM). Useful for managing long message histories (e.g. message trimming or summarization). The hook must return a state update containing `messages` and/or `llm_input_messages`. Important: at least one of `messages` or `llm_input_messages` must be provided. Warning: if you are returning `messages`, you should overwrite the `messages` key by returning `[RemoveMessage(id=REMOVE_ALL_MESSAGES), *updated_messages]`. |
| post_model_hook | RunnableLike \| None | Default: None. An optional node to add after the `agent` node (i.e. the node that calls the LLM). Useful for implementing human-in-the-loop, guardrails, validation, or other post-processing. Note: only available with `version='v2'`. |
| state_schema | StateSchemaType \| None | Default: None. An optional state schema that defines graph state. Must have `messages` and `remaining_steps` keys. Note: defaults to `AgentState`, which defines those two keys. |
| context_schema | type[Any] \| None | Default: None. An optional schema for runtime context. |
| checkpointer | Checkpointer \| None | Default: None. An optional checkpoint saver object. This is used for persisting the state of the graph (e.g. as chat memory) for a single thread (e.g. a single conversation). |
| store | BaseStore \| None | Default: None. An optional store object. This is used for persisting data across multiple threads (e.g. multiple conversations / users). |
| interrupt_before | list[str] \| None | Default: None. An optional list of node names to interrupt before. Should be one of the following: `'agent'`, `'tools'`. This is useful if you want to add a user confirmation or other interrupt before taking an action. |
| interrupt_after | list[str] \| None | Default: None. An optional list of node names to interrupt after. Should be one of the following: `'agent'`, `'tools'`. This is useful if you want to return directly or run additional processing on an output. |
| debug | bool | Default: False. A flag indicating whether to enable debug mode. |
| version | Literal['v1', 'v2'] | Default: 'v2'. Determines the version of the graph to create. Can be one of: `'v1'` (the tools node processes a single message; all tool calls in the message are executed in parallel within the tools node) or `'v2'` (the tools node processes a single tool call; tool calls are distributed across multiple instances of the tools node using the Send API). |
| name | str \| None | Default: None. An optional name for the `CompiledStateGraph`. This name is automatically used when adding the agent graph to another graph as a subgraph; useful for building multi-agent systems. |
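To illustrate the kind of state update a `pre_model_hook` is expected to produce, here is a minimal message-trimming function in plain Python. Plain dicts stand in for LangChain message objects, and the hook returns `llm_input_messages` (as documented above) so the full history stored under `messages` is left untouched:

```python
# Sketch of a pre-model hook that caps the prompt length without mutating
# graph state: it returns the most recent messages under "llm_input_messages",
# keeping the first (system-style) message, while "messages" stays intact.

def trim_messages_hook(state, max_messages=3):
    messages = state["messages"]
    if len(messages) <= max_messages:
        trimmed = messages
    else:
        # Keep the first message plus the most recent (max_messages - 1).
        trimmed = [messages[0]] + messages[-(max_messages - 1):]
    return {"llm_input_messages": trimmed}

state = {"messages": [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "q1"},
    {"role": "ai", "content": "a1"},
    {"role": "user", "content": "q2"},
]}
update = trim_messages_hook(state)
print([m["content"] for m in update["llm_input_messages"]])
```

Returning `llm_input_messages` rather than `messages` avoids the `RemoveMessage(id=REMOVE_ALL_MESSAGES)` overwrite requirement described in the parameter table, since only the LLM's input is trimmed, not the persisted history.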
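The difference between `version='v1'` and `version='v2'` can be pictured with a small dispatch sketch in plain Python. The `Send`-style fan-out is simulated with a list of per-call tasks; this is an analogy for the scheduling behavior, not the LangGraph internals:

```python
# v1: one tools-node run handles the whole AIMessage, executing all of its
#     tool calls in parallel within that single node run.
# v2: each tool call becomes its own task (akin to the Send API), so the
#     tools node runs once per tool call.

tool_calls = [
    {"name": "check_weather", "args": {"location": "sf"}},
    {"name": "check_weather", "args": {"location": "nyc"}},
]

def v1_dispatch(calls):
    # A single node run that receives every tool call in the message.
    return [("tools", calls)]

def v2_dispatch(calls):
    # One Send-style task per tool call.
    return [("tools", [call]) for call in calls]

print(len(v1_dispatch(tool_calls)))  # one node run for the whole message
print(len(v2_dispatch(tool_calls)))  # one node run per tool call
```

In practice `'v2'` is the default and is required for `post_model_hook`, so `'v1'` is mainly relevant for graphs built before the Send-based dispatch existed.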