LangChain Reference

JavaScript · @langchain/langgraph · prebuilt

CreateReactAgentParams

Type · Since v0.3 · Deprecated

Properties

property
checkpointer: BaseCheckpointSaver | boolean

Optional checkpointer for persisting graph state. When provided, saves a checkpoint of the graph state at every superstep. When false or undefined, checkpointing is disabled, and the graph will not be able to save or restore state.
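The checkpoint-at-every-superstep behavior can be sketched with a self-contained toy saver. This is not BaseCheckpointSaver's real interface; the class and method names here are purely illustrative of the semantics described above:

```typescript
// Hypothetical sketch, NOT the library's implementation: a snapshot of the
// graph state is saved per thread after every superstep, and the latest
// snapshot can be restored on a later invocation.
type State = { messages: string[] };

class ToyCheckpointSaver {
  private checkpoints = new Map<string, State[]>();

  // Append a snapshot of the state for a given thread.
  save(threadId: string, state: State): void {
    const history = this.checkpoints.get(threadId) ?? [];
    history.push(structuredClone(state));
    this.checkpoints.set(threadId, history);
  }

  // Restore the most recent snapshot, enabling resumption across invocations.
  latest(threadId: string): State | undefined {
    return this.checkpoints.get(threadId)?.at(-1);
  }
}

const saver = new ToyCheckpointSaver();
const state: State = { messages: [] };
for (const step of ["user: hi", "ai: hello"]) {
  state.messages.push(step); // one "superstep" mutates the state
  saver.save("thread-1", state); // checkpoint after every superstep
}
```

With no checkpointer configured, nothing is saved between invocations, which matches the "disabled" case described above.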

property
checkpointSaver: BaseCheckpointSaver | boolean

An optional checkpoint saver to persist the agent's state.

property
contextSchema: C

An optional schema for the context.

property
description: string

The description of the compiled graph. This is used by the supervisor agent to describe the handoff to the agent.

property
includeAgentName: "inline" | undefined

Use to specify how to expose the agent name to the underlying supervisor LLM.

  • undefined: Relies on the LLM provider AIMessage#name. Currently, only OpenAI supports this.
  • "inline": Add the agent name directly into the content field of the AIMessage using XML-style tags. Example: "How can I help you?" -> "<name>agent_name</name><content>How can I help you?</content>"
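The "inline" convention above can be shown with a small helper. The function name is illustrative, not a library export:

```typescript
// Illustrative only: embed the agent name in the message content using the
// XML-style tags described for includeAgentName: "inline".
function addInlineAgentName(name: string, content: string): string {
  return `<name>${name}</name><content>${content}</content>`;
}

const tagged = addInlineAgentName("agent_name", "How can I help you?");
// tagged === "<name>agent_name</name><content>How can I help you?</content>"
```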

property
interruptAfter: N[] | All

Optional array of node names or "all" to interrupt after executing these nodes. Used for implementing human-in-the-loop workflows.

property
interruptBefore: N[] | All

Optional array of node names or "all" to interrupt before executing these nodes. Used for implementing human-in-the-loop workflows.

property
llm: LanguageModelLike | ((state: ToAnnotationRoot<A>["State"] & PreHookAnnotation["State"], runtime: Runtime<ToAnnotationRoot<C>["State"]>) => Promise<LanguageModelLike> | LanguageModelLike)

The chat model that can utilize OpenAI-style tool calling.

property
name: string

The name of the task, analogous to the node name in StateGraph.

property
postModelHook: RunnableLike<ToAnnotationRoot<A>["State"], ToAnnotationRoot<A>["Update"], LangGraphRunnableConfig>

An optional node to add after the agent node (i.e., the node that calls the LLM). Useful for implementing human-in-the-loop, guardrails, validation, or other post-processing.

property
preModelHook: RunnableLike<ToAnnotationRoot<A>["State"] & PreHookAnnotation["State"], ToAnnotationRoot<A>["Update"] & PreHookAnnotation["Update"], LangGraphRunnableConfig>

An optional node to add before the agent node (i.e., the node that calls the LLM). Useful for managing long message histories (e.g., message trimming, summarization, etc.).

property
prompt: Prompt

An optional prompt for the LLM. This takes the full graph state BEFORE the LLM is called and prepares the input to the LLM.

Can take a few different forms:

  • string: Converted to a SystemMessage and added to the beginning of the list of messages in state["messages"].
  • SystemMessage: Added to the beginning of the list of messages in state["messages"].
  • Function: Takes in the full graph state; the output is then passed to the language model.
  • Runnable: Takes in the full graph state; the output is then passed to the language model.

Note: Prior to v0.2.46, the prompt was set using the stateModifier / messageModifier parameters. These are now deprecated and will be removed in a future release.
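The string and function forms above can be sketched with simplified, self-contained types (these are not the library's internal types; they only illustrate how each prompt form shapes the model input):

```typescript
// Simplified stand-ins for the real message and state types.
type Message = { role: "system" | "user" | "assistant"; content: string };
type GraphState = { messages: Message[] };

// string form: converted to a system message prepended to state.messages.
function applyStringPrompt(prompt: string, state: GraphState): Message[] {
  return [{ role: "system", content: prompt }, ...state.messages];
}

// Function form: receives the full graph state and returns the model input.
function applyFunctionPrompt(
  prompt: (state: GraphState) => Message[],
  state: GraphState,
): Message[] {
  return prompt(state);
}

const state: GraphState = { messages: [{ role: "user", content: "hi" }] };
const input = applyStringPrompt("You are a helpful assistant.", state);
// input[0] is the system message, followed by the user message
```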

property
responseFormat: InteropZodType<StructuredResponseType> | StructuredResponseSchemaOptions<StructuredResponseType> | Record<string, any>

An optional schema for the final agent output.

If provided, output will be formatted to match the given schema and returned in the 'structuredResponse' state key. If not provided, structuredResponse will not be present in the output state.

Can be passed in as:

  • Zod schema
  • JSON schema
  • { prompt, schema }, where schema is one of the above. The prompt will be used together with the model that is being used to generate the structured response.

Important: responseFormat requires the model to support .withStructuredOutput().

Note: The graph will make a separate call to the LLM to generate the structured response after the agent loop is finished. This is not the only strategy to get structured responses, see more options in this guide.
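The accepted shapes can be illustrated with a plain JSON-schema object; a Zod schema would slot into the same positions. The field names inside the schema are made up for the example:

```typescript
// A plain JSON-schema object (illustrative field names).
const jsonSchema = {
  type: "object",
  properties: {
    answer: { type: "string" },
    confidence: { type: "number" },
  },
  required: ["answer"],
};

// The { prompt, schema } form: the prompt steers the separate
// structured-output call made after the agent loop finishes.
const responseFormat = {
  prompt: "Summarize the conversation into the schema below.",
  schema: jsonSchema,
};
```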

property
store: BaseStore

Optional long-term memory store for the graph; allows persistence and retrieval of data across threads.

property
tools: ToolNode | (ServerTool | ClientTool)[]

The tools for the agent to use: a list of tools or a ToolNode instance.

property
version: "v1" | "v2"

Determines the version of the graph to create.

Can be one of:

  • "v1": The tool node processes a single message. All tool calls in the message are executed in parallel within the tool node.
  • "v2": The tool node processes a single tool call. Tool calls are distributed across multiple instances of the tool node using the Send API.
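The dispatch difference between the two versions can be sketched conceptually. This is not the library's implementation; in the real "v2" graph the fan-out happens via the Send API:

```typescript
// Conceptual sketch of tool-node dispatch under "v1" vs "v2".
type ToolCall = { name: string; args: unknown };
type AIMsg = { toolCalls: ToolCall[] };

const runTool = (call: ToolCall): string => `ran ${call.name}`;

// "v1": one tool-node invocation handles the whole AI message; all of the
// message's tool calls execute inside that single invocation.
let v1Invocations = 0;
function toolNodeV1(message: AIMsg): string[] {
  v1Invocations += 1;
  return message.toolCalls.map(runTool);
}

// "v2": each tool call gets its own tool-node invocation (fanned out via the
// Send API in the real graph).
let v2Invocations = 0;
function toolNodeV2(call: ToolCall): string {
  v2Invocations += 1;
  return runTool(call);
}

const msg: AIMsg = {
  toolCalls: [{ name: "search", args: {} }, { name: "calculator", args: {} }],
};
const v1Results = toolNodeV1(msg); // one invocation, two results
const v2Results = msg.toolCalls.map((call) => toolNodeV2(call)); // two invocations
// both yield ["ran search", "ran calculator"]
```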
deprecated property
messageModifier: MessageModifier

Deprecated. Use prompt instead.

deprecated property
stateModifier: StateModifier

Deprecated. Use prompt instead.

deprecated property
stateSchema: A

View source on GitHub