Get the compiled StateGraph.
Draw the graph as a Mermaid string.
Optional params: parameters for the drawMermaid method.
- backgroundColor?: string (optional). The background color of the graph.
- curveStyle?: string (optional). The style of the graph's curves.
- nodeColors?: Record<string, string> (optional). The colors of the graph's nodes.
- withStyles?: boolean (optional). Whether to include styles in the graph.
- wrapLabelNWords?: number (optional). The maximum number of words to wrap in a node's label.

Returns: a Mermaid string.
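As an illustration of how these parameters fit together, here is a minimal sketch; the structural types below stand in for the real compiled graph object, and the specific parameter values are arbitrary:

```typescript
// Sketch: render a compiled graph as Mermaid markup.
interface MermaidParams {
  backgroundColor?: string;
  curveStyle?: string;
  nodeColors?: Record<string, string>;
  withStyles?: boolean;
  wrapLabelNWords?: number;
}

// Minimal structural type for anything exposing drawMermaid,
// e.g. the compiled graph obtained from the agent.
interface MermaidDrawable {
  drawMermaid(params?: MermaidParams): string;
}

function renderMermaid(graph: MermaidDrawable, params?: MermaidParams): string {
  // All parameters are optional; the library applies its own defaults.
  return graph.drawMermaid(params ?? { withStyles: true, wrapLabelNWords: 9 });
}
```

The returned string can be pasted into any Mermaid renderer to visualize the graph.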
Visualize the graph as a PNG image.
Optional params: parameters for the drawMermaidPng method.
- backgroundColor?: string (optional). The background color of the graph.
- curveStyle?: string (optional). The style of the graph's curves.
- nodeColors?: Record<string, string> (optional). The colors of the graph's nodes.
- withStyles?: boolean (optional). Whether to include styles in the graph.
- wrapLabelNWords?: number (optional). The maximum number of words to wrap in a node's label.

Returns: the PNG image as a buffer.
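Since the PNG comes back as a buffer, a common next step is writing it to disk. A minimal sketch, assuming a graph object exposing the drawMermaidPng method described above (the white background is an arbitrary choice):

```typescript
import { writeFile } from "node:fs/promises";

// Sketch: save the PNG rendering of a graph to disk.
async function saveGraphPng(
  graph: { drawMermaidPng(params?: { backgroundColor?: string }): Promise<Buffer> },
  path: string,
): Promise<void> {
  const png = await graph.drawMermaidPng({ backgroundColor: "white" });
  await writeFile(path, png); // raw PNG bytes
}
```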
Executes the agent with the given state and returns the final state after all processing.
This method runs the agent's entire workflow to completion, including model calls, tool execution, and any configured middleware.
The initial state for the agent execution. Can be a messages array plus any middleware-specific state properties.

Optional config: InvokeConfiguration, runtime configuration including:
- The context for the agent execution.
- LangGraph configuration options like thread_id, run_id, etc.
- The store for the agent execution, used for persisting state; see more in Memory storage.
- An optional AbortSignal for the agent execution.
- The recursion limit for the agent execution.
Returns a Promise that resolves to the final agent state after execution completes. The returned state includes:
- a messages property containing an array with all messages (input, AI responses, tool calls/results)
- a structuredResponse property containing the structured response (if configured)
- all state values defined in the middleware
const agent = new ReactAgent({
  llm: myModel,
  tools: [calculator, webSearch],
  responseFormat: z.object({
    weather: z.string(),
  }),
});

const result = await agent.invoke({
  messages: [{ role: "human", content: "What's the weather in Paris?" }],
});

console.log(result.structuredResponse.weather); // e.g. "It's sunny and 75°F."
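The runtime configuration options listed above can be sketched as follows. The field names here (configurable.thread_id, signal, recursionLimit) are assumptions based on the options described above, with thread_id nested under LangGraph's configurable key; AgentLike is a minimal structural type standing in for a real agent:

```typescript
// Sketch: pass runtime configuration to invoke.
interface AgentLike {
  invoke(
    state: { messages: Array<{ role: string; content: string }> },
    config?: {
      configurable?: { thread_id?: string };
      signal?: AbortSignal;
      recursionLimit?: number;
    },
  ): Promise<{ messages: unknown[] }>;
}

async function invokeWithConfig(agent: AgentLike) {
  const controller = new AbortController();
  return agent.invoke(
    { messages: [{ role: "human", content: "Continue our chat." }] },
    {
      configurable: { thread_id: "session-42" }, // LangGraph option: thread_id
      signal: controller.signal,                 // call controller.abort() to cancel
      recursionLimit: 25,                        // cap on execution steps
    },
  );
}
```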
Executes the agent with streaming, returning an async iterable of events as they occur.
This method runs the agent's workflow like invoke, but instead of waiting for completion it streams events in real time, so you can observe model tokens, tool calls, and middleware steps as they happen.
The initial state for the agent execution. Can be a messages array plus any middleware-specific state properties.

Optional config: StreamConfiguration, runtime configuration including:
- The context for the agent execution.
- LangGraph configuration options like thread_id, run_id, etc.
- The store for the agent execution, used for persisting state; see more in Memory storage.
- An optional AbortSignal for the agent execution.
- The streaming mode for the agent execution; see more in Supported stream modes.
- The recursion limit for the agent execution.
Returns a Promise that resolves to an IterableReadableStream of events. Events include:
- on_chat_model_start: when the LLM begins processing
- on_chat_model_stream: streaming tokens from the LLM
- on_chat_model_end: when the LLM completes
- on_tool_start: when a tool execution begins
- on_tool_end: when a tool execution completes
- on_chain_start: when middleware chains begin
- on_chain_end: when middleware chains complete
- and other LangGraph v2 stream events
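A minimal sketch of consuming this event stream, collecting LLM tokens as they arrive. The method name streamEvents and the event/data shapes are assumptions based on the event list above; StreamingAgent is a structural stand-in for a real agent:

```typescript
// Sketch: iterate the agent's event stream and accumulate streamed tokens.
type StreamEvent = { event: string; data?: { chunk?: { content?: string } } };

interface StreamingAgent {
  streamEvents(state: {
    messages: Array<{ role: string; content: string }>;
  }): AsyncIterable<StreamEvent>;
}

async function collectTokens(agent: StreamingAgent): Promise<string> {
  let text = "";
  for await (const ev of agent.streamEvents({
    messages: [{ role: "human", content: "Hello!" }],
  })) {
    if (ev.event === "on_chat_model_stream") {
      // Each on_chat_model_stream event carries a token chunk from the LLM.
      text += ev.data?.chunk?.content ?? "";
    }
  }
  return text;
}
```

In a UI, you would typically render each chunk as it arrives rather than accumulating the full string.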