Class · Since v0.3

ToolNode

A node that runs the tools requested in the last AIMessage. It can be used either in StateGraph with a "messages" key or in MessageGraph. If multiple tool calls are requested, they will be run in parallel. The output will be a list of ToolMessages, one for each tool call.

class ToolNode

Bases

RunnableCallable<T, T>

Used in Docs

  • Build a custom RAG agent with LangGraph
  • Build a custom SQL agent
  • How to set up a JavaScript application
  • Tools
  • Trace LangGraph applications

Example 1

import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { AIMessage } from "@langchain/core/messages";

const getWeather = tool((input) => {
  if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
    return "It's 60 degrees and foggy.";
  } else {
    return "It's 90 degrees and sunny.";
  }
}, {
  name: "get_weather",
  description: "Call to get the current weather.",
  schema: z.object({
    location: z.string().describe("Location to get the weather for."),
  }),
});

const tools = [getWeather];
const toolNode = new ToolNode(tools);

const messageWithSingleToolCall = new AIMessage({
  content: "",
  tool_calls: [
    {
      name: "get_weather",
      args: { location: "sf" },
      id: "tool_call_id",
      type: "tool_call",
    }
  ]
});

await toolNode.invoke({ messages: [messageWithSingleToolCall] });
// Returns tool invocation responses as:
// { messages: ToolMessage[] }

Example 2

import {
  StateGraph,
  MessagesAnnotation,
} from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatAnthropic } from "@langchain/anthropic";

const getWeather = tool((input) => {
  if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
    return "It's 60 degrees and foggy.";
  } else {
    return "It's 90 degrees and sunny.";
  }
}, {
  name: "get_weather",
  description: "Call to get the current weather.",
  schema: z.object({
    location: z.string().describe("Location to get the weather for."),
  }),
});

const tools = [getWeather];
const modelWithTools = new ChatAnthropic({
  model: "claude-3-haiku-20240307",
  temperature: 0
}).bindTools(tools);

const toolNodeForGraph = new ToolNode(tools);

const shouldContinue = (state: typeof MessagesAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1];
  if ("tool_calls" in lastMessage && Array.isArray(lastMessage.tool_calls) && lastMessage.tool_calls?.length) {
    return "tools";
  }
  return "__end__";
}

const callModel = async (state: typeof MessagesAnnotation.State) => {
  const { messages } = state;
  const response = await modelWithTools.invoke(messages);
  return { messages: response };
}

const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNodeForGraph)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent")
  .compile();

const inputs = {
  messages: [{ role: "user", content: "what is the weather in SF?" }],
};

const stream = await graph.stream(inputs, {
  streamMode: "values",
});

for await (const { messages } of stream) {
  console.log(messages);
}
// Returns the messages in the state at each step of execution

Constructors

constructor
constructor

Properties

property
config: RunnableConfig<Record<string, any>>

The default configuration for graph execution, can be overridden on a per-invocation basis

property
func: (args: any[]) => any
property
handleToolErrors: boolean
property
lc_kwargs: SerializedFields
property
lc_namespace: string[]

A path to the module that contains the class, e.g. ["langchain", "llms"]. Usually this should be the same as the entrypoint the class is exported from.

property
lc_runnable: boolean
property
lc_serializable: boolean
property
name: string

The name of the task, analogous to the node name in StateGraph.

property
recurse: boolean
property
tags: string[]
property
tools: (StructuredToolInterface<ToolInputSchemaBase, any, any> | RunnableToolLike<InteropZodType, unknown> | DynamicTool<any>)[]
property
trace: boolean
property
lc_aliases
property
lc_attributes
property
lc_id
property
lc_secrets
property
lc_serializable_keys

Methods

method
_batchWithConfig
method
_callWithConfig
method
_getOptionsList
method
_separateRunnableConfigFromCallOptions
method
_streamIterator→ AsyncGenerator<any>

Default streaming implementation. Subclasses should override this method if they support streaming output.

method
_streamLog
method
_tracedInvoke
method
_transformStreamWithConfig
method
assign
method
asTool
method
batch→ Promise<OperationResults<Op>>

Execute multiple operations in a single batch. This is more efficient than executing operations individually.

method
getName
method
invoke→ Promise<ExtractStateType<O, O>>

Run the graph with a single input and config.

method
pick
method
pipe→ PregelNode<RunInput, Exclude<NewRunOutput, Error>>

Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.

method
run
method
runTool
method
stream→ Promise<IterableReadableStream<StreamOutputMap<TStreamMode, TSubgraphs, ExtractUpdateType<I, ExtractStateType<I, I>>, ExtractStateType<O, O>, "__start__" | N, NodeReturnType, InferWriterType<WriterType>, TEncoding>>>

Streams the execution of the graph, emitting state updates as they occur. This is the primary method for observing graph execution in real-time.

Stream modes:

  • "values": Emits complete state after each step
  • "updates": Emits only state changes after each step
  • "debug": Emits detailed debug information
  • "messages": Emits messages from within nodes
  • "custom": Emits custom events from within nodes
  • "checkpoints": Emits checkpoints from within nodes
  • "tasks": Emits tasks from within nodes
method
streamEvents→ IterableReadableStream<StreamEvent>
method
streamLog
method
toJSON→ __type
method
toJSONNotImplemented
method
transform
method
withConfig→ CompiledStateGraph<S, U, N, I, O, C, NodeReturnType, InterruptType, WriterType>

Creates a new instance of the Pregel graph with updated configuration. This method follows the immutable pattern - instead of modifying the current instance, it returns a new instance with the merged configuration.

method
withFallbacks
method
withListeners
method
withRetry
method
isRunnable
method
lc_name→ string

The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.

Implemented as a static method to support loading logic.

deprecated method
getGraph→ Graph

Returns a drawable representation of the computation graph.

View source on GitHub