LangChain Reference
Type • Since v1.1

ToolRuntime

Runtime context automatically injected into tools.

When a tool function declares a second parameter typed as ToolRuntime, the tool execution system automatically injects an instance containing:

  • state: The current graph state
  • toolCallId: The ID of the current tool call
  • config: RunnableConfig for the current execution
  • context: Runtime context
  • store: BaseStore instance for persistent storage
  • writer: Stream writer for streaming output

No wrapper type is needed: simply declare runtime: ToolRuntime as the tool function's second parameter.
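The injection mechanism can be sketched in plain TypeScript. This is a conceptual illustration only, not the library's implementation; `executeTool` and `Runtime` are hypothetical names standing in for the real executor:

```typescript
// Conceptual sketch: the executor, not the caller, assembles the runtime
// object and passes it as the handler's second argument.
type Runtime<State, Context> = {
  state: State;
  toolCallId: string;
  context?: Context;
};

function executeTool<Input, State, Context>(
  handler: (input: Input, runtime: Runtime<State, Context>) => string,
  input: Input,
  state: State,
  toolCallId: string,
  context?: Context
): string {
  // Assemble the runtime from execution-scoped values, then inject it.
  const runtime: Runtime<State, Context> = { state, toolCallId, context };
  return handler(input, runtime);
}

// A handler that reads injected state, mirroring the ToolRuntime pattern.
const result = executeTool(
  ({ name }: { name: string }, runtime) =>
    `Hello ${name} (call ${runtime.toolCallId}, user ${runtime.state.userId})`,
  { name: "Ada" },
  { userId: "u42" },
  "call_1"
);
console.log(result); // "Hello Ada (call call_1, user u42)"
```

The point of the pattern is that tool authors never construct the runtime themselves; they only declare the parameter and the executor fills it in.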

ToolRuntime: RunnableConfig & __type
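TypeDoc renders the right-hand side of the intersection as an anonymous `__type`. Based on the members listed above, that anonymous part plausibly expands to something like the following sketch; the real declaration lives in @langchain/core and may differ in names and optionality:

```typescript
// Hypothetical expansion of the anonymous "__type" member, inferred from the
// documented fields; not the library's actual declaration.
interface RunnableConfigLike {
  runId?: string;
  tags?: string[];
  configurable?: Record<string, unknown>;
}

type ToolRuntimeSketch<State = unknown, Context = unknown> = RunnableConfigLike & {
  state: State;                                                // current graph state
  toolCallId: string;                                          // ID of the current tool call
  config: RunnableConfigLike;                                  // config for this execution
  context?: Context;                                           // runtime context
  store?: { mset(pairs: [string, unknown][]): Promise<void> }; // BaseStore-like persistence
  writer?: (chunk: unknown) => void;                           // stream writer
};

// A value satisfying the sketch:
const rt: ToolRuntimeSketch<{ userId: string }, { locale: string }> = {
  state: { userId: "u42" },
  toolCallId: "call_1",
  config: { runId: "run_1" },
  context: { locale: "en" },
};
console.log(rt.toolCallId);
```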

Used in Docs

  • Handoffs
  • Tools

Example

import { tool, type ToolRuntime } from "@langchain/core/tools";
import { createAgent } from "langchain";
import { z } from "zod";

const stateSchema = z.object({
  messages: z.array(z.any()),
  userId: z.string().optional(),
});

const greet = tool(
  async ({ name }, runtime: ToolRuntime<typeof stateSchema>) => {
    // Access state
    const messages = runtime.state.messages;

    // Access toolCallId
    console.log(`Tool call ID: ${runtime.toolCallId}`);

    // Access config
    console.log(`Run ID: ${runtime.config.runId}`);

    // Access runtime context
    const userId = runtime.context?.userId;

    // Access store
    await runtime.store?.mset([["key", "value"]]);

    // Stream output
    runtime.writer?.("Processing...");

    return `Hello! User ID: ${runtime.state.userId || "unknown"} ${name}`;
  },
  {
    name: "greet",
    description: "Use this to greet the user once you found their info.",
    schema: z.object({ name: z.string() }),
    stateSchema,
  }
);

// model and contextSchema are assumed to be defined elsewhere
const agent = createAgent({
  model,
  tools: [greet],
  stateSchema,
  contextSchema,
});