LangChain Reference
JavaScript › langchain › index › ReactAgent › stream
Methodā—Since v1.0

stream

Executes the agent with streaming, returning an async iterable of state updates as they occur.

This method runs the agent's workflow just like invoke, but instead of waiting for completion it streams high-level state updates in real time. This allows you to:

  • Display intermediate results to users as they're generated
  • Monitor the agent's progress through each step
  • React to state changes as nodes complete

For more granular event-level streaming (like individual LLM tokens), use streamEvents instead.

stream<
  TStreamMode extends StreamMode | StreamMode[] | undefined,
  TSubgraphs extends boolean,
  TEncoding extends "text/event-stream" | undefined
>(
  state: InvokeStateParameter<Types>,
  config: StreamConfiguration<InferContextInput<Types["Context"] extends InteropZodObject | AnyAnnotationRoot ? any[any] : AnyAnnotationRoot> & InferMiddlewareContextInputs<Types["Middleware"]>, TStreamMode, TSubgraphs, TEncoding>
): Promise<IterableReadableStream<StreamOutputMap<TStreamMode, TSubgraphs, MergedAgentState<Types>, MergedAgentState<Types>, string, unknown, unknown, TEncoding>>>

Parameters

state*: InvokeStateParameter<Types>

The initial state for the agent execution. Can be:

  • An object containing messages array and any middleware-specific state properties
  • A Command object for more advanced control flow
config: StreamConfiguration<InferContextInput<Types["Context"] extends InteropZodObject | AnyAnnotationRoot ? any[any] : AnyAnnotationRoot> & InferMiddlewareContextInputs<Types["Middleware"]>, TStreamMode, TSubgraphs, TEncoding>

Optional runtime configuration, including the stream mode, whether to stream updates from subgraphs, and the response encoding (the TStreamMode, TSubgraphs, and TEncoding type parameters above).
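As a hedged illustration of what this configuration can carry (the field names assume LangGraph-style stream options; check the types shipped with your installed version):

```typescript
// Sketch of common stream options; field availability may vary by version.
const streamOptions = {
  // "updates" yields per-node state deltas; "values" yields the full
  // state after each step.
  streamMode: "updates" as const,
  // Also surface updates emitted inside nested subgraphs.
  subgraphs: true,
};
```

An object like this would be passed as the second argument: `agent.stream(state, streamOptions)`.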

Example

const agent = new ReactAgent({
  llm: myModel,
  tools: [calculator, webSearch]
});

const stream = await agent.stream({
  messages: [{ role: "human", content: "What's 2+2 and the weather in NYC?" }]
});

for await (const chunk of stream) {
  console.log(chunk); // State update from each node
}
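To show the consumption pattern without a live model, here is a self-contained sketch in which a mock async generator stands in for the agent's stream. The node names and update shapes are purely illustrative, not the library's actual output:

```typescript
// Mock of update-level streaming: each yielded chunk maps a node name
// to that node's state update, mirroring the shape `stream` yields.
// All names here are illustrative, not part of the LangChain API.
type StateUpdate = Record<string, { messages: { role: string; content: string }[] }>;

async function* mockAgentStream(): AsyncGenerator<StateUpdate> {
  yield { model: { messages: [{ role: "ai", content: "Calling calculator..." }] } };
  yield { tools: { messages: [{ role: "tool", content: "4" }] } };
  yield { model: { messages: [{ role: "ai", content: "2 + 2 = 4" }] } };
}

async function collectNodeNames(): Promise<string[]> {
  const seen: string[] = [];
  for await (const chunk of mockAgentStream()) {
    // Each chunk is keyed by the node that just completed.
    seen.push(...Object.keys(chunk));
  }
  return seen;
}
```

The same `for await … of` loop works on the real stream, since it is an async iterable of state updates.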