Executes the agent with streaming, returning an async iterable of state updates as they occur.
This method runs the agent's workflow like `invoke`, but instead of waiting for completion it streams high-level state updates in real time, letting you observe the agent's progress as each node completes.

For more granular event-level streaming (such as individual LLM tokens), use `streamEvents` instead.
```typescript
stream<
  TStreamMode extends StreamMode | StreamMode[] | undefined,
  TSubgraphs extends boolean,
  TEncoding extends "text/event-stream" | undefined
>(
  state: InvokeStateParameter<Types>,
  config?: StreamConfiguration<
    InferContextInput<Types["Context"] extends InteropZodObject | AnyAnnotationRoot ? any[any] : AnyAnnotationRoot> &
      InferMiddlewareContextInputs<Types["Middleware"]>,
    TStreamMode,
    TSubgraphs,
    TEncoding
  >
): Promise<IterableReadableStream<StreamOutputMap<TStreamMode, TSubgraphs, MergedAgentState<Types>, MergedAgentState<Types>, string, unknown, unknown, TEncoding>>>
```

| Name | Type | Description |
|---|---|---|
| `state`* | `InvokeStateParameter<Types>` | The initial state for the agent execution. |
| `config` | `StreamConfiguration<InferContextInput<Types["Context"] extends InteropZodObject \| AnyAnnotationRoot ? any[any] : AnyAnnotationRoot> & InferMiddlewareContextInputs<Types["Middleware"]>, TStreamMode, TSubgraphs, TEncoding>` | Optional runtime configuration, including the stream mode and subgraph options. |
```typescript
// Create an agent with a model and tools
const agent = new ReactAgent({
  llm: myModel,
  tools: [calculator, webSearch],
});

// Stream state updates as each node completes
const stream = await agent.stream({
  messages: [{ role: "human", content: "What's 2+2 and the weather in NYC?" }],
});

for await (const chunk of stream) {
  console.log(chunk); // State update from each node
}
```
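Each chunk yielded by the stream is an object keyed by the node that produced the update. The sketch below shows the consumption pattern in self-contained form: the agent is replaced by a stubbed async generator (a hypothetical stand-in, since the example above requires a live model and tools), but the iteration logic is the same.

```typescript
// Hypothetical shape of an "updates" chunk: node name -> partial state.
type StateUpdate = Record<string, { messages: string[] }>;

// Stub standing in for agent.stream(); yields updates in node order.
async function* stubbedStream(): AsyncGenerator<StateUpdate> {
  yield { agent: { messages: ["calling calculator and webSearch"] } };
  yield { tools: { messages: ["4", "72°F and sunny"] } };
  yield { agent: { messages: ["2 + 2 = 4, and NYC is 72°F and sunny."] } };
}

// Collect the name of each node as its update arrives.
async function nodeOrder(): Promise<string[]> {
  const order: string[] = [];
  for await (const chunk of stubbedStream()) {
    order.push(...Object.keys(chunk));
  }
  return order;
}
```

Keying chunks by node name lets a UI attribute each incremental update to the step that produced it, rather than waiting for the final merged state.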