stream_events(
    self,
    input: LanguageModelInput,
    config: RunnableConfig | None = None,
    *,
    version: Literal['v1', 'v2', 'v3'] = 'v2',
    stop: list[str] | None = None,
    **kwargs: Any,
)
| Name | Type | Description |
|---|---|---|
| `input`* | `LanguageModelInput` | The model input. |
| `config` | `RunnableConfig \| None` | Optional runnable config. Default: `None`. |
| `version` | `Literal['v1', 'v2', 'v3']` | Streaming-event schema version. Default: `'v2'`. |
| `stop` | `list[str] \| None` | Optional stop sequences. Only used for `version="v3"`; ignored otherwise. Default: `None`. |
| `**kwargs` | `Any` | Additional keyword arguments. For `version="v3"`, forwarded to the model. Default: `{}`. |
Stream events from this chat model.

For `version="v1"` / `"v2"`, yields `StreamEvent` dicts (see `Runnable.stream_events`). For `version="v3"`, returns a `ChatModelStream` exposing typed projections (`.text`, `.reasoning`, `.tool_calls`, `.output`).
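A minimal usage sketch of both modes. The `model` instance and prompt are illustrative, and the `v3` lines assume `.text` yields incremental text deltas and `.output` is the accumulated final message; those semantics may change while the protocol is in beta.

```python
from langchain_core.messages import HumanMessage

messages = [HumanMessage("Write a haiku about rivers.")]

# version="v1"/"v2": iterate plain StreamEvent dicts.
for event in model.stream_events(messages, version="v2"):
    if event["event"] == "on_chat_model_stream":
        # chunk content; its shape depends on the model's output_version
        print(event["data"]["chunk"].content, end="", flush=True)

# version="v3": the call returns a ChatModelStream instead of yielding dicts.
# stop sequences are only honored on this path.
stream = model.stream_events(messages, version="v3", stop=["\n\n"])
for delta in stream.text:        # assumed: incremental text chunks
    print(delta, end="", flush=True)
final_message = stream.output    # assumed: the fully accumulated message
```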
version="v3" is in beta. The protocol shape, return type,
and surface area may change in future releases. Calling it
emits a LangChainBetaWarning at runtime.
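If that warning is noisy in logs, it can be filtered like any other Python warning. The import path shown below is an assumption and may differ between `langchain-core` versions.

```python
import warnings

from langchain_core._api import LangChainBetaWarning  # import path assumed; may vary by version

warnings.filterwarnings("ignore", category=LangChainBetaWarning)
```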
`ChatModelStream.output.content` is always a list of v1 content blocks (text / reasoning / tool_call / image / …), regardless of the model's `output_version` attribute. That setting only affects the legacy `stream()` / `astream()` / `invoke()` paths. If you're mixing `stream_events(version="v3")` with those paths in the same pipeline and need a consistent output shape across them, set `output_version="v1"` on the model.
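For example, under the assumption that the chat-model class accepts `output_version` as a constructor argument (the `ChatOpenAI` class and model name here are only illustrative):

```python
from langchain_openai import ChatOpenAI

# output_version="v1" makes the legacy paths emit v1 content blocks too,
# matching what stream_events(version="v3") always produces.
model = ChatOpenAI(model="gpt-4o-mini", output_version="v1")

# Legacy path: message.content is now a list of v1 content blocks.
message = model.invoke("Name one large river.")

# v3 path: stream.output.content is a list of v1 content blocks regardless of the setting.
stream = model.stream_events("Name one large river.", version="v3")
```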