```python
stream_v2(
    self,
    input: LanguageModelInput,
    config: RunnableConfig | None = None,
    *,
    stop: list[str] | None = None,
    **kwargs: Any,
) -> ChatModelStream
```

Stream content-block lifecycle events for a single model call. Returns a ChatModelStream with typed projections (.text, .reasoning, .tool_calls, .output).

This API is experimental and may change.

ChatModelStream.output.content is always a list of v1 content blocks (text / reasoning / tool_call / image / …), regardless of the model's output_version attribute. That setting only affects the legacy stream() / astream() / invoke() paths. If you're mixing stream_v2 with those paths in the same pipeline and need a consistent output shape across them, set output_version="v1" on the model.

| Name | Type | Description |
|---|---|---|
| input* | LanguageModelInput | The model input. |
| config | RunnableConfig \| None | Optional runnable config. Default: None |
| stop | list[str] \| None | Optional list of stop words. Default: None |
| **kwargs | Any | Additional keyword arguments passed to the model. Default: {} |
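To make the typed projections concrete, here is a minimal, self-contained sketch. The `ChatModelStreamStub` class below is a hypothetical stand-in, not the real ChatModelStream: it only illustrates how projections like `.text` and `.tool_calls` can be derived from a list of v1 content blocks, and the block dicts shown are illustrative shapes, not the library's exact schema.

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ChatModelStreamStub:
    """Hypothetical stand-in for ChatModelStream (illustration only)."""

    # A list of v1-style content blocks: text / reasoning / tool_call / ...
    blocks: list[dict[str, Any]] = field(default_factory=list)

    @property
    def text(self) -> str:
        # Project out and concatenate only the text blocks.
        return "".join(b["text"] for b in self.blocks if b["type"] == "text")

    @property
    def reasoning(self) -> str:
        # Project out and concatenate only the reasoning blocks.
        return "".join(b["reasoning"] for b in self.blocks if b["type"] == "reasoning")

    @property
    def tool_calls(self) -> list[dict[str, Any]]:
        # Project out only the tool-call blocks.
        return [b for b in self.blocks if b["type"] == "tool_call"]


# Example: one heterogeneous block list, three typed views of it.
stream = ChatModelStreamStub(
    blocks=[
        {"type": "reasoning", "reasoning": "Need the weather tool."},
        {"type": "tool_call", "name": "get_weather", "args": {"city": "Paris"}},
        {"type": "text", "text": "Checking the weather in Paris."},
    ]
)
print(stream.text)             # -> Checking the weather in Paris.
print(len(stream.tool_calls))  # -> 1
```

Because the projections filter one shared block list rather than storing separate fields, every view stays consistent with the underlying v1 content no matter which block types the model emits.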