Tracer that streams run logs to an output stream as the run progresses.
LogStreamCallbackHandler(
self,
*,
auto_close: bool = True,
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
_schema_format: Literal['original', 'streaming_events'] = 'streaming_events'
)

Bases: `BaseTracer`, `_StreamingCallbackHandler`

| Name | Type | Description |
|---|---|---|
| auto_close | bool | Default: `True`. Whether to close the stream when the root run finishes. |
| include_names | Sequence[str] \| None | Default: `None`. Only include runs from `Runnable`s with matching names. |
| include_types | Sequence[str] \| None | Default: `None`. Only include runs from `Runnable`s with matching types. |
| include_tags | Sequence[str] \| None | Default: `None`. Only include runs from `Runnable`s with matching tags. |
| exclude_names | Sequence[str] \| None | Default: `None`. Exclude runs from `Runnable`s with matching names. |
| exclude_types | Sequence[str] \| None | Default: `None`. Exclude runs from `Runnable`s with matching types. |
| exclude_tags | Sequence[str] \| None | Default: `None`. Exclude runs from `Runnable`s with matching tags. |
| _schema_format | Literal['original', 'streaming_events'] | Default: `'streaming_events'`. Primarily changes how the inputs and outputs are handled. For internal use only. This API will change. |
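The include/exclude parameters combine as follows: when no include filter is given, every run is considered; otherwise a run must match at least one include filter, and exclude filters always take precedence. A minimal sketch of that selection logic (the helper name `should_include_run` is illustrative, not part of the public API):

```python
from typing import Optional, Sequence


def should_include_run(
    name: str,
    run_type: str,
    tags: Sequence[str],
    *,
    include_names: Optional[Sequence[str]] = None,
    include_types: Optional[Sequence[str]] = None,
    include_tags: Optional[Sequence[str]] = None,
    exclude_names: Optional[Sequence[str]] = None,
    exclude_types: Optional[Sequence[str]] = None,
    exclude_tags: Optional[Sequence[str]] = None,
) -> bool:
    """Decide whether a run passes the include/exclude filters.

    No include filter set -> include everything by default; otherwise a
    run must match at least one include filter. Exclude filters always
    win and remove matching runs.
    """
    if include_names is None and include_types is None and include_tags is None:
        include = True
    else:
        include = (
            (include_names is not None and name in include_names)
            or (include_types is not None and run_type in include_types)
            or (include_tags is not None and any(t in include_tags for t in tags))
        )
    # Exclude filters are applied last and take precedence.
    if exclude_names is not None and name in exclude_names:
        include = False
    if exclude_types is not None and run_type in exclude_types:
        include = False
    if exclude_tags is not None and any(t in exclude_tags for t in tags):
        include = False
    return include
```

For example, `should_include_run("retriever", "retriever", ["final"], include_tags=["final"], exclude_names=["retriever"])` returns `False`, because the exclusion by name overrides the tag match.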
Methods:

- `on_chat_model_start`: Start a trace for a chat model run.
- `on_llm_start`: Start a trace for a (non-chat model) LLM run.
- `on_llm_new_token`: Run on new output token.
- `on_retry`: Run on retry.
- `on_llm_end`: End a trace for a model run.
- `on_llm_error`: Handle an error for an LLM run.
- `on_chain_start`: Start a trace for a chain run.
- `on_chain_end`: End a trace for a chain run.
- `on_chain_error`: Handle an error for a chain run.
- `on_tool_start`: Start a trace for a tool run.
- `on_tool_end`: End a trace for a tool run.
- `on_tool_error`: Run when tool errors.
- `on_retriever_start`: Run when Retriever starts running.
- `on_retriever_error`: Run when Retriever errors.
- `on_retriever_end`: Run when Retriever ends running.
Attributes:

- `raise_error`: Whether to raise an error if an exception occurs.
- `ignore_llm`: Whether to ignore LLM callbacks.
- `ignore_retry`: Whether to ignore retry callbacks.
- `ignore_chain`: Whether to ignore chain callbacks.
- `ignore_agent`: Whether to ignore agent callbacks.
- `ignore_retriever`: Whether to ignore retriever callbacks.
- `ignore_chat_model`: Whether to ignore chat model callbacks.
- `ignore_custom_event`: Whether to ignore custom events.
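Conceptually, the handler emits a stream of `RunLogPatch` objects, each carrying a list of JSON Patch operations that a consumer folds into a cumulative run log. A simplified sketch of that folding step, assuming only `add`/`replace` operations on plain dict paths (the real implementation relies on the `jsonpatch` library with full JSON Pointer semantics):

```python
import copy


def apply_patch_ops(state: dict, ops: list) -> dict:
    """Fold a list of JSON-Patch-style operations into a run-log state.

    Supports only 'add' and 'replace' on slash-separated dict paths;
    intermediate dicts are created as needed. This is a conceptual
    sketch, not the library's actual patch application code.
    """
    state = copy.deepcopy(state)  # leave the caller's state untouched
    for op in ops:
        parts = [p for p in op["path"].split("/") if p]
        target = state
        for key in parts[:-1]:
            target = target.setdefault(key, {})
        if op["op"] in ("add", "replace"):
            target[parts[-1]] = op["value"]
    return state


# Folding a sequence of patches reconstructs the cumulative log:
patches = [
    [{"op": "add", "path": "/streamed_output", "value": []}],
    [{"op": "replace", "path": "/streamed_output", "value": ["Hello"]}],
    [{"op": "add", "path": "/final_output", "value": "Hello"}],
]
log = {}
for ops in patches:
    log = apply_patch_ops(log, ops)
```

After the loop, `log` holds both the streamed output and the final output, mirroring how a consumer of this handler accumulates patches into a complete run log.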