Implementation of the SharedTracer that POSTS to the LangChain endpoint.
```python
LangChainTracer(
    self,
    example_id: UUID | str | None = None,
    project_name: str | None = None,
    client: Client | None = None,
    tags: list[str] | None = None,
    **kwargs: Any,
)
```

| Name | Type | Description |
|---|---|---|
| `example_id` | `UUID \| str \| None` | The example ID. Default: `None`. |
| `project_name` | `str \| None` | The project name. Defaults to the tracer project. Default: `None`. |
| `client` | `Client \| None` | The client. Defaults to the global client. Default: `None`. |
| `tags` | `list[str] \| None` | The tags. Defaults to an empty list. Default: `None`. |
| `**kwargs` | `Any` | Additional keyword arguments. |
| Method | Description |
|---|---|
| `on_llm_start` | Start a trace for a (non-chat model) LLM run. |
| `on_llm_new_token` | Run on new output token. |
| `on_retry` | Run on retry. |
| `on_llm_end` | End a trace for a model run. |
| `on_llm_error` | Handle an error for an LLM run. |
| `on_chain_start` | Start a trace for a chain run. |
| `on_chain_end` | End a trace for a chain run. |
| `on_chain_error` | Handle an error for a chain run. |
| `on_tool_start` | Start a trace for a tool run. |
| `on_tool_end` | End a trace for a tool run. |
| `on_tool_error` | Run when tool errors. |
| `on_retriever_start` | Run when Retriever starts running. |
| `on_retriever_error` | Run when Retriever errors. |
| `on_retriever_end` | Run when Retriever ends running. |
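The start/end/error methods above share one run lifecycle: each `on_*_start` call opens a run, and the matching `on_*_end` or `on_*_error` call closes it and hands it off for persistence. A stdlib-only sketch of that pattern (class and helper names here are illustrative, not LangChain internals):

```python
import time
from uuid import uuid4

class LifecycleSketch:
    """Illustrative run lifecycle: start opens a run, end/error closes and persists it."""

    def __init__(self) -> None:
        self.open_runs = {}   # run_id -> in-flight run dict
        self.persisted = []   # closed runs handed to the (simulated) endpoint

    def _start(self, run_type: str, name: str):
        run_id = uuid4()
        self.open_runs[run_id] = {
            "id": run_id,
            "type": run_type,
            "name": name,
            "start_time": time.time(),
            "error": None,
        }
        return run_id

    def _end(self, run_id, error=None) -> None:
        run = self.open_runs.pop(run_id)
        run["end_time"] = time.time()
        run["error"] = error
        self.persisted.append(run)   # a real tracer would POST the run here

    # The chain/tool/retriever variants follow the same start/end/error pattern.
    def on_llm_start(self, name: str):
        return self._start("llm", name)

    def on_llm_end(self, run_id) -> None:
        self._end(run_id)

    def on_llm_error(self, run_id, err: BaseException) -> None:
        self._end(run_id, error=repr(err))

sketch = LifecycleSketch()
run_id = sketch.on_llm_start("my-model")
sketch.on_llm_end(run_id)
```

After the `on_llm_end` call, the run has moved from `open_runs` to `persisted`.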
| Attribute | Description |
|---|---|
| `raise_error` | Whether to raise an error if an exception occurs. |
| `ignore_llm` | Whether to ignore LLM callbacks. |
| `ignore_retry` | Whether to ignore retry callbacks. |
| `ignore_chain` | Whether to ignore chain callbacks. |
| `ignore_agent` | Whether to ignore agent callbacks. |
| `ignore_retriever` | Whether to ignore retriever callbacks. |
| `ignore_chat_model` | Whether to ignore chat model callbacks. |
| `ignore_custom_event` | Whether to ignore custom events. |
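These flags let a handler opt out of whole callback families. Conceptually, the callback manager checks the matching `ignore_*` flag before dispatching an event to a handler, and `raise_error` controls whether exceptions raised inside the handler propagate. A stdlib sketch of that gating, under the assumption stated (this is not LangChain's actual dispatcher):

```python
class ToyHandler:
    """Illustrative handler; the flags mirror the attributes above."""

    ignore_llm = False
    ignore_chain = True    # this handler opts out of chain callbacks
    raise_error = False    # swallow handler exceptions instead of raising

    def __init__(self) -> None:
        self.seen: list[tuple[str, str]] = []

    def on_llm_start(self, name: str) -> None:
        self.seen.append(("llm", name))

    def on_chain_start(self, name: str) -> None:
        self.seen.append(("chain", name))

def dispatch(handlers, ignore_attr: str, method: str, *args) -> None:
    for h in handlers:
        # Skip handlers that asked to ignore this event family.
        if getattr(h, ignore_attr, False):
            continue
        try:
            getattr(h, method)(*args)
        except Exception:
            if getattr(h, "raise_error", False):
                raise   # only propagate when the handler asks for it

handler = ToyHandler()
dispatch([handler], "ignore_llm", "on_llm_start", "my-model")
dispatch([handler], "ignore_chain", "on_chain_start", "my-chain")
# handler.seen now holds only the LLM event; the chain event was skipped.
```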