A callback handler that collects traced runs and makes it easy to fetch the traced run object from calls made through any LangChain object. For example, you can fetch a run's ID and use it for follow-up actions such as logging feedback.
class RunCollectorCallbackHandler

The ID of the example.
An array of traced runs.
A path to the module that contains the class, e.g. ["langchain", "llms"]. Usually should be the same as the entrypoint the class is exported from.
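A minimal usage sketch: collect the run produced by a call, then use its ID to log feedback. The import paths, the ChatOpenAI model, and the LangSmith feedback call are assumptions; adjust them to your package layout and tracing setup.

```typescript
import { RunCollectorCallbackHandler } from "@langchain/core/tracers/run_collector";
import { ChatOpenAI } from "@langchain/openai";
import { Client } from "langsmith";

const runCollector = new RunCollectorCallbackHandler();
const model = new ChatOpenAI();

// Pass the handler through the call options so it receives the traced runs.
await model.invoke("Hello!", { callbacks: [runCollector] });

// The root run of the call is now available on the handler.
const run = runCollector.tracedRuns[0];
console.log(run.id);

// With the run ID in hand you can, for example, log feedback in LangSmith
// (assumes a configured LangSmith client and API key).
const client = new Client();
await client.createFeedback(run.id, "user_score", { score: 1 });
```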
Create and add a run to the run map for chain start events. This must sometimes be done synchronously to avoid race conditions when callbacks are backgrounded, so we expose it as a separate method here.
Create and add a run to the run map for chat model start events. This must sometimes be done synchronously to avoid race conditions when callbacks are backgrounded, so we expose it as a separate method here.
Create and add a run to the run map for LLM start events. This must sometimes be done synchronously to avoid race conditions when callbacks are backgrounded, so we expose it as a separate method here.
Create and add a run to the run map for retriever start events. This must sometimes be done synchronously to avoid race conditions when callbacks are backgrounded, so we expose it as a separate method here.
Create and add a run to the run map for tool start events. This must sometimes be done synchronously to avoid race conditions when callbacks are backgrounded, so we expose it as a separate method here.
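An illustrative sketch of why these start-event helpers create runs synchronously (this is not the library's internals, only the pattern): if the parent run were only registered inside a deferred, backgrounded handler, a child start event could be processed before its parent exists in the run map.

```typescript
type RunSketch = { id: string; parentRunId?: string; childRunIds: string[] };

class RunMapSketch {
  private runMap = new Map<string, RunSketch>();

  // Synchronous creation: the run is visible in the map before any
  // child start event for it can be handled.
  createRunForChainStart(runId: string, parentRunId?: string): RunSketch {
    const run: RunSketch = { id: runId, parentRunId, childRunIds: [] };
    if (parentRunId) {
      this.runMap.get(parentRunId)?.childRunIds.push(runId);
    }
    this.runMap.set(runId, run);
    return run;
  }
}
```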
Called when an agent is about to execute an action, with the action and the run ID.
Called when an agent finishes execution, before it exits, with the final output and the run ID.
Called at the end of a Chain run, with the outputs and the run ID.
Called if a Chain run encounters an error.
Called at the start of a Chain run, with the chain name and inputs and the run ID.
Called at the start of a Chat Model run, with the prompt(s) and the run ID.
Called at the end of an LLM/ChatModel run, with the output and the run ID.
Called if an LLM/ChatModel run encounters an error.
Called when an LLM/ChatModel running in streaming mode produces a new token.
Called at the start of an LLM or Chat Model run, with the prompt(s) and the run ID.
Called at the end of a Tool run, with the tool output and the run ID.
Called if a Tool run encounters an error.
Called at the start of a Tool run, with the tool name and input and the run ID.
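These hooks can also be overridden in a custom handler. A hedged sketch, assuming the BaseCallbackHandler export from @langchain/core; parameter types are simplified here and may differ slightly from the library's full signatures.

```typescript
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class LoggingHandler extends BaseCallbackHandler {
  name = "logging_handler";

  // Log chain inputs with the run ID when a chain starts.
  async handleChainStart(chain: Record<string, any>, inputs: Record<string, any>, runId: string) {
    console.log(`chain start ${runId}`, inputs);
  }

  // Stream tokens as they arrive from an LLM/ChatModel in streaming mode.
  async handleLLMNewToken(token: string) {
    process.stdout.write(token);
  }

  // Log chain outputs with the run ID when a chain ends.
  async handleChainEnd(outputs: Record<string, any>, runId: string) {
    console.log(`chain end ${runId}`, outputs);
  }

  // Report chain errors with the run ID.
  async handleChainError(err: any, runId: string) {
    console.error(`chain error ${runId}`, err);
  }
}

// Usage: pass an instance in the call options, e.g.
// await chain.invoke(input, { callbacks: [new LoggingHandler()] });
```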
The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.
Implemented as a static method to support loading logic.
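A hedged sketch of overriding lc_name and lc_namespace in a Serializable subclass so the serialized name survives minification. The Serializable import path and the class/namespace names are assumptions for illustration.

```typescript
import { Serializable } from "@langchain/core/load/serializable";

class MyCustomComponent extends Serializable {
  // Static so loading logic can read the name without instantiating the class.
  static lc_name() {
    return "MyCustomComponent";
  }

  // Should match the entrypoint the class is exported from.
  get lc_namespace() {
    return ["langchain", "custom"];
  }

  lc_serializable = true;
}
```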