Callback handlers allow listening to events in LangChain.
Class hierarchy:
.. code-block::

    BaseCallbackHandler --> <name>CallbackHandler  # Example: AimCallbackHandler
Get the OpenAI callback handler in a context manager, which conveniently exposes token and cost information.
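For example, a minimal sketch (assumes an OpenAI model and that the OPENAI_API_KEY environment variable is set):

.. code-block:: python

    from langchain_community.callbacks import get_openai_callback
    from langchain_community.llms import OpenAI

    llm = OpenAI()

    with get_openai_callback() as cb:
        llm.invoke("Tell me a joke")
        # cb accumulates usage for every OpenAI call made inside the block
        print(f"Total tokens: {cb.total_tokens}")
        print(f"Estimated cost (USD): {cb.total_cost}")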
Get the WandbTracer in a context manager.
Callback Handler that writes to a Streamlit app.
This CallbackHandler is geared towards use with a LangChain Agent; it displays the Agent's LLM and tool-usage "thoughts" inside a series of Streamlit expanders.
parent_container
    The st.container that will contain all the Streamlit elements that the
    Handler creates.
max_thought_containers
    The max number of completed LLM thought containers to show at once. When this
    threshold is reached, a new thought will cause the oldest thoughts to be
    collapsed into a "History" expander. Defaults to 4.
expand_new_thoughts
    Each LLM "thought" gets its own st.expander. This param controls whether that
    expander is expanded by default. Defaults to True.
collapse_completed_thoughts
    If True, LLM thought expanders will be collapsed when completed.
    Defaults to True.
thought_labeler
    An optional custom LLMThoughtLabeler instance. If unspecified, the handler
    will use the default thought labeling logic. Defaults to None.
Returns: a new StreamlitCallbackHandler instance.
Note that this is an "auto-updating" API: if the installed version of Streamlit has a more recent StreamlitCallbackHandler implementation, an instance of that class will be used.
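A minimal usage sketch (assumes this runs inside a Streamlit script; `agent_executor` is a hypothetical, already-constructed LangChain agent):

.. code-block:: python

    import streamlit as st
    from langchain_community.callbacks.streamlit import StreamlitCallbackHandler

    # Render the agent's "thoughts" inside the current Streamlit container
    st_callback = StreamlitCallbackHandler(
        st.container(),
        max_thought_containers=4,
        expand_new_thoughts=True,
        collapse_completed_thoughts=True,
    )

    prompt = st.text_input("Ask the agent something")
    if prompt:
        # `agent_executor` stands in for an agent you have already built
        response = agent_executor.invoke(
            {"input": prompt}, config={"callbacks": [st_callback]}
        )
        st.write(response["output"])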
Callback Handler that logs to Aim.
Callback Handler that logs into Argilla.
Callback Handler that logs to Arize.
Callback Handler that logs to Arthur platform.
Arthur helps enterprise teams optimize model operations and performance at scale. The Arthur API tracks model performance, explainability, and fairness across tabular, NLP, and CV models. It is model- and platform-agnostic, and scales with complex and dynamic enterprise needs. To learn more about Arthur, visit https://www.arthur.ai/ or read the Arthur docs at https://docs.arthur.ai/
Callback Handler that logs to ClearML.
Callback Handler that logs to Comet.
Callback Handler that records transcripts to the Context service.
Callback handler that is used within a Flyte task.
Callback for manually validating values.
Callback Handler that logs to Infino.
Label Studio callback handler. Provides the ability to send predictions to Label Studio for human evaluation, feedback and annotation.
Callback Handler for LLMonitor.
Parameters:

- `app_id`: The app id of the app you want to report to. Defaults to
  `None`, which means that the `LLMONITOR_APP_ID` environment variable will be used.
- `api_url`: The url of the LLMonitor API. Defaults to `None`,
  which means that either the `LLMONITOR_API_URL` environment variable
  or `https://app.llmonitor.com` will be used.

Raises:

- `ValueError`: if `app_id` is not provided either as an
  argument or as an environment variable.
- `ConnectionError`: if the connection to the API fails.
Usage:

.. code-block:: python

    from langchain_community.llms import OpenAI
    from langchain_community.callbacks import LLMonitorCallbackHandler

    llmonitor_callback = LLMonitorCallbackHandler()
    llm = OpenAI(callbacks=[llmonitor_callback],
                 metadata={"userId": "user-123"})
    llm.invoke("Hello, how are you?")

Callback Handler that logs metrics and artifacts to an MLflow server.
Callback Handler that tracks OpenAI info.
Callback handler for PromptLayer.
Callback Handler that logs prompt artifacts and metrics to SageMaker Experiments.
Generates markdown labels for LLMThought containers. Pass a custom subclass of this to StreamlitCallbackHandler to override its default labeling logic.
Callback handler for Trubrics.
Upstash Ratelimit Error
Raised when the rate limit is reached in UpstashRatelimitHandler.
Callback to handle rate limiting based on the number of requests or the number of tokens in the input.
It uses Upstash Ratelimit, which stores its state in Upstash Redis, to track usage.
The handler should not be passed to the chain when the chain is initialised, because it keeps state that must be fresh for every invocation. Instead, create and pass a new handler each time you invoke, as in the sketch below.
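A minimal sketch (assumes the upstash-ratelimit and upstash-redis packages and the UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN environment variables; `chain` and the user id are hypothetical):

.. code-block:: python

    from upstash_ratelimit import FixedWindow, Ratelimit
    from upstash_redis import Redis

    from langchain_community.callbacks import (
        UpstashRatelimitError,
        UpstashRatelimitHandler,
    )

    # Allow 10 chain invocations per 10-second window, tracked in Upstash Redis
    ratelimit = Ratelimit(
        redis=Redis.from_env(),
        limiter=FixedWindow(max_requests=10, window=10),
    )

    # Create a fresh handler for every invoke, keyed on the caller's identity
    handler = UpstashRatelimitHandler(
        identifier="user-123", request_ratelimit=ratelimit
    )

    try:
        chain.invoke("hello", config={"callbacks": [handler]})
    except UpstashRatelimitError:
        print("Rate limit reached for this user")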
Callback Handler that logs evaluation results to UpTrain and the console.
Callback Handler that logs to Weights & Biases.
Callback Handler for logging to WhyLabs. This callback handler uses LangKit to extract features from the prompts and responses when interacting with an LLM. These features can be used to guardrail, evaluate, and observe interactions over time to detect issues relating to hallucinations, prompt engineering, or output validation. LangKit is an LLM monitoring toolkit developed by WhyLabs.
Examples of what can be monitored with LangKit include the sentiment, toxicity, and theme analyses controlled by the flags below.
For more information, see https://docs.whylabs.ai/docs/language-model-monitoring or check out the LangKit repo here: https://github.com/whylabs/langkit
Args:
    api_key (Optional[str]): WhyLabs API key. Optional because the preferred
        way to specify the API key is with the environment variable
        WHYLABS_API_KEY.
    org_id (Optional[str]): WhyLabs organization id to write profiles to.
        Optional because the preferred way to specify the organization id is
        with the environment variable WHYLABS_DEFAULT_ORG_ID.
    dataset_id (Optional[str]): WhyLabs dataset id to write profiles to.
        Optional because the preferred way to specify the dataset id is with
        the environment variable WHYLABS_DEFAULT_DATASET_ID.
    sentiment (bool): Whether to enable sentiment analysis. Defaults to False.
    toxicity (bool): Whether to enable toxicity analysis. Defaults to False.
    themes (bool): Whether to enable theme analysis. Defaults to False.
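A minimal sketch (assumes the WHYLABS_* environment variables above are set and that the langkit package is installed; the from_params constructor and the closing flush follow the integration's documented usage):

.. code-block:: python

    from langchain_community.callbacks import WhyLabsCallbackHandler
    from langchain_community.llms import OpenAI

    # Credentials are read from WHYLABS_API_KEY, WHYLABS_DEFAULT_ORG_ID,
    # and WHYLABS_DEFAULT_DATASET_ID
    whylabs = WhyLabsCallbackHandler.from_params()

    llm = OpenAI(temperature=0, callbacks=[whylabs])
    llm.invoke("Hello, how are you?")

    # Flush any remaining telemetry to WhyLabs
    whylabs.close()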
Rate-limiting handler to limit requests or tokens.
FlyteKit callback handler.
Callback handler for Context AI.
Callback Handler that prints to stdout.
UpTrain Callback Handler
UpTrain is an open-source platform to evaluate and improve LLM applications. It provides grades for 20+ preconfigured checks (covering language, code, and embedding use cases), performs root-cause analysis on failure cases, and provides guidance for resolving them.
This module contains a callback handler for integrating UpTrain seamlessly into your pipeline and facilitating diverse evaluations. The callback handler automates various evaluations to assess the performance and effectiveness of the components within the pipeline.
The evaluations conducted include:

- RAG: checks on the retrieved context and the generated response.
- Multi Query Generation: MultiQueryRetriever generates multiple variants of a question with similar meanings to the original question. This evaluation includes the previous RAG assessments and adds a check on the generated query variants.
- Context Compression and Reranking: re-ranking involves reordering nodes based on relevance to the query and selecting the top n nodes. Because re-ranking can reduce the number of nodes, further evaluations are performed in addition to the RAG evaluations.

These evaluations collectively ensure the robustness and effectiveness of the RAG query engine, the MultiQueryRetriever, and the re-ranking process within the pipeline.
Useful links:

- GitHub: https://github.com/uptrain-ai/uptrain
- Website: https://uptrain.ai/
- Docs: https://docs.uptrain.ai/getting-started/introduction
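A minimal sketch of wiring the handler into a pipeline (assumes an already-built RAG `chain`; the key_type/api_key arguments follow the integration's documented usage, where "openai" means evaluations run against your own OpenAI key):

.. code-block:: python

    from langchain_community.callbacks.uptrain_callback import (
        UpTrainCallbackHandler,
    )

    # "openai": evaluations are run using your OpenAI API key
    uptrain_callback = UpTrainCallbackHandler(
        key_type="openai", api_key="sk-..."  # placeholder key
    )
    config = {"callbacks": [uptrain_callback]}

    # `chain` is a hypothetical RAG runnable; evaluation results are logged
    # to the console (and to UpTrain) as the chain executes
    chain.invoke("What did the author say about retrieval?", config=config)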
ArthurAI's Callback Handler.
Tracers that record execution of LangChain runs.