Fake chat models for testing purposes.
Async callback manager for LLM run.
Callback manager for LLM run.
Base class for chat models.
Simplified implementation for a chat model to inherit from.
This implementation is primarily here for backwards compatibility. For new
implementations, please use BaseChatModel directly.
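A minimal sketch of the inheritance relationship described above: the full base class expects a structured result, while the simplified base only asks subclasses for a plain string. The class and method names here (`BaseChatModelSketch`, `_generate`, `_call`) mirror the LangChain interface but this is an illustration, not the real implementation.

```python
# Illustrative sketch only; the real classes live in
# langchain_core.language_models.
from abc import ABC, abstractmethod


class BaseChatModelSketch(ABC):
    """Full interface: subclasses produce a structured result."""

    @abstractmethod
    def _generate(self, messages: list[str]) -> dict:
        """Return a structured result (stand-in for ChatResult)."""

    def invoke(self, messages: list[str]) -> dict:
        return self._generate(messages)


class SimpleChatModelSketch(BaseChatModelSketch):
    """Simplified base: subclasses only return a response string."""

    def _generate(self, messages: list[str]) -> dict:
        text = self._call(messages)
        return {"message": {"role": "ai", "content": text}}

    @abstractmethod
    def _call(self, messages: list[str]) -> str:
        """Return the response text only."""


class EchoModel(SimpleChatModelSketch):
    def _call(self, messages: list[str]) -> str:
        return f"echo: {messages[-1]}"


result = EchoModel().invoke(["hi"])
```

A subclass of the simplified base implements one string-returning method and inherits the structured plumbing, which is why the simplified class survives mainly for backwards compatibility.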
Message from an AI.
An AIMessage is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both the raw output as returned by the model and standardized fields (e.g., tool calls, usage metadata) added by the LangChain framework.
Message chunk from an AI (yielded when streaming).
Base abstract message class.
Messages are the inputs and outputs of a chat model.
Examples include HumanMessage, AIMessage, and SystemMessage.
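The message hierarchy above can be sketched with plain dataclasses. This is an assumption-laden illustration (the real classes are in `langchain_core.messages` and carry more fields); it only shows the shape: a common base with `content`, plus AI-specific standardized fields such as tool calls and usage metadata.

```python
# Illustrative sketch of the message hierarchy; not the LangChain classes.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class BaseMessageSketch:
    content: str
    type: str = "base"


@dataclass
class HumanMessageSketch(BaseMessageSketch):
    type: str = "human"


@dataclass
class SystemMessageSketch(BaseMessageSketch):
    type: str = "system"


@dataclass
class AIMessageSketch(BaseMessageSketch):
    type: str = "ai"
    # Standardized fields added by the framework, per the AIMessage docs:
    tool_calls: list = field(default_factory=list)
    usage_metadata: Optional[dict] = None


convo = [
    SystemMessageSketch("You are helpful."),
    HumanMessageSketch("Hello!"),
    AIMessageSketch("Hi there!", usage_metadata={"total_tokens": 12}),
]
```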
A single chat generation output.
A subclass of Generation that represents the response from a chat model that
generates chat messages.
The message attribute is a structured representation of the chat message. Most of
the time, the message will be of type AIMessage.
Users working with chat models will usually access information via either
AIMessage (returned from runnable interfaces) or LLMResult (available via
callbacks).
ChatGeneration chunk.
ChatGeneration chunks can be concatenated with other ChatGeneration chunks.
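The concatenation behavior described above can be sketched with a tiny class supporting `+`, so streamed pieces merge into one generation. This mimics how chunk concatenation works in spirit; it is not the LangChain `ChatGenerationChunk` implementation.

```python
# Illustrative sketch of chunk concatenation via __add__.
from dataclasses import dataclass


@dataclass
class ChunkSketch:
    text: str

    def __add__(self, other: "ChunkSketch") -> "ChunkSketch":
        # Concatenating two chunks yields a new, larger chunk.
        return ChunkSketch(self.text + other.text)


stream = [ChunkSketch("Hel"), ChunkSketch("lo "), ChunkSketch("world")]
full = stream[0]
for chunk in stream[1:]:
    full = full + chunk
```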
Used to represent the result of a chat model call with a single prompt.
This container is used internally by some chat model implementations; it will
eventually be mapped to a more general LLMResult object and then projected into
an AIMessage object.
LangChain users working with chat models will usually access information via
AIMessage (returned from runnable interfaces) or LLMResult (available via
callbacks). Please refer to the AIMessage and LLMResult schema documentation
for more information.
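A sketch of the container-to-message projection described above. The field names (`generations`, `llm_output`) mirror the LangChain schema, but the dataclasses here are stand-ins for illustration only.

```python
# Illustrative sketch of a ChatResult-like container projected down to the
# final AI message; not the real LangChain implementation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GenerationSketch:
    message: dict  # stand-in for an AIMessage


@dataclass
class ChatResultSketch:
    generations: list
    llm_output: Optional[dict] = None


result = ChatResultSketch(
    generations=[GenerationSketch(message={"role": "ai", "content": "42"})],
    llm_output={"model_name": "fake-model"},
)
# What a caller ultimately sees is the message of the first generation:
ai_message = result.generations[0].message
```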
Configuration for a Runnable.
Custom values
The TypedDict has total=False set intentionally, so every field is optional.
Configs are propagated via merge_configs and var_child_runnable_config (a
ContextVar that automatically passes config down the call stack without
explicit parameter passing), where configs are merged rather than replaced:

# Parent sets tags
chain.invoke(input, config={"tags": ["parent"]})
# Child automatically inherits and can add:
# ensure_config({"tags": ["child"]}) -> {"tags": ["parent", "child"]}

Fake chat model for testing purposes.
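A minimal sketch of the merge-rather-than-replace semantics described for RunnableConfig: list-valued keys like "tags" concatenate and dict-valued keys merge. This is an illustration under those stated assumptions, not the actual merge_configs implementation.

```python
# Illustrative merge: lists concatenate, dicts merge, scalars overwrite.
# Not the real langchain_core merge_configs.
def merge_configs_sketch(parent: dict, child: dict) -> dict:
    merged = dict(parent)
    for key, value in child.items():
        if isinstance(value, list):
            merged[key] = merged.get(key, []) + value
        elif isinstance(value, dict):
            merged[key] = {**merged.get(key, {}), **value}
        else:
            merged[key] = value
    return merged


parent = {"tags": ["parent"], "metadata": {"run": 1}}
child = {"tags": ["child"], "metadata": {"step": "a"}}
combined = merge_configs_sketch(parent, child)
```

The child's tags extend the parent's rather than replacing them, matching the ensure_config example above.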
Fake error for testing purposes.
Fake chat model for testing purposes.
Fake Chat Model wrapper for testing purposes.
Generic fake chat model that can be used to test the chat model interface.
Invokes on_llm_new_token to allow for testing of callback-related code for new
tokens.
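A sketch of how such a fake might work: it cycles through canned responses and fires an on_llm_new_token-style callback per token, which is what makes callback-related code testable. This is plain Python for illustration; the real class is LangChain's GenericFakeChatModel, and the whitespace "tokenizer" here is an assumption.

```python
# Illustrative fake chat model; not the langchain_core implementation.
from collections.abc import Callable
from itertools import cycle


class FakeChatModelSketch:
    def __init__(self, responses: list):
        # Cycle so tests can call the model more times than responses given.
        self._responses = cycle(responses)

    def stream(self, on_new_token: Callable[[str], None]) -> str:
        response = next(self._responses)
        for token in response.split():  # naive whitespace "tokenizer"
            on_new_token(token)  # stand-in for on_llm_new_token
        return response


seen: list = []
model = FakeChatModelSketch(["hello world", "second reply"])
first = model.stream(seen.append)
```

A test can then assert both on the returned response and on the sequence of tokens the callback observed.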