langchain_core.language_models.llms · Module · Since v0.1

    llms

Base interface exposed by traditional large language models (LLMs).

These are typically older models; newer models are generally chat models.

    Attributes

    attribute
    logger

    Functions

    function
    get_llm_cache

    Get the value of the llm_cache global setting.

    function
    dumpd

    Return a dict representation of an object.

    function
    convert_to_messages

Convert a sequence of message-like objects (e.g. BaseMessages, dicts, strings, or (role, content) tuples) to a list of BaseMessage objects.
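
A minimal sketch (assuming convert_to_messages is imported from langchain_core.messages):

from langchain_core.messages import convert_to_messages

# (role, content) tuples are coerced to the matching message classes.
msgs = convert_to_messages([("system", "You are terse."), ("human", "Hi")])
# -> [SystemMessage(content="You are terse."), HumanMessage(content="Hi")]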

    function
    ensure_config

    Ensure that a config is a dict with all keys present.

    function
    get_config_list

    Get a list of configs from a single config or a list of configs.

    It is useful for subclasses overriding batch() or abatch().

    function
    run_in_executor

    Run a function in an executor.

    function
    create_base_retry_decorator

Create a retry decorator for a given LLM and a provided list of error types.

    function
    get_prompts

    Get prompts that are already cached.

    function
    aget_prompts

    Get prompts that are already cached. Async version.

    function
    update_cache

    Update the cache and get the LLM output.

    function
    aupdate_cache

    Update the cache and get the LLM output. Async version.

    Classes

    class
    BaseCache

    Interface for a caching layer for LLMs and Chat models.

    The cache interface consists of the following methods:

    • lookup: Look up a value based on a prompt and llm_string.
    • update: Update the cache based on a prompt and llm_string.
    • clear: Clear the cache.

    In addition, the cache interface provides an async version of each method.

    The default implementation of the async methods is to run the synchronous method in an executor. It's recommended to override the async methods and provide async implementations to avoid unnecessary overhead.
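
A minimal in-memory sketch of this interface (assuming the langchain_core.caches import path; langchain_core also ships an InMemoryCache that works much like this):

from typing import Optional

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache

class DictCache(BaseCache):
    """Toy cache keyed by (prompt, llm_string)."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        # Return the cached generations for this prompt/model pair, or None on a miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs) -> None:
        self._store.clear()

The inherited async methods (alookup, aupdate, aclear) fall back to running these synchronous methods in an executor.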

    class
    AsyncCallbackManager

    Async callback manager that handles callbacks from LangChain.

    class
    AsyncCallbackManagerForLLMRun

    Async callback manager for LLM run.

    class
    BaseCallbackManager

    Base callback manager.

    class
    CallbackManager

    Callback manager for LangChain.

    class
    CallbackManagerForLLMRun

    Callback manager for LLM run.

    class
    BaseLanguageModel

    Abstract base class for interfacing with language models.

All language model wrappers inherit from BaseLanguageModel.

    class
    LangSmithParams

    LangSmith parameters for tracing.

    class
    Generation

    A single text generation output.

    Generation represents the response from an "old-fashioned" LLM (string-in, string-out) that generates regular text (not chat messages).

This model is used internally by chat models and is eventually mapped to a more general LLMResult object, then projected into an AIMessage object.

    LangChain users working with chat models will usually access information via AIMessage (returned from runnable interfaces) or LLMResult (available via callbacks). Please refer to AIMessage and LLMResult for more information.

    class
    GenerationChunk

A Generation chunk that can be concatenated with other GenerationChunk objects.
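
For example, a sketch of concatenating streamed chunks (assuming GenerationChunk is imported from langchain_core.outputs):

from langchain_core.outputs import GenerationChunk

# Chunks emitted during streaming support "+" for incremental assembly.
chunk = GenerationChunk(text="Hel") + GenerationChunk(text="lo")
chunk.text  # "Hello"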

    class
    LLMResult

    A container for results of an LLM call.

    Both chat models and LLMs generate an LLMResult object. This object contains the generated outputs and any additional information that the model provider wants to return.
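
The generations field is a list of lists: one inner list of candidate Generations per input prompt. A minimal sketch (assuming imports from langchain_core.outputs):

from langchain_core.outputs import Generation, LLMResult

result = LLMResult(generations=[[Generation(text="Paris")]])
result.generations[0][0].text  # "Paris" (first candidate for the first prompt)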

    class
    RunInfo

    Class that contains metadata for a single execution of a chain or model.

    Defined for backwards compatibility with older versions of langchain_core.

Users can obtain the run_id from callbacks or from the astream_events API, depending on the use case.

    class
    ChatPromptValue

    Chat prompt value.

A prompt value built from messages.

    class
    PromptValue

    Base abstract class for inputs to any language model.

    PromptValues can be converted to both LLM (pure text-generation) inputs and chat model inputs.
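
For instance, a sketch using StringPromptValue (assuming the langchain_core.prompt_values import path):

from langchain_core.prompt_values import StringPromptValue

pv = StringPromptValue(text="Tell me a joke")
pv.to_string()    # LLM input: "Tell me a joke"
pv.to_messages()  # chat model input: [HumanMessage(content="Tell me a joke")]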

    class
    StringPromptValue

    String prompt value.

    class
    RunnableConfig

    Configuration for a Runnable.

    Note

    Custom values

    The TypedDict has total=False set intentionally to:

    • Allow partial configs to be created and merged together via merge_configs
    • Support config propagation from parent to child runnables via var_child_runnable_config (a ContextVar that automatically passes config down the call stack without explicit parameter passing), where configs are merged rather than replaced
    Example
    # Parent sets tags
    chain.invoke(input, config={"tags": ["parent"]})
    # Child automatically inherits and can add:
    # ensure_config({"tags": ["child"]}) -> {"tags": ["parent", "child"]}
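
The same merge semantics can be exercised directly with merge_configs, a sketch assuming the langchain_core.runnables.config import path:

from langchain_core.runnables.config import merge_configs

merged = merge_configs({"tags": ["parent"]}, {"tags": ["child"]})
# merged["tags"] contains both "parent" and "child": tags are combined
# (deduplicated) rather than replaced, while other keys merge key-by-key.
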
    class
    BaseLLM

    Base LLM abstract interface.

    It should take in a prompt and return a string.

    class
    LLM

    Simple interface for implementing a custom LLM.

You should subclass this class and implement the following (a sketch follows these lists):

    • _call method: Run the LLM on the given prompt and input (used by invoke).
• _identifying_params property: Return a dictionary of identifying parameters. This is critical for caching and tracing; the identifying parameters are a dict that identifies the LLM and should at minimum include a model_name.

    Optional: Override the following methods to provide more optimizations:

• _acall: Provide a native async version of the _call method. If not provided, it will delegate to the synchronous version using run_in_executor. (Used by ainvoke.)
• _stream: Stream the LLM on the given prompt and input. stream will use _stream if provided; otherwise it will use _call and the output will arrive in a single chunk.
• _astream: Override to provide a native async version of the _stream method. astream will use _astream if provided; otherwise it falls back to _stream if that is implemented, and to _acall if it is not.
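
A minimal sketch of such a subclass (the EchoLLM name is hypothetical; note that the _llm_type property, a string identifier used in serialization and tracing, must also be implemented):

from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM

class EchoLLM(LLM):
    """Hypothetical LLM that returns the prompt unchanged."""

    model_name: str = "echo-v1"

    @property
    def _llm_type(self) -> str:
        return "echo"

    @property
    def _identifying_params(self) -> dict[str, Any]:
        # Identifies the model for caching and tracing.
        return {"model_name": self.model_name}

    def _call(
        self,
        prompt: str,
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        return prompt

EchoLLM().invoke("hello")  # "hello"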

    Type Aliases

    typeAlias
    Callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None
    typeAlias
    LanguageModelInput

    Input to a language model.
