LangChain Reference
    langchain-core · language_models
    Module · Since v0.1

    language_models

    Core language model abstractions.

    LangChain has two main classes for working with language models: chat models and "old-fashioned" LLMs (string-in, string-out).

    Chat models

    Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).

    Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.

    The key abstraction for chat models is BaseChatModel. Implementations should inherit from this class.

    See existing chat model integrations.

    LLMs (legacy)

    Language models that take a string as input and return a string.

    These are traditionally older models (newer models are generally chat models).

    Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This gives them the same interface as chat models. When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.

    Attributes

    attribute
    LanguageModelLike: Runnable[LanguageModelInput, LanguageModelOutput]

    Input/output interface for a language model.

    attribute
    ModelProfileRegistry: dict[str, ModelProfile]

    Registry mapping model identifiers or names to their ModelProfile.

    Functions

    function
    import_attr

    Import an attribute from a module located in a package.

    This utility function is used in custom __getattr__ methods within __init__.py files to dynamically import attributes.

    function
    is_openai_data_block

    Check whether a block contains multimodal data in OpenAI Chat Completions format.

    Supports both data and ID-style blocks (e.g. 'file_data' and 'file_id').

    If additional keys are present, they are ignored and do not affect the outcome, as long as the required keys are present and valid.

    function
    get_tokenizer

    Get a GPT-2 tokenizer instance.

    This function is cached to avoid re-loading the tokenizer every time it is called.
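The caching described here follows the standard memoization pattern; a minimal stand-in sketch using functools.lru_cache (the loader below is hypothetical, not the real GPT-2 tokenizer load):

```python
from functools import lru_cache


@lru_cache(maxsize=1)  # cache the single expensive load
def get_expensive_resource() -> dict:
    # Stand-in for loading a tokenizer from disk or the network.
    print("loading...")  # runs only on the first call
    return {"vocab_size": 50257}


a = get_expensive_resource()
b = get_expensive_resource()  # served from cache; no second load
assert a is b
```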

    Classes

    class
    BaseLanguageModel

    Abstract base class for interfacing with language models.

    All language model wrappers inherit from BaseLanguageModel.

    class
    LangSmithParams

    LangSmith parameters for tracing.

    class
    BaseChatModel

    Base class for chat models.

    class
    SimpleChatModel

    Simplified implementation for a chat model to inherit from.

    Note

    This implementation is primarily here for backwards compatibility. For new implementations, please use BaseChatModel directly.

    class
    FakeListLLM

    Fake LLM for testing purposes.

    class
    FakeStreamingListLLM

    Fake streaming list LLM for testing purposes.

    An LLM that will return responses from a list in order.

    This model also supports optionally sleeping between successive chunks in a streaming implementation.

    class
    FakeListChatModel

    Fake chat model for testing purposes.

    class
    FakeMessagesListChatModel

    Fake chat model for testing purposes.

    class
    GenericFakeChatModel

    Generic fake chat model that can be used to test the chat model interface.

    • Chat model should be usable in both sync and async tests
    • Invokes on_llm_new_token to allow for testing of callback-related code for new tokens.
    • Includes logic to break messages into message chunks to facilitate testing of streaming.
    class
    ParrotFakeChatModel

    Generic fake chat model that can be used to test the chat model interface.

    • Chat model should be usable in both sync and async tests
    class
    LLM

    Simple interface for implementing a custom LLM.

    You should subclass this class and implement the following:

    • _call method: Run the LLM on the given prompt and input (used by invoke).
    • _identifying_params property: Return a dictionary of the identifying parameters. This is critical for caching and tracing purposes. The identifying parameters are a dict that identifies the LLM; it should mostly include a model_name.

    Optional: Override the following methods to provide more optimizations:

    • _acall: Provide a native async version of the _call method. If not provided, it will delegate to the synchronous version using run_in_executor. (Used by ainvoke.)
    • _stream: Stream the LLM on the given prompt and input. stream will use _stream if provided; otherwise it will use _call and the output will arrive in one chunk.
    • _astream: Override to provide a native async version of the _stream method. astream will use _astream if provided; otherwise it falls back to _stream if that is implemented, and to _acall if it is not.
    class
    BaseLLM

    Base LLM abstract interface.

    It should take in a prompt and return a string.

    class
    ModelProfile

    Model profile.

    Beta feature

    This is a beta feature. The format of model profiles is subject to change.

    Provides information about chat model capabilities, such as context window sizes and supported features.

    Type Aliases

    typeAlias
    LanguageModelInput

    Input to a language model.

    typeAlias
    LanguageModelOutput

    Output from a language model.

    Modules

    module
    fake_chat_models

    Fake chat models for testing purposes.

    module
    llms

    Base interface for traditional large language models (LLMs) to expose.

    These are traditionally older models (newer models generally are chat models).

    module
    fake

    Fake LLMs for testing purposes.

    module
    chat_models

    Chat models for conversational AI.

    module
    model_profile

    Model profile types and utilities.

    module
    base

    Base language models class.
