LangChain Reference
    Python · langchain-core · langchain_core.language_models.base · BaseLanguageModel
    Class · Since v0.1

    BaseLanguageModel

    Abstract base class for interfacing with language models.

    All language model wrappers inherit from BaseLanguageModel.

    BaseLanguageModel(
        self,
        *args: Any,
        **kwargs: Any,
    )

    Bases

    RunnableSerializable[LanguageModelInput, LanguageModelOutputVar]
    ABC

    Attributes

    attribute
    cache: BaseCache | bool | None

    Whether to cache the response.

    • If True, use the global cache.
    • If False, do not use a cache.
    • If None, use the global cache if it is set, otherwise no cache.
    • If an instance of BaseCache, use the provided cache.

    Caching is not currently supported for streaming methods of models.
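The four-way `cache` setting above can be summarized as a small resolution function. This is a minimal illustrative sketch, not LangChain code: `GLOBAL_CACHE`, `resolve_cache`, and the stand-in `BaseCache` class are hypothetical names (the real global cache is configured via `langchain_core`'s cache utilities).

```python
from typing import Optional


class BaseCache:
    """Stand-in for langchain_core.caches.BaseCache (illustrative only)."""

    def __init__(self) -> None:
        self.store: dict = {}


# Hypothetical global cache slot; in LangChain this is set globally.
GLOBAL_CACHE: Optional[BaseCache] = BaseCache()


def resolve_cache(cache) -> Optional[BaseCache]:
    """Mirror the documented resolution rules for the `cache` attribute."""
    if isinstance(cache, BaseCache):
        return cache           # explicit instance: use the provided cache
    if cache is True:
        if GLOBAL_CACHE is None:
            raise ValueError("cache=True but no global cache is configured")
        return GLOBAL_CACHE    # True: require the global cache
    if cache is False:
        return None            # False: caching disabled
    return GLOBAL_CACHE        # None: global cache if set, otherwise none
```

Note that `None` and `False` differ only when a global cache exists: `None` opts in to it, `False` opts out unconditionally.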

    attribute
    verbose: bool

    Whether to print out response text.

    attribute
    callbacks: Callbacks

    Callbacks to add to the run trace.

    attribute
    tags: list[str] | None

    Tags to add to the run trace.

    attribute
    metadata: dict[str, Any] | None

    Metadata to add to the run trace.

    attribute
    custom_get_token_ids: Callable[[str], list[int]] | None

    Optional encoder to use for counting tokens.
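Any callable mapping a string to a list of token IDs can serve as this encoder. Below is a toy whitespace-based encoder as a shape example; `whitespace_token_ids` is a hypothetical name, and a real deployment would pass a model-specific subword tokenizer instead.

```python
from typing import Callable


def whitespace_token_ids(text: str) -> list[int]:
    """Toy encoder: one ID per whitespace-separated word, assigned in order
    of first appearance. Illustrates the Callable[[str], list[int]] shape;
    real models use subword tokenizers with far finer granularity."""
    vocab: dict[str, int] = {}
    ids: list[int] = []
    for word in text.split():
        ids.append(vocab.setdefault(word, len(vocab)))
    return ids


# The attribute accepts any callable with this signature.
custom_get_token_ids: Callable[[str], list[int]] = whitespace_token_ids
```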

    attribute
    model_config
    attribute
    InputType: TypeAlias

    Get the input type for this Runnable.

    Methods

    method
    set_verbose

    If verbose is None, set it.

    This allows users to pass in None as verbose to access the global setting.
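The fallback behavior described above amounts to a one-line rule. A minimal sketch, assuming a hypothetical `GLOBAL_VERBOSE` flag (the real setting lives in LangChain's global configuration):

```python
# Hypothetical stand-in for LangChain's global verbosity setting.
GLOBAL_VERBOSE = False


def set_verbose(verbose) -> bool:
    """If verbose is None, fall back to the global setting;
    otherwise honor the explicit per-model value."""
    return GLOBAL_VERBOSE if verbose is None else verbose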

    method
    generate_prompt

    Pass a sequence of prompts to the model and return model generations.

    This method should make use of batched calls for models that expose a batched API.

    Use this method when you want to:

    1. Take advantage of batched calls,
    2. Need more output from the model than just the top generated value,
    3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
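The shape of the result — one list of candidate generations per input prompt, produced by a single batched call — can be sketched as follows. This is an illustrative mock, not the LangChain implementation; `Generation` here is a stand-in dataclass and `generate_batch` is a hypothetical name.

```python
from dataclasses import dataclass


@dataclass
class Generation:
    """Stand-in for a single candidate completion."""
    text: str


def generate_batch(prompts: list[str], n: int = 2) -> list[list[Generation]]:
    """Toy batched generate: one call yields n candidates per prompt.
    A real model would issue a single batched API request here instead
    of one request per prompt."""
    return [
        [Generation(f"{p} -> candidate {i}") for i in range(n)]
        for p in prompts
    ]
```

The nested return type is the point of item 2 above: callers get every candidate per prompt, not just the top one.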
    method
    agenerate_prompt

    Asynchronously pass a sequence of prompts and return model generations.

    This method should make use of batched calls for models that expose a batched API.

    Use this method when you want to:

    1. Take advantage of batched calls,
    2. Need more output from the model than just the top generated value,
    3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
    method
    with_structured_output

    Not implemented on this class.

    method
    get_token_ids

    Return the ordered IDs of the tokens in a text.

    method
    get_num_tokens

    Get the number of tokens present in the text.

    Useful for checking if an input fits in a model's context window.

    This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
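A typical context-window check built on a token counter looks like this. It is a sketch under stated assumptions: `fits_in_context` and `word_count` are hypothetical helpers, and the word-splitting counter stands in for a model's real `get_num_tokens`.

```python
def fits_in_context(text: str, context_window: int,
                    max_output_tokens: int, count_tokens) -> bool:
    """True if the prompt plus the reserved output budget fits the window."""
    return count_tokens(text) + max_output_tokens <= context_window


# Toy counter: one token per word (a real check would call the
# model's get_num_tokens for an accurate count).
def word_count(text: str) -> int:
    return len(text.split())
```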

    method
    get_num_tokens_from_messages

    Get the number of tokens in the messages.

    Useful for checking if an input fits in a model's context window.

    This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.

    Note
    • The base implementation of get_num_tokens_from_messages ignores tool schemas.
    • The base implementation of get_num_tokens_from_messages adds prefixes to messages to represent user roles, which adds to the overall token count. Model-specific implementations may choose to handle this differently.
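The role-prefix effect noted above can be demonstrated with a toy counter. This is a sketch of the behavior, not LangChain's implementation: `count_message_tokens`, the `"Role: content"` rendering, and `word_count` are all illustrative assumptions.

```python
def count_message_tokens(messages: list[tuple[str, str]],
                         count_tokens) -> int:
    """Toy version of the base behavior: render each (role, content)
    message with a role prefix before counting, so the prefix tokens
    inflate the total relative to counting the content alone."""
    rendered = "\n".join(
        f"{role.capitalize()}: {content}" for role, content in messages
    )
    return count_tokens(rendered)


def word_count(text: str) -> int:
    """Toy counter: one token per whitespace-separated word."""
    return len(text.split())
```

With `[("human", "hi there"), ("ai", "hello")]` the rendered text carries two extra prefix tokens compared with the raw contents, so the count comes out higher than a content-only count would.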

    Inherited from RunnableSerializable

    Attributes

    attribute
    name: str

    The name of the Runnable.

    Methods

    method
    to_json

    Convert the graph to a JSON-serializable format.

    method
    configurable_fields

    method
    configurable_alternatives

    Configure alternatives for Runnable objects that can be set at runtime.

    Inherited from Serializable

    Attributes

    attribute
    lc_secrets: dict[str, str]

    A map of constructor argument names to secret ids.

    attribute
    lc_attributes: dict

    List of attribute names that should be included in the serialized kwargs.

    Methods

    method
    is_lc_serializable

    Return True as this class is serializable.

    method
    get_lc_namespace

    Get the namespace of the LangChain object.

    method
    lc_id

    Return a unique identifier for this class for serialization purposes.

    method
    to_json

    Convert the graph to a JSON-serializable format.

    method
    to_json_not_implemented

    Serialize a "not implemented" object.

    Inherited from Runnable

    Attributes

    attribute
    name: str

    The name of the Runnable.

    attribute
    OutputType: Any

    attribute
    input_schema: type[BaseModel]

    The type of input this Runnable accepts, specified as a Pydantic model.

    attribute
    output_schema: type[BaseModel]

    Output schema.

    attribute
    config_specs: list[ConfigurableFieldSpec]

    Methods

    method
    get_name

    method
    get_input_schema

    method
    get_input_jsonschema

    Get a JSON schema that represents the input to the Runnable.

    method
    get_output_schema

    method
    get_output_jsonschema

    Get a JSON schema that represents the output of the Runnable.

    method
    config_schema

    The type of config this Runnable accepts, specified as a Pydantic model.

    method
    get_config_jsonschema

    Get a JSON schema that represents the config of the Runnable.

    method
    get_graph

    method
    get_prompts

    Return a list of prompts used by this Runnable.

    method
    pipe

    Pipe Runnable objects.

    method
    pick

    Pick keys from the output dict of this Runnable.

    method
    assign

    Merge the Dict input with the output produced by the mapping argument.

    method
    invoke

    Transform a single input into an output.

    method
    ainvoke

    Asynchronously transform a single input into an output.

    method
    batch

    method
    batch_as_completed

    Run invoke in parallel on a list of inputs.

    method
    abatch

    method
    abatch_as_completed

    Run ainvoke in parallel on a list of inputs.

    method
    stream

    method
    astream

    method
    astream_log

    Stream all output from a Runnable, as reported to the callback system.

    method
    astream_events

    Generate a stream of events.

    method
    transform

    method
    atransform

    method
    bind

    Bind arguments to a Runnable, returning a new Runnable.

    method
    with_config

    method
    with_listeners

    Bind lifecycle listeners to a Runnable, returning a new Runnable.

    method
    with_alisteners

    Bind async lifecycle listeners to a Runnable.

    method
    with_types

    Bind input and output types to a Runnable, returning a new Runnable.

    method
    with_retry

    Create a new Runnable that retries the original Runnable on exceptions.

    method
    map

    Return a new Runnable that maps a list of inputs to a list of outputs.

    method
    with_fallbacks

    Add fallbacks to a Runnable, returning a new Runnable.

    method
    as_tool

    Create a BaseTool from a Runnable.

    View source on GitHub