LangChain Reference
langchain_core.language_models.base.BaseLanguageModel.get_num_tokens_from_messages
Method · Since v0.1

    get_num_tokens_from_messages

    Get the number of tokens in the messages.

    Useful for checking if an input fits in a model's context window.

Subclasses should override this method to provide accurate token counts using a model-specific tokenizer.

Note
• The base implementation of get_num_tokens_from_messages ignores tool schemas.
• The base implementation of get_num_tokens_from_messages prefixes each message with its role, which increases the overall token count. Model-specific implementations may handle this differently.
    get_num_tokens_from_messages(
      self,
      messages: list[BaseMessage],
      tools: Sequence | None = None
    ) -> int

    Parameters

messages : list[BaseMessage] (required)
    The message inputs to tokenize.

tools : Sequence | None, default None
    If provided, a sequence of dict, BaseModel, function, or BaseTool objects to be converted to tool schemas.
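To illustrate the role-prefix behavior described in the note above, here is a minimal, self-contained sketch. This is not langchain-core's actual implementation: the Message class, the role-prefix format, and the whitespace tokenizer are all simplifying assumptions. A real override would use a model-specific tokenizer (e.g. tiktoken) instead of str.split.

    from dataclasses import dataclass

    @dataclass
    class Message:
        role: str     # e.g. "system", "human", "ai"
        content: str

    def get_num_tokens_from_messages(messages, tokenize=str.split):
        # Prefix each message with its role before tokenizing; the prefixes
        # themselves consume tokens, which is why the base implementation's
        # counts include more than just the message content.
        text = "\n".join(f"{m.role}: {m.content}" for m in messages)
        return len(tokenize(text))

    msgs = [Message("system", "You are helpful."), Message("human", "Hi there!")]
    print(get_num_tokens_from_messages(msgs))  # 7: 5 content words + 2 role prefixes

A count like this can then be compared against the model's context window before sending the messages, which is the check the method is intended to support.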

    View source on GitHub