    InputTokenDetails

    Class in langchain_core.messages.ai · Since v0.3

    Breakdown of input token counts.

    The values do not need to sum to the full input token count, and not all keys need to be present.

    InputTokenDetails()

    Bases: TypedDict

    Example:

    {
        "audio": 10,
        "cache_creation": 200,
        "cache_read": 100,
    }

    May also hold extra provider-specific keys.
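
    Because InputTokenDetails is a TypedDict, a value is an ordinary dict at runtime, any subset of the keys may be supplied, and extra provider-specific keys are tolerated. Below is a minimal, hedged sketch of how such a value typically travels inside an AIMessage's usage metadata; the token totals are illustrative only, and whether cached tokens are counted inside input_tokens varies by provider.

    from langchain_core.messages import AIMessage
    from langchain_core.messages.ai import InputTokenDetails, UsageMetadata

    # Any subset of the documented keys is valid.
    details: InputTokenDetails = {"audio": 10, "cache_creation": 200, "cache_read": 100}

    usage: UsageMetadata = {
        "input_tokens": 350,  # illustrative totals only
        "output_tokens": 240,
        "total_tokens": 590,
        "input_token_details": details,
    }

    msg = AIMessage(content="...", usage_metadata=usage)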

    Constructors

    __init__

    Name            Type
    audio           int
    cache_creation  int
    cache_read      int
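
    Since the class is a TypedDict, the constructor above simply builds a plain dict: keyword arguments map to the keys listed, and any of them may be omitted. A brief sketch:

    from langchain_core.messages.ai import InputTokenDetails

    # Calling the TypedDict returns an ordinary dict; omitted keys are simply absent.
    details = InputTokenDetails(cache_creation=200, cache_read=100)
    assert details == {"cache_creation": 200, "cache_read": 100}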

    Attributes

    audio: int

    Audio input tokens.

    cache_creation: int

    Input tokens for which there was a cache miss. Because of the miss, the cache entry was created (written) from these tokens.

    cache_read: int

    Input tokens for which there was a cache hit. Because of the hit, the tokens were read from the cache; more precisely, the model state given these tokens was read from the cache.
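
    As a worked example of reading these fields back, the sketch below pulls the details out of a message's usage_metadata and derives a cache-hit ratio. This is an illustration only: whether a provider reports cache_creation and cache_read at all depends on the integration, and the helper name here is hypothetical, not part of the library.

    from typing import Optional

    from langchain_core.messages import AIMessage

    def cache_hit_ratio(msg: AIMessage) -> Optional[float]:
        """Fraction of cache-eligible input tokens that were served from the cache."""
        usage = msg.usage_metadata
        if not usage:
            return None
        details = usage.get("input_token_details", {})
        read = details.get("cache_read", 0)
        created = details.get("cache_creation", 0)
        total_cached = read + created
        return read / total_cached if total_cached else None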
