LangChain Reference
langchain-core › language_models › base › BaseLanguageModel › cache
Attribute · Since v0.1

    cache

    Whether to cache the response.

• If True, use the global cache.
• If False, do not use a cache.
• If None, use the global cache if it is set; otherwise, use no cache.
• If an instance of BaseCache, use the provided cache.

    Caching is not currently supported for streaming methods of models.

    cache: BaseCache | bool | None = Field(default=None, exclude=True)
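The four rules above can be sketched as a small resolution function. This is a hedged illustration of the documented behavior, not the actual langchain-core implementation: `resolve_cache` and the stand-in `BaseCache` class are hypothetical names introduced here for clarity.

```python
class BaseCache:
    """Stand-in for langchain_core.caches.BaseCache (illustration only)."""


def resolve_cache(cache, global_cache):
    """Apply the documented resolution rules for the `cache` field.

    `cache` is the per-model setting (BaseCache | bool | None);
    `global_cache` is whatever the global LLM cache is set to, or None.
    """
    if isinstance(cache, BaseCache):
        return cache          # explicit per-model cache takes priority
    if cache is True:
        return global_cache   # require the global cache
    if cache is False:
        return None           # caching disabled for this model
    return global_cache       # None: use the global cache if set, else no cache
```

For example, a model constructed with `cache=False` ignores any global cache, while `cache=None` (the default) silently picks up the global cache only when one has been configured.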