    langchain_core.globals · Function · Since v0.1

    set_llm_cache

    Set a new LLM cache, overwriting the previous value, if any.

    set_llm_cache(
        value: Optional[BaseCache],
    ) -> None

    Used in Docs

    • Astra DB integrations
    • Cassandra integrations
    • Couchbase integrations
    • CrateDB integrations
    • Momento integrations

    Parameters

    Name    Type                 Description
    value*  Optional[BaseCache]  The new LLM cache to use. If None, the LLM cache is disabled.
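
    Example

    A minimal usage sketch, assuming the built-in InMemoryCache from
    langchain_core.caches (any BaseCache implementation can be passed the
    same way):

    from langchain_core.caches import InMemoryCache
    from langchain_core.globals import set_llm_cache

    # Enable caching: repeated identical LLM calls are answered from
    # the in-memory cache instead of re-invoking the model.
    set_llm_cache(InMemoryCache())

    # Passing None disables the global LLM cache again.
    set_llm_cache(None)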
