LangChain Reference
Python · langchain-core · globals
Module · Since v0.1

    globals

    Global values and configuration that apply to all of LangChain.

    Functions

    function
    set_verbose

    Set a new value for the verbose global setting.

    function
    get_verbose

    Get the value of the verbose global setting.

    function
    set_debug

    Set a new value for the debug global setting.

    function
    get_debug

    Get the value of the debug global setting.
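The four functions above are thin getters and setters over module-level flags. A minimal sketch of that pattern in plain Python (illustrative only, not LangChain's actual implementation — the real functions live in langchain_core.globals):

```python
# Module-level global flags, mirroring the set_verbose/get_verbose
# and set_debug/get_debug pattern. (Illustrative sketch only.)

_verbose: bool = False
_debug: bool = False


def set_verbose(value: bool) -> None:
    """Set a new value for the verbose global setting."""
    global _verbose
    _verbose = value


def get_verbose() -> bool:
    """Get the value of the verbose global setting."""
    return _verbose


def set_debug(value: bool) -> None:
    """Set a new value for the debug global setting."""
    global _debug
    _debug = value


def get_debug() -> bool:
    """Get the value of the debug global setting."""
    return _debug
```

In application code the same calls are made against the library itself, e.g. `from langchain_core.globals import set_debug; set_debug(True)`.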

    function
    set_llm_cache

    Set a new LLM cache, overwriting the previous value, if any.

    function
    get_llm_cache

    Get the value of the llm_cache global setting.
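set_llm_cache and get_llm_cache follow the same global-slot pattern, except the value is a cache object rather than a flag. A sketch of that pattern with a toy stand-in cache (the DictCache class here is hypothetical; in real code you would pass a BaseCache implementation such as one from langchain_core.caches):

```python
from typing import Any, Optional


class DictCache:
    """Toy in-memory cache keyed by (prompt, llm_string).

    Hypothetical stand-in for a BaseCache implementation.
    """

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], Any] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[Any]:
        # Return the cached value, or None on a miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: Any) -> None:
        self._store[(prompt, llm_string)] = return_val


# The global cache slot; None means caching is disabled.
_llm_cache: Optional[DictCache] = None


def set_llm_cache(value: Optional[DictCache]) -> None:
    """Set a new LLM cache, overwriting the previous value, if any."""
    global _llm_cache
    _llm_cache = value


def get_llm_cache() -> Optional[DictCache]:
    """Get the value of the llm_cache global setting."""
    return _llm_cache
```

Passing None to set_llm_cache disables caching again, since every consumer reads the slot through get_llm_cache at call time.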

    Classes

    class
    BaseCache

    Interface for a caching layer for LLMs and Chat models.

    The cache interface consists of the following methods:

    • lookup: Look up a value based on a prompt and llm_string.
    • update: Update the cache based on a prompt and llm_string.
    • clear: Clear the cache.

    In addition, the cache interface provides an async version of each method.

    The default implementation of each async method runs the corresponding synchronous method in an executor. Implementations backed by a natively asynchronous store should override the async methods to avoid that unnecessary overhead.
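A dict-backed sketch of this interface, with the async methods overridden natively rather than delegated to an executor (a dict lookup never blocks, so the executor hop would be pure overhead). The method names mirror BaseCache's lookup/update/clear and their async counterparts; the class itself is illustrative and does not subclass the real base class from langchain_core.caches:

```python
import asyncio
from typing import Any, Optional


class InMemoryCacheSketch:
    """Dict-backed cache following the BaseCache-style interface.

    Illustrative sketch, not the library's own in-memory cache.
    """

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], Any] = {}

    # --- synchronous interface ---
    def lookup(self, prompt: str, llm_string: str) -> Optional[Any]:
        # Return the cached generation, or None on a miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: Any) -> None:
        # Cache the value under the (prompt, llm_string) key.
        self._store[(prompt, llm_string)] = return_val

    def clear(self) -> None:
        self._store.clear()

    # --- async interface: overridden natively instead of running the
    # sync method in an executor, since nothing here blocks ---
    async def alookup(self, prompt: str, llm_string: str) -> Optional[Any]:
        return self.lookup(prompt, llm_string)

    async def aupdate(self, prompt: str, llm_string: str, return_val: Any) -> None:
        self.update(prompt, llm_string, return_val)

    async def aclear(self) -> None:
        self.clear()
```

A cache with a genuinely asynchronous backend (e.g. an async database driver) would instead await its driver's calls inside alookup/aupdate/aclear.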
