LangChain Reference

    langchain_core.language_models.llms.aupdate_cache
    Function · Since v0.1

    aupdate_cache

    Update the cache and get the LLM output. Async version.

    aupdate_cache(
      cache: BaseCache | bool | None,
      existing_prompts: dict[int, list],
      llm_string: str,
      missing_prompt_idxs: list[int],
      new_results: LLMResult,
      prompts: list[str]
    ) -> dict | None

    Parameters

    cache (BaseCache | bool | None), required
        Cache to write to: a BaseCache instance, a boolean enabling or
        disabling the global cache, or None to fall back to the global cache.

    existing_prompts (dict[int, list]), required
        Dictionary mapping prompt indexes to generations already retrieved
        from the cache; entries for the missing prompts are filled in.

    llm_string (str), required
        String representation of the LLM's parameters, used as part of the
        cache key.

    missing_prompt_idxs (list[int]), required
        Indexes of the prompts that were not found in the cache.

    new_results (LLMResult), required
        Newly generated LLMResult covering the missing prompts.

    prompts (list[str]), required
        Full list of prompt strings.
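
    Example

    The sketch below drives aupdate_cache by hand with an InMemoryCache; in
    normal use the function is called internally by the LLM caching path
    rather than by application code. It assumes, based on the signature above
    and the synchronous update_cache counterpart, that the call fills
    existing_prompts in place, writes each new generation to the cache under
    the (prompt, llm_string) key, and returns the LLMResult's llm_output dict
    (or None). The prompts, results, and llm_string value are illustrative
    placeholders.

    import asyncio

    from langchain_core.caches import InMemoryCache
    from langchain_core.language_models.llms import aupdate_cache
    from langchain_core.outputs import Generation, LLMResult


    async def main() -> None:
        cache = InMemoryCache()
        prompts = ["What is 2 + 2?", "Name a prime number."]
        # Placeholder for the serialized LLM parameters; in real use this
        # comes from the model and is part of the cache key with each prompt.
        llm_string = "fake-llm-params"

        # Suppose neither prompt was found in the cache, and fresh results
        # have just been generated for both of them.
        existing_prompts: dict[int, list] = {}
        missing_prompt_idxs = [0, 1]
        new_results = LLMResult(
            generations=[[Generation(text="4")], [Generation(text="7")]],
            llm_output={"token_usage": {}},
        )

        # Write the new generations into the cache and collect llm_output.
        llm_output = await aupdate_cache(
            cache,
            existing_prompts,
            llm_string,
            missing_prompt_idxs,
            new_results,
            prompts,
        )
        print(llm_output)  # {'token_usage': {}}
        print(existing_prompts[0][0].text)  # '4' (filled in place)
        # A later lookup with the same prompt and llm_string hits the cache.
        print(await cache.alookup(prompts[1], llm_string))


    asyncio.run(main())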
