langchain.js

    VercelKVCache

    A cache that uses Vercel KV as the backing store.

    import { ChatOpenAI } from "@langchain/openai";
    import { VercelKVCache } from "@langchain/community/caches/vercel_kv";

    const cache = new VercelKVCache({
      ttl: 3600, // Optional: cache entries will expire after 1 hour
    });

    // Initialize the OpenAI model with the Vercel KV cache for caching responses
    const model = new ChatOpenAI({
      model: "gpt-4o-mini",
      cache,
    });
    await model.invoke("How are you today?");
    const cachedValues = await cache.lookup("How are you today?", "llmKey");
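
    Once the cache is wired in, a second invoke with an identical prompt should
    be served from Vercel KV instead of triggering a new OpenAI request. A
    minimal sketch continuing the example above (the variable name is
    illustrative):

    // Invoking again with the same prompt should hit the cache: the
    // generation is read back from Vercel KV rather than the OpenAI API.
    const cachedResponse = await model.invoke("How are you today?");
    console.log(cachedResponse.content);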

    Methods

    • lookup(prompt, llmKey)

      Look up LLM generations in the cache by prompt and associated LLM key.

      Parameters

      • prompt: string
      • llmKey: string

      Returns Promise<null | Generation[]>
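
      Because a cache miss resolves to null, callers typically guard the
      result before using it. A minimal sketch reusing the cache from the
      example above (the llmKey string is illustrative):

      const generations = await cache.lookup("How are you today?", "llmKey");
      if (generations === null) {
        console.log("cache miss");
      } else {
        // each Generation carries the raw text of a cached completion
        console.log(generations.map((g) => g.text));
      }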

    • update(prompt, llmKey, value)

      Update the cache with the given generations.

      Note that this overwrites any existing generations for the given prompt and LLM key.

      Parameters

      • prompt: string
      • llmKey: string
      • value: Generation[]

      Returns Promise<void>
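
      For illustration, update can seed the cache by hand and lookup reads the
      entry back. A minimal sketch assuming the cache instance from the example
      above and the minimal { text } Generation shape from
      @langchain/core/outputs:

      import type { Generation } from "@langchain/core/outputs";

      // Seed the cache manually; this overwrites any existing entry
      // stored under the same prompt and LLM key.
      const seeded: Generation[] = [{ text: "I'm doing well, thank you!" }];
      await cache.update("How are you today?", "llmKey", seeded);

      // The seeded generations are now returned by lookup.
      const hit = await cache.lookup("How are you today?", "llmKey");
      console.log(hit?.[0].text); // "I'm doing well, thank you!"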