langchain.js

    A cache that uses Upstash as the backing store. See https://docs.upstash.com/redis.

    import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";
    import { ChatOpenAI } from "@langchain/openai";

    // Create a cache backed by Upstash Redis; replace the placeholder strings
    // with your Upstash REST URL and token.
    const cache = new UpstashRedisCache({
      config: {
        url: "UPSTASH_REDIS_REST_URL",
        token: "UPSTASH_REDIS_REST_TOKEN",
      },
      ttl: 3600, // Optional: cache entries will expire after 1 hour
    });

    // Initialize the OpenAI model with the Upstash Redis cache for caching responses
    const model = new ChatOpenAI({
      model: "gpt-4o-mini",
      cache,
    });

    await model.invoke("How are you today?");
    const cachedValues = await cache.lookup("How are you today?", "llmKey");

    Methods

    • lookup: Look up LLM generations in the cache by prompt and associated LLM key.

      Parameters

      • prompt: string
      • llmKey: string

      Returns Promise<null | Generation[]>
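      A minimal sketch of calling this method directly, reusing the cache instance and the "llmKey" string from the example above (in normal use the key is derived from the model's serialized parameters); a cache miss resolves to null, so check before reading the generations:

        const generations = await cache.lookup("How are you today?", "llmKey");
        if (generations !== null) {
          // Each Generation carries the cached completion text.
          console.log(generations[0].text);
        }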

    • update: Update the cache with the given generations.

      Note that this overwrites any existing generations stored for the given prompt and LLM key.

      Parameters

      • prompt: string
      • llmKey: string
      • value: Generation[]

      Returns Promise<void>
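      A minimal sketch of seeding the cache by hand with this method, again assuming the cache instance and "llmKey" from the example above; the value is an array of Generation objects, each of which needs at least a text field:

        await cache.update("How are you today?", "llmKey", [
          { text: "I'm doing well, thank you for asking!" },
        ]);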