langchain.js

    Class RedisCache

    Cache LLM results using Redis.

    import { Redis } from "ioredis";
    import { ChatOpenAI } from "@langchain/openai";
    import { RedisCache } from "@langchain/community/caches/ioredis";

    // Create an ioredis client and a cache whose entries expire after 60 seconds
    const redisClient = new Redis();

    const model = new ChatOpenAI({
      model: "gpt-4o-mini",
      cache: new RedisCache(redisClient, { ttl: 60 }),
    });

    // Invoke the model with a prompt
    const response = await model.invoke("Do something random!");
    console.log(response);

    // Disconnect the Redis client when done
    redisClient.disconnect();
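
    Because entries are keyed by the prompt together with the model's configuration, a second invoke with the same prompt inside the 60-second TTL window is answered from Redis instead of calling the model again.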

    Hierarchy

    • BaseCache
      • RedisCache


    Constructors

    constructor

    • new RedisCache(redisClient: Redis, config?: { ttl?: number }): RedisCache

      Parameters

      • redisClient: Redis

        The ioredis client the cache will use.

      • Optional config: { ttl?: number }

        Optional settings; ttl sets the time-to-live for cached entries, in seconds.

      Returns RedisCache
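
    As a minimal sketch, assuming a Redis server reachable at placeholder connection details, any configured ioredis client can be passed to the constructor:

    import { Redis } from "ioredis";
    import { RedisCache } from "@langchain/community/caches/ioredis";

    // Placeholder connection details; adjust for your deployment
    const redisClient = new Redis({ host: "localhost", port: 6379 });

    // Cached entries expire after 300 seconds; omit config to cache without expiry
    const cache = new RedisCache(redisClient, { ttl: 300 });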

    Properties

    redisClient: Redis

      The ioredis client used to communicate with the Redis server.

    ttl?: number

      Optional time-to-live for cached entries, in seconds. If omitted, entries do not expire.

    Methods

    lookup

    • lookup(prompt: string, llmKey: string): Promise<null | Generation[]>

      Retrieves data from the Redis server using a prompt and an LLM key. If no cached entry is found, it returns null.

      Parameters

      • prompt: string

        The prompt used to find the data.

      • llmKey: string

        The LLM key used to find the data.

      Returns Promise<null | Generation[]>

      The corresponding data as an array of Generation objects, or null if not found.
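
    For illustration, a minimal sketch of calling lookup directly; in normal use the model calls it internally, and the "llm-key" string below is a hypothetical placeholder for the key LangChain derives from the model's serialized configuration:

    import { Redis } from "ioredis";
    import { RedisCache } from "@langchain/community/caches/ioredis";

    const cache = new RedisCache(new Redis());

    // "llm-key" is a hypothetical placeholder value
    const hit = await cache.lookup("Do something random!", "llm-key");
    if (hit !== null) {
      console.log(hit[0].text); // each Generation carries a text field
    }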

    update

    • update(prompt: string, llmKey: string, value: Generation[]): Promise<void>

      Updates the data in the Redis server using a prompt and an LLM key.

      Parameters

      • prompt: string

        The prompt used to store the data.

      • llmKey: string

        The LLM key used to store the data.

      • value: Generation[]

        The data to be stored, represented as an array of Generation objects.

      Returns Promise<void>
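
    A short round-trip sketch of update followed by lookup, again with a hypothetical "llm-key" placeholder and a minimal Generation object:

    import { Redis } from "ioredis";
    import { RedisCache } from "@langchain/community/caches/ioredis";

    const redisClient = new Redis();
    const cache = new RedisCache(redisClient, { ttl: 60 });

    // Store a Generation array under the (prompt, llmKey) pair...
    await cache.update("hello", "llm-key", [{ text: "world" }]);

    // ...then read it back; prints an array with one { text: "world" } generation
    console.log(await cache.lookup("hello", "llm-key"));

    redisClient.disconnect();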