Cache LLM results using Redis.
Time-to-live (TTL) for cached items in seconds.
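For instance, a cache whose entries expire after one hour can be constructed like this (a minimal sketch, assuming a local Redis instance reachable with default connection settings):

import { Redis } from "ioredis";
import { RedisCache } from "@langchain/community/caches/ioredis";

// Entries written through this cache expire 3600 seconds after being stored
const cache = new RedisCache(new Redis(), { ttl: 3600 });

Omitting ttl stores entries without an expiry, leaving eviction to Redis itself.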
Retrieves cached generations from the Redis server for a given prompt and LLM key. If no entry is found, it returns null.
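A sketch of a direct lookup, assuming the lookup(prompt, llmKey) signature from the base cache interface and the cache instance from the sketch above; the prompt and key strings are placeholders:

// Resolves to the cached generations, or null on a cache miss
const cached = await cache.lookup("Tell me a joke", "example-llm-key");
if (cached === null) {
  console.log("cache miss");
}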
Sets a custom key encoder function for the cache. This function should take a prompt and an LLM key and return a string that will be used as the cache key.
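As a sketch, an encoder that namespaces keys by application might look like this (the "my-app" prefix is purely illustrative):

// Replace the default key derivation with readable, namespaced keys
cache.makeDefaultKeyEncoder((prompt, llmKey) => `my-app:${prompt}:${llmKey}`);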
Updates the data stored in the Redis server for a given prompt and LLM key, applying the configured TTL if one was set.
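A hedged sketch of writing an entry directly; the minimal { text } generation shape follows the base cache interface, and the strings are placeholders:

// Store one generation under this prompt/key pair (the configured TTL applies)
await cache.update("Tell me a joke", "example-llm-key", [
  { text: "Why did the chicken cross the road?" },
]);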
Putting it together, end to end:

import { Redis } from "ioredis";
import { RedisCache } from "@langchain/community/caches/ioredis";
import { ChatOpenAI } from "@langchain/openai";

// Create a Redis client and wire it into the model's cache
const redisClient = new Redis();
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  cache: new RedisCache(redisClient, { ttl: 60 }),
});
// Invoke the model with a prompt
const response = await model.invoke("Do something random!");
console.log(response);
// Remember to disconnect the Redis client when done
redisClient.disconnect();