Retrieves data from the cache. It constructs a cache key from the given
prompt and llmKey, and returns the corresponding value from the
Redis database.
Sets a custom key encoder function for the cache. This function should take a prompt and an LLM key and return a string that will be used as the cache key.
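A minimal sketch of what such a key encoder function might look like, written as standalone TypeScript (the setter's exact method name on the cache is not shown above, so only the encoder itself is sketched here). Hashing the combined prompt and LLM key keeps the resulting Redis keys short and free of problematic characters:

```typescript
import { createHash } from "node:crypto";

// Hypothetical key encoder: takes a prompt and an LLM key,
// returns a string suitable for use as a Redis cache key.
const keyEncoder = (prompt: string, llmKey: string): string => {
  const digest = createHash("sha256")
    .update(`${prompt}:${llmKey}`)
    .digest("hex");
  return `llm-cache:${digest}`;
};

console.log(keyEncoder("Tell me a joke", "openai-gpt-4o-mini"));
```

The `llm-cache:` prefix is an illustrative choice; any scheme works as long as distinct (prompt, llmKey) pairs map to distinct keys.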
Updates the cache with new data. It constructs a cache key from the
given prompt and llmKey, and stores the value in the Redis
database.
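The lookup/update contract described above can be sketched with an in-memory stand-in. This is a toy illustration of the key-construction pattern, not the Redis-backed implementation: the real cache serializes values into Redis rather than a `Map`, and the `Generation` shape here is simplified:

```typescript
// Simplified stand-in for a chat generation result.
type Generation = { text: string };

// Toy cache mirroring the lookup/update contract: both methods
// derive the same key from the prompt and llmKey.
class InMemoryCache {
  private store = new Map<string, Generation[]>();

  private key(prompt: string, llmKey: string): string {
    return `${prompt}:${llmKey}`;
  }

  // Build the cache key and return the stored value, or null on a miss.
  async lookup(prompt: string, llmKey: string): Promise<Generation[] | null> {
    return this.store.get(this.key(prompt, llmKey)) ?? null;
  }

  // Build the same cache key and store the new value under it.
  async update(prompt: string, llmKey: string, value: Generation[]): Promise<void> {
    this.store.set(this.key(prompt, llmKey), value);
  }
}
```

Because both methods share one key function, a `lookup` with the same prompt and llmKey as an earlier `update` always finds the stored value.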
import { ChatOpenAI } from "@langchain/openai";
import { RedisCache } from "@langchain/community/caches/ioredis";
import { Redis } from "ioredis";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  // Cache model responses in Redis; entries expire after 60 seconds
  cache: new RedisCache(new Redis(), { ttl: 60 }),
});

// Invoke the model; an identical prompt within the TTL is served from the cache
const response = await model.invoke("Do something random!");
console.log(response);