A cache that uses Vercel KV as the backing store.
lookup retrieves data from the Redis server for a given prompt and LLM key, returning null when no entry is found.
The cache's default key encoder can be replaced with a custom function that takes a prompt and an LLM key and returns the string used as the cache key.
update writes data to the Redis server for a given prompt and LLM key.
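For reference, a minimal sketch of driving the cache directly: installing a custom key encoder, writing an entry, and reading it back. It assumes the key-encoder setter described above is exposed as makeDefaultKeyEncoder and that cache values are Generation[] arrays as in other LangChain caches; the prompt-joining encoder itself is purely illustrative.

import { VercelKVCache } from "@langchain/community/caches/vercel_kv";
import type { Generation } from "@langchain/core/outputs";

const cache = new VercelKVCache<Generation[]>({ ttl: 3600 });

// Illustrative encoder: build the cache key by joining the prompt and
// LLM key (assumes makeDefaultKeyEncoder is the setter described above)
cache.makeDefaultKeyEncoder((prompt, llmKey) => `${prompt}:${llmKey}`);

// Write a value for a prompt/LLM-key pair...
const generations: Generation[] = [{ text: "I'm doing well, thank you!" }];
await cache.update("How are you today?", "llmKey", generations);

// ...and read it back; lookup resolves to null on a cache miss
const hit = await cache.lookup("How are you today?", "llmKey");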
import { ChatOpenAI } from "@langchain/openai";
import { VercelKVCache } from "@langchain/community/caches/vercel_kv";

// Create a Vercel KV-backed cache; the default client reads
// KV_REST_API_URL and KV_REST_API_TOKEN from the environment.
const cache = new VercelKVCache({
  ttl: 3600, // Optional: cache entries will expire after 1 hour
});

// Initialize the OpenAI model with the Vercel KV cache for caching responses
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  cache,
});

// The first call stores the generation; repeating the identical call
// returns the cached response instead of hitting the API.
await model.invoke("How are you today?");

// Inspect the cache directly with a prompt and an LLM key
const cachedValues = await cache.lookup("How are you today?", "llmKey");