A cache that uses Momento as the backing store. See https://gomomento.com.
Retrieves data from the Momento cache using a prompt and an LLM key. Returns null if the data is not found.
Sets a custom key encoder function for the cache. This function should take a prompt and an LLM key and return a string that will be used as the cache key.
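A minimal sketch of such a key encoder, assuming the encoder simply needs to map a (prompt, LLM key) pair to a stable string. The `sha256KeyEncoder` name and the hashing choice are illustrative, not part of the library:

```typescript
import { createHash } from "node:crypto";

// Hypothetical key encoder: takes a prompt and an LLM key and
// returns a deterministic string to use as the cache key.
type KeyEncoder = (prompt: string, llmKey: string) => string;

const sha256KeyEncoder: KeyEncoder = (prompt, llmKey) =>
  createHash("sha256").update(`${prompt}:${llmKey}`).digest("hex");

console.log(sha256KeyEncoder("How are you today?", "llmKey"));
```

Hashing keeps keys short and uniform regardless of prompt length, which matters when the backing store limits key size.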
Updates the data in the Momento cache using a prompt and an LLM key.
Create a new standard cache backed by Momento.
import { CacheClient, Configurations, CredentialProvider } from "@gomomento/sdk";
import { ChatOpenAI } from "@langchain/openai";
import { MomentoCache } from "@langchain/community/caches/momento";

// Create the cache, backed by a Momento cache named "langchain".
const cache = await MomentoCache.fromProps({
  client: new CacheClient({
    configuration: Configurations.Laptop.v1(),
    credentialProvider: CredentialProvider.fromEnvironmentVariable({
      environmentVariableName: "MOMENTO_API_KEY",
    }),
    defaultTtlSeconds: 60 * 60 * 24, // cache TTL set to 24 hours
  }),
  cacheName: "langchain",
});

// Initialize the OpenAI chat model with the Momento cache so responses are cached.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  cache,
});

await model.invoke("How are you today?");

// Inspect the cached value directly; the second argument is the LLM key.
const cachedValues = await cache.lookup("How are you today?", "llmKey");
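The lookup/update contract used above can be sketched with an in-memory stand-in. The class and the simplified `Generation` type below are hypothetical; the real MomentoCache serializes LangChain generation objects into Momento rather than a `Map`:

```typescript
// Simplified stand-in for LangChain's Generation type.
type Generation = { text: string };

// In-memory sketch of the cache contract: lookup returns null on a miss,
// and update stores generations under a key derived from (prompt, llmKey).
class InMemoryCache {
  private store = new Map<string, Generation[]>();

  private key(prompt: string, llmKey: string): string {
    // Join with a separator that cannot appear in either part.
    return `${prompt}\u0000${llmKey}`;
  }

  async lookup(prompt: string, llmKey: string): Promise<Generation[] | null> {
    return this.store.get(this.key(prompt, llmKey)) ?? null;
  }

  async update(
    prompt: string,
    llmKey: string,
    value: Generation[]
  ): Promise<void> {
    this.store.set(this.key(prompt, llmKey), value);
  }
}
```

A first `lookup` for a prompt misses (null); after `update`, the same prompt and LLM key return the stored generations, which is exactly what lets the model skip a repeated API call.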