Update the cache with a new result for a given prompt and language model.
This method stores a new result in the Redis cache for the specified prompt and language model combination.
```python
update(
    self,
    prompt: str,
    llm_string: str,
    return_val: RETURN_VAL_TYPE,
) -> None
```

Example:
```python
from langchain_core.outputs import Generation
from langchain_redis import RedisCache  # assumes the langchain-redis integration package

# Connect to a local Redis instance; cached entries expire after one hour.
cache = RedisCache(redis_url="redis://localhost:6379", ttl=3600)

prompt = "What is the capital of France?"
llm_string = "openai/gpt-3.5-turbo"
result = [Generation(text="The capital of France is Paris.")]

# Store the generation under the (prompt, llm_string) cache key.
cache.update(prompt, llm_string, result)
```
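Once stored, the entry can be read back with the same prompt and llm_string pair. A minimal sketch, assuming the standard `lookup` counterpart to `update` from the base cache interface and the cache instance created above:

```python
# Retrieve the previously cached result; returns None on a cache miss.
cached = cache.lookup(prompt, llm_string)
if cached is not None:
    # cached is the RETURN_VAL_TYPE stored earlier: a list of Generation objects.
    print(cached[0].text)  # "The capital of France is Paris."
```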
Note:
This method combines the prompt and llm_string to create the cache key. If a result already exists for the same prompt and llm_string, it will be overwritten (see the sketch after the parameter table).

| Name | Type | Description |
|---|---|---|
| prompt* | str | The input prompt associated with the result. |
| llm_string* | str | A string representation of the language model and its parameters. |
| return_val* | RETURN_VAL_TYPE | The result to be cached, typically a list containing a single Generation object. |
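Because the key is derived only from the prompt and llm_string, a second `update` with the same pair replaces the earlier entry rather than appending to it. A minimal sketch of that overwrite behavior, reusing the cache and identifiers from the example above:

```python
from langchain_core.outputs import Generation

# Same (prompt, llm_string) pair as before, so the same cache key is derived
# and the earlier value is overwritten.
cache.update(prompt, llm_string, [Generation(text="Paris.")])

# Only the latest result is returned.
print(cache.lookup(prompt, llm_string)[0].text)  # "Paris."
```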