```python
update(
    self,
    prompt: str,
    llm_string: str,
    return_val: RETURN_VAL_TYPE
) -> None
```

| Name | Type | Description |
|---|---|---|
| prompt* | str | The input prompt associated with the result. |
| llm_string* | str | A string representation of the language model and its parameters. |
| return_val* | RETURN_VAL_TYPE | The result to be cached, typically a list containing a single Generation object. |
Update the semantic cache with a new result for a given prompt and language model.

This method stores a new result in the Redis semantic cache for the specified prompt and llm_string combination. The prompt is stored as a vector embedding, so later lookups can match semantically similar prompts rather than requiring an exact string match.
Example:

```python
from langchain_core.outputs import Generation
from langchain_openai import OpenAIEmbeddings
from langchain_redis import RedisSemanticCache

cache = RedisSemanticCache(
    embeddings=OpenAIEmbeddings(),
    redis_url="redis://localhost:6379"
)

prompt = "What is the capital of France?"
llm_string = "openai/gpt-3.5-turbo"
result = [Generation(text="The capital of France is Paris.")]

cache.update(prompt, llm_string, result)
```
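Because entries are keyed by embedding, a later lookup with a semantically similar (not identical) prompt can return the cached result. A minimal sketch, assuming the standard LangChain cache interface, where `lookup(prompt, llm_string)` returns the cached generations or `None`; the paraphrased prompt here is illustrative, and whether it hits depends on the embedding model and the cache's distance threshold:

```python
# A semantically similar phrasing of the cached prompt.
similar_prompt = "Which city is France's capital?"

cached = cache.lookup(similar_prompt, llm_string)
if cached is not None:
    print(cached[0].text)  # "The capital of France is Paris."
```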
Note:

The combination of the embedded prompt, llm_string, and result is stored as a new entry in the Redis cache.
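In typical use, update is not called directly; the cache is registered globally so that LLM calls populate and consult it automatically. A minimal sketch, assuming langchain_core's set_llm_cache helper and an OpenAI chat model for illustration:

```python
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI

# Register the semantic cache for all LangChain LLM calls in this process;
# the framework then calls lookup() on each call and update() on misses.
set_llm_cache(cache)

llm = ChatOpenAI(model="gpt-3.5-turbo")
llm.invoke("What is the capital of France?")   # miss: result stored via update()
llm.invoke("Which city is France's capital?")  # similar prompt may hit the cache
```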