lookup(
    self,
    prompt: str,
    llm_string: str
) -> Optional[RETURN_VAL_TYPE]

Look up the result of a previous language model call in the Redis semantic cache.
This method checks if there's a cached result for a semantically similar prompt and the same language model combination.
Example:
    from langchain_openai import OpenAIEmbeddings
    from langchain_redis import RedisSemanticCache

    cache = RedisSemanticCache(
        embeddings=OpenAIEmbeddings(),
        redis_url="redis://localhost:6379"
    )

    prompt = "What's the capital city of France?"
    llm_string = "openai/gpt-3.5-turbo"

    result = cache.lookup(prompt, llm_string)
    if result:
        print("Semantic cache hit:", result[0].text)
    else:
        print("Semantic cache miss")
Note:
    llm_string is used to ensure the cached result is from the same language model.

Parameters:
    prompt – The input prompt for which to look up the cached result.
    llm_string – A string representation of the language model and its parameters.
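
In practice, lookup() is rarely called directly: the cache is usually registered globally with set_llm_cache, and the LLM client derives llm_string from its own serialized parameters before calling lookup() and update() on your behalf. Below is a minimal sketch of that wiring, assuming the langchain_redis and langchain_openai packages are installed, a Redis server is running locally, and an OpenAI API key is configured:

    from langchain_core.globals import set_llm_cache
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings
    from langchain_redis import RedisSemanticCache

    # Register the semantic cache globally; lookup() and update()
    # are then invoked automatically on every LLM call.
    set_llm_cache(
        RedisSemanticCache(
            embeddings=OpenAIEmbeddings(),
            redis_url="redis://localhost:6379"
        )
    )

    llm = ChatOpenAI(model="gpt-3.5-turbo")
    llm.invoke("What's the capital city of France?")  # cache miss: calls the API, result is cached
    llm.invoke("What is France's capital city?")      # likely a cache hit if within the distance threshold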