Look up the result of a previous language model call in the Redis cache.
This method checks if there's a cached result for the given prompt and language model combination.
lookup(
    self,
    prompt: str,
    llm_string: str
) -> Optional[RETURN_VAL_TYPE]

Example:
    cache = RedisCache(redis_url="redis://localhost:6379")
    prompt = "What is the capital of France?"
    llm_string = "openai/gpt-3.5-turbo"
    result = cache.lookup(prompt, llm_string)
    if result:
        print("Cache hit:", result[0].text)
    else:
        print("Cache miss")
Note:
    This method combines the prompt and llm_string to create
    the cache key. A cache hit returns the cached value as a list of
    Generation objects. If the cached value is None or cannot be parsed,
    None is returned.
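To illustrate the lookup behavior described above, here is a minimal, self-contained sketch that stands in for the Redis-backed cache with an in-memory dict. The `MiniCache` class, its key scheme, and its storage format are assumptions for illustration only, not the library's actual implementation; it mirrors the documented contract: keys derived from the prompt/llm_string pair, and `None` returned on a miss or when the stored value cannot be parsed.

```python
import hashlib
import json
from typing import List, Optional

class MiniCache:
    """Hypothetical in-memory stand-in for a Redis-backed LLM cache."""

    def __init__(self) -> None:
        self._store: dict = {}  # plays the role of the Redis server

    def _key(self, prompt: str, llm_string: str) -> str:
        # Assumed key scheme: hash the prompt/llm_string pair together.
        return hashlib.sha256(f"{prompt}:{llm_string}".encode()).hexdigest()

    def update(self, prompt: str, llm_string: str, texts: List[str]) -> None:
        # Store the result serialized as JSON, as a stand-in for
        # whatever serialization the real cache uses.
        self._store[self._key(prompt, llm_string)] = json.dumps(texts)

    def lookup(self, prompt: str, llm_string: str) -> Optional[List[str]]:
        raw = self._store.get(self._key(prompt, llm_string))
        if raw is None:
            return None  # cache miss
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            return None  # an unparsable cached value is treated as a miss

cache = MiniCache()
cache.update("What is the capital of France?", "openai/gpt-3.5-turbo", ["Paris"])
print(cache.lookup("What is the capital of France?", "openai/gpt-3.5-turbo"))
print(cache.lookup("unseen prompt", "openai/gpt-3.5-turbo"))
```

The same prompt with a different llm_string produces a different key, so results cached for one model are never returned for another.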