# Caches

## langchain_core.caches

The `caches` module provides an optional caching layer for language models.
> **Warning:** This is a beta feature! Please be wary of deploying experimental code to production unless you've taken appropriate precautions.
A cache is useful for two reasons:

- It can save you money by reducing the number of paid API calls you make to the LLM provider when you request the same completion multiple times.
- It can speed up your application by serving repeated requests from the cache instead of waiting on a network round trip.
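For example, here is a minimal sketch of enabling a cache globally with `set_llm_cache` (the commented-out model usage is illustrative; any LangChain chat model behaves the same way):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Register a process-wide cache; LangChain models consult it
# before calling the provider.
set_llm_cache(InMemoryCache())

# Illustrative usage, assuming some configured chat model `llm`:
# llm.invoke("Tell me a joke")  # first call hits the provider's API
# llm.invoke("Tell me a joke")  # identical call is served from the cache
```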
### InMemoryCache

Bases: `BaseCache`

Cache that stores things in memory.

| METHOD | DESCRIPTION |
| --- | --- |
| `__init__` | Initialize with empty cache. |
| `lookup` | Look up based on `prompt` and `llm_string`. |
| `update` | Update cache based on `prompt` and `llm_string`. |
| `clear` | Clear cache. |
| `alookup` | Async look up based on `prompt` and `llm_string`. |
| `aupdate` | Async update cache based on `prompt` and `llm_string`. |
| `aclear` | Async clear cache. |
#### __init__

`__init__(*, maxsize: int | None = None) -> None`

Initialize with empty cache.

| PARAMETER | DESCRIPTION |
| --- | --- |
| `maxsize` | The maximum number of items to store in the cache. If `None`, the cache has no maximum size. **TYPE:** `int \| None` **DEFAULT:** `None` |

| RAISES | DESCRIPTION |
| --- | --- |
| `ValueError` | If `maxsize` is less than or equal to `0`. |
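A short sketch of the bounded-cache behavior, assuming the oldest entries are evicted first once `maxsize` is exceeded (the prompt and config strings are illustrative placeholders):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache(maxsize=2)

# Store three entries in a cache that holds at most two.
cache.update("prompt-1", "llm-config", [Generation(text="a")])
cache.update("prompt-2", "llm-config", [Generation(text="b")])
cache.update("prompt-3", "llm-config", [Generation(text="c")])

assert cache.lookup("prompt-1", "llm-config") is None      # oldest entry evicted
assert cache.lookup("prompt-3", "llm-config") is not None  # still cached

# InMemoryCache(maxsize=0) raises ValueError at construction.
```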
#### lookup

`lookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None`

Look up based on `prompt` and `llm_string`.

| PARAMETER | DESCRIPTION |
| --- | --- |
| `prompt` | A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt into the language model. **TYPE:** `str` |
| `llm_string` | A string representation of the LLM configuration. **TYPE:** `str` |

| RETURNS | DESCRIPTION |
| --- | --- |
| `RETURN_VAL_TYPE \| None` | On a cache miss, return `None`. |
#### update

`update(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None`

Update cache based on `prompt` and `llm_string`.

| PARAMETER | DESCRIPTION |
| --- | --- |
| `prompt` | A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt into the language model. **TYPE:** `str` |
| `llm_string` | A string representation of the LLM configuration. **TYPE:** `str` |
| `return_val` | The value to be cached. The value is a list of `Generation` objects (or subclasses). **TYPE:** `RETURN_VAL_TYPE` |
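A round-trip sketch of `update` followed by `lookup` (the prompt and `llm_string` values are illustrative):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache()

prompt = "Translate 'hello' to French."
llm_string = "model=example;temperature=0"  # illustrative config string

assert cache.lookup(prompt, llm_string) is None                # cache miss
cache.update(prompt, llm_string, [Generation(text="bonjour")])
assert cache.lookup(prompt, llm_string)[0].text == "bonjour"   # cache hit
```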
#### alookup `async`

`alookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None`

Async look up based on `prompt` and `llm_string`.

| PARAMETER | DESCRIPTION |
| --- | --- |
| `prompt` | A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt into the language model. **TYPE:** `str` |
| `llm_string` | A string representation of the LLM configuration. **TYPE:** `str` |

| RETURNS | DESCRIPTION |
| --- | --- |
| `RETURN_VAL_TYPE \| None` | On a cache miss, return `None`. |
#### aupdate `async`

`aupdate(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None`

Async update cache based on `prompt` and `llm_string`.

| PARAMETER | DESCRIPTION |
| --- | --- |
| `prompt` | A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt into the language model. **TYPE:** `str` |
| `llm_string` | A string representation of the LLM configuration. **TYPE:** `str` |
| `return_val` | The value to be cached. The value is a list of `Generation` objects (or subclasses). **TYPE:** `RETURN_VAL_TYPE` |
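The async variants mirror the sync methods; a brief sketch:

```python
import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

async def main() -> None:
    cache = InMemoryCache()
    # Same semantics as update/lookup, awaited instead of called directly.
    await cache.aupdate("prompt", "llm-config", [Generation(text="hi")])
    cached = await cache.alookup("prompt", "llm-config")
    assert cached is not None and cached[0].text == "hi"

asyncio.run(main())
```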
### BaseCache

Bases: `ABC`

Interface for a caching layer for LLMs and Chat models.

The cache interface consists of the following methods:

- `lookup`: Look up a value based on a `prompt` and `llm_string`.
- `update`: Update the cache based on a `prompt` and `llm_string`.
- `clear`: Clear the cache.

In addition, the cache interface provides an async version of each method.

The default implementation of the async methods runs the synchronous method in an executor. It's recommended to override the async methods with native async implementations to avoid unnecessary overhead. A sketch of a concrete subclass follows the method table below.
| METHOD | DESCRIPTION |
| --- | --- |
| `lookup` | Look up based on `prompt` and `llm_string`. |
| `update` | Update cache based on `prompt` and `llm_string`. |
| `clear` | Clear cache; can take additional keyword arguments. |
| `alookup` | Async look up based on `prompt` and `llm_string`. |
| `aupdate` | Async update cache based on `prompt` and `llm_string`. |
| `aclear` | Async clear cache; can take additional keyword arguments. |
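To make the interface concrete, here is a hypothetical minimal subclass backed by a plain dict (real implementations typically use an external store; `DictCache` and its key layout are illustrative, not part of the library):

```python
from typing import Any

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache

class DictCache(BaseCache):
    """Hypothetical cache keyed by the (prompt, llm_string) pair."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()

    # A dict involves no blocking I/O, so overriding the async methods
    # avoids the executor round trip of the default implementations.
    async def alookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        return self.lookup(prompt, llm_string)

    async def aupdate(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self.update(prompt, llm_string, return_val)

    async def aclear(self, **kwargs: Any) -> None:
        self.clear(**kwargs)
```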
#### lookup `abstractmethod`

`lookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None`

Look up based on `prompt` and `llm_string`.

A cache implementation is expected to generate a key from the 2-tuple of `prompt` and `llm_string` (e.g., by concatenating them with a delimiter).

| PARAMETER | DESCRIPTION |
| --- | --- |
| `prompt` | A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt into the language model. **TYPE:** `str` |
| `llm_string` | A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation. **TYPE:** `str` |

| RETURNS | DESCRIPTION |
| --- | --- |
| `RETURN_VAL_TYPE \| None` | On a cache miss, return `None`. The cached value is a list of `Generation` objects (or subclasses). |
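One way to satisfy the key-derivation expectation above is to hash the 2-tuple into a single string key; the delimiter and hash choice below are illustrative, not prescribed by the interface:

```python
import hashlib

def make_cache_key(prompt: str, llm_string: str) -> str:
    # Join the pair with a delimiter unlikely to appear in either part,
    # then hash so the key has a fixed length for external stores.
    raw = f"{prompt}\x1f{llm_string}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
```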
#### update `abstractmethod`

`update(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None`

Update cache based on `prompt` and `llm_string`.

The `prompt` and `llm_string` are used to generate a key for the cache. The key should match that of the `lookup` method.

| PARAMETER | DESCRIPTION |
| --- | --- |
| `prompt` | A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt into the language model. **TYPE:** `str` |
| `llm_string` | A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation. **TYPE:** `str` |
| `return_val` | The value to be cached. The value is a list of `Generation` objects (or subclasses). **TYPE:** `RETURN_VAL_TYPE` |
#### clear `abstractmethod`

`clear(**kwargs: Any) -> None`

Clear the cache. Implementations may accept additional keyword arguments.
#### alookup `async`

`alookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None`

Async look up based on `prompt` and `llm_string`.

A cache implementation is expected to generate a key from the 2-tuple of `prompt` and `llm_string` (e.g., by concatenating them with a delimiter).

| PARAMETER | DESCRIPTION |
| --- | --- |
| `prompt` | A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt into the language model. **TYPE:** `str` |
| `llm_string` | A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation. **TYPE:** `str` |

| RETURNS | DESCRIPTION |
| --- | --- |
| `RETURN_VAL_TYPE \| None` | On a cache miss, return `None`. The cached value is a list of `Generation` objects (or subclasses). |
#### aupdate `async`

`aupdate(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None`

Async update cache based on `prompt` and `llm_string`.

The `prompt` and `llm_string` are used to generate a key for the cache. The key should match that of the `lookup` method.

| PARAMETER | DESCRIPTION |
| --- | --- |
| `prompt` | A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt into the language model. **TYPE:** `str` |
| `llm_string` | A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation. **TYPE:** `str` |
| `return_val` | The value to be cached. The value is a list of `Generation` objects (or subclasses). **TYPE:** `RETURN_VAL_TYPE` |