langchain_community.cache.MomentoCache¶
- class langchain_community.cache.MomentoCache(cache_client: momento.CacheClient, cache_name: str, *, ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶
Cache that uses Momento as a backend. See https://gomomento.com/
Instantiate a prompt cache using Momento as a backend.
Note: to instantiate the cache client passed to MomentoCache, you must have a Momento account. See https://gomomento.com/.
- Parameters
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the data.
ttl (Optional[timedelta], optional) – The time to live for the cache items. Defaults to None, i.e., use the client default TTL.
ensure_cache_exists (bool, optional) – Create the cache if it doesn't exist. Defaults to True.
- Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClient
ValueError – ttl is non-null and negative
Methods
__init__(cache_client, cache_name, *[, ttl, ...]) – Instantiate a prompt cache using Momento as a backend.
aclear(**kwargs) – Clear cache that can take additional keyword arguments.
alookup(prompt, llm_string) – Look up based on prompt and llm_string.
aupdate(prompt, llm_string, return_val) – Update cache based on prompt and llm_string.
clear(**kwargs) – Clear the cache.
from_client_params(cache_name, ttl, *[, ...]) – Construct cache from CacheClient parameters.
lookup(prompt, llm_string) – Look up llm generations in cache by prompt and associated model and settings.
update(prompt, llm_string, return_val) – Store llm generations in cache.
- __init__(cache_client: momento.CacheClient, cache_name: str, *, ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶
Instantiate a prompt cache using Momento as a backend.
Note: to instantiate the cache client passed to MomentoCache, you must have a Momento account. See https://gomomento.com/.
- Parameters
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the data.
ttl (Optional[timedelta], optional) – The time to live for the cache items. Defaults to None, i.e., use the client default TTL.
ensure_cache_exists (bool, optional) – Create the cache if it doesn't exist. Defaults to True.
- Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClient
ValueError – ttl is non-null and negative
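The lookup/update contract this cache implements can be sketched with a minimal in-memory stand-in. Both `DictCache` and the `Generation` record below are hypothetical illustrations (not the langchain_community or momento classes); the point is the keying by both prompt and llm_string, a miss returning None, and a hit after update:

```python
from typing import Any, Dict, Optional, Sequence, Tuple


class Generation:
    """Hypothetical stand-in for langchain's Generation record."""

    def __init__(self, text: str) -> None:
        self.text = text


class DictCache:
    """In-memory sketch of the lookup/update/clear contract."""

    def __init__(self) -> None:
        self._store: Dict[Tuple[str, str], Sequence[Generation]] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
        # Keyed by both the prompt and the model/settings string, so the
        # same prompt queried under different settings is a miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: Sequence[Generation]) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()


cache = DictCache()
assert cache.lookup("2+2?", "gpt-x|temp=0") is None       # miss before update
cache.update("2+2?", "gpt-x|temp=0", [Generation("4")])
assert cache.lookup("2+2?", "gpt-x|temp=0") is not None   # hit after update
```

The real MomentoCache follows the same shape, with Momento's hosted cache service as the backing store instead of a dict.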
- async aclear(**kwargs: Any) → None¶
Clear the cache asynchronously; accepts additional keyword arguments.
- Parameters
kwargs (Any) –
- Return type
None
- async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]¶
Look up based on prompt and llm_string.
- Parameters
prompt (str) –
llm_string (str) –
- Return type
Optional[Sequence[Generation]]
- async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None¶
Update cache based on prompt and llm_string.
- Parameters
prompt (str) –
llm_string (str) –
return_val (Sequence[Generation]) –
- Return type
None
- clear(**kwargs: Any) → None[source]¶
Clear the cache.
- Raises
SdkException – Momento service or network error
- Parameters
kwargs (Any) –
- Return type
None
- classmethod from_client_params(cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, api_key: Optional[str] = None, auth_token: Optional[str] = None, **kwargs: Any) → MomentoCache[source]¶
Construct cache from CacheClient parameters.
- Parameters
cache_name (str) –
ttl (timedelta) –
configuration (Optional[momento.config.Configuration]) –
api_key (Optional[str]) –
auth_token (Optional[str]) –
kwargs (Any) –
- Return type
MomentoCache
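The api_key/auth_token pair above (auth_token being the older parameter name) suggests a credential-resolution order. The helper below is a hypothetical sketch of one plausible reading, prefer api_key, fall back to auth_token, then to an environment variable; the function, its precedence, and the MOMENTO_API_KEY variable name are all assumptions, not the library's actual implementation:

```python
import os
from typing import Optional


def resolve_momento_credential(
    api_key: Optional[str] = None,
    auth_token: Optional[str] = None,
    env_var: str = "MOMENTO_API_KEY",  # assumed variable name
) -> str:
    """Hypothetical sketch: prefer api_key, fall back to the legacy
    auth_token parameter, then to an environment variable."""
    credential = api_key or auth_token or os.environ.get(env_var)
    if credential is None:
        raise ValueError(
            f"Provide api_key, auth_token, or set the {env_var} "
            "environment variable."
        )
    return credential
```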
- lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up llm generations in cache by prompt and associated model and settings.
- Parameters
prompt (str) – The prompt run through the language model.
llm_string (str) – The language model version and settings.
- Raises
SdkException – Momento service or network error
- Returns
A list of language model generations, or None on a cache miss.
- Return type
Optional[RETURN_VAL_TYPE]
- update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Store llm generations in cache.
- Parameters
prompt (str) – The prompt run through the language model.
llm_string (str) – The language model string.
return_val (RETURN_VAL_TYPE) – A list of language model generations.
- Raises
SdkException – Momento service or network error
Exception – Unexpected response
- Return type
None
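The ttl constraint on the constructor (a ValueError for an invalid timedelta, where rejecting a negative ttl is the natural reading, since None is explicitly allowed as "use the client default") can be expressed as a small check. `validate_ttl` is a hypothetical helper for illustration, not part of the library:

```python
from datetime import timedelta
from typing import Optional


def validate_ttl(ttl: Optional[timedelta]) -> None:
    """Raise ValueError when a ttl is given but negative; None means
    'use the client default TTL' and is accepted."""
    if ttl is not None and ttl < timedelta(0):
        raise ValueError(f"ttl must be non-negative, got {ttl!r}")


validate_ttl(None)                   # client default TTL, accepted
validate_ttl(timedelta(minutes=5))   # accepted
```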