langchain_community.cache.RedisSemanticCache
- class langchain_community.cache.RedisSemanticCache(redis_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]
Cache that uses Redis as a vector-store backend.
Initialize by passing in the Redis connection URL, an embedding provider, and an optional similarity score threshold.
- Parameters
redis_url (str) – URL to connect to Redis.
embedding (Embeddings) – Embedding provider for semantic encoding and search.
score_threshold (float) – Similarity score threshold for treating a cached entry as a semantic match. Defaults to 0.2.
Example:
    from langchain.globals import set_llm_cache
    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings

    set_llm_cache(RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings()
    ))
Attributes
DEFAULT_SCHEMA
Methods
__init__(redis_url, embedding[, score_threshold])
    Initialize the cache with a Redis URL, an embedding provider, and an optional score threshold.
aclear(**kwargs)
    Clear the cache; accepts additional keyword arguments.
alookup(prompt, llm_string)
    Look up based on prompt and llm_string.
aupdate(prompt, llm_string, return_val)
    Update cache based on prompt and llm_string.
clear(**kwargs)
    Clear semantic cache for a given llm_string.
lookup(prompt, llm_string)
    Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
    Update cache based on prompt and llm_string.
- __init__(redis_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]
Initialize the cache with a Redis connection URL, an embedding provider, and an optional similarity score threshold.
- Parameters
redis_url (str) – URL to connect to Redis.
embedding (Embeddings) – Embedding provider for semantic encoding and search.
score_threshold (float) – Similarity score threshold for treating a cached entry as a semantic match. Defaults to 0.2.
Example:
    from langchain.globals import set_llm_cache
    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings

    set_llm_cache(RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings()
    ))
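Beyond construction, the following end-to-end sketch shows the cache in use. It is illustrative only: it assumes a local Redis Stack server at redis://localhost:6379, the langchain and langchain_community packages installed, and a valid OPENAI_API_KEY in the environment.

    # Minimal usage sketch; assumes a local Redis Stack server and
    # OpenAI credentials in the environment.
    from langchain.globals import set_llm_cache
    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings
    from langchain_community.llms import OpenAI

    set_llm_cache(RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings(),
    ))

    llm = OpenAI()
    # The first call embeds the prompt and stores the generation in Redis.
    llm.invoke("Tell me a joke")
    # A semantically similar prompt within score_threshold is answered
    # from the cache instead of calling the model again.
    llm.invoke("Please tell me a joke")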
- async aclear(**kwargs: Any) → None
Clear the cache; accepts additional keyword arguments.
- Parameters
kwargs (Any) – Additional keyword arguments; forwarded to the synchronous clear.
- Return type
None
- async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]
Look up based on prompt and llm_string.
- Parameters
prompt (str) – A string representation of the prompt.
llm_string (str) – A string representation of the LLM configuration.
- Return type
Optional[Sequence[Generation]]
- async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None
Update cache based on prompt and llm_string.
- Parameters
prompt (str) – A string representation of the prompt.
llm_string (str) – A string representation of the LLM configuration.
return_val (Sequence[Generation]) – The generations to cache.
- Return type
None
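To see the async methods together, here is a minimal sketch. The Redis URL and the llm_string literal are assumptions: in real use, llm_string is derived from the model's configuration rather than hand-written, and a running Redis Stack plus OpenAI credentials are required.

    import asyncio

    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings
    from langchain_core.outputs import Generation

    async def main() -> None:
        cache = RedisSemanticCache(
            redis_url="redis://localhost:6379",
            embedding=OpenAIEmbeddings(),
        )
        llm_string = "hypothetical-llm-config"  # placeholder, not a real LLM string
        # Cache a generation under the prompt's embedding.
        await cache.aupdate(
            "Tell me a joke", llm_string,
            [Generation(text="Why did the chicken cross the road?")],
        )
        # A semantically similar prompt should return the cached entry.
        print(await cache.alookup("Please tell me a joke", llm_string))
        # Drop the cached entries for this llm_string.
        await cache.aclear(llm_string=llm_string)

    asyncio.run(main())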
- clear(**kwargs: Any) → None [source]
Clear semantic cache for a given llm_string.
- Parameters
kwargs (Any) – Must include llm_string, identifying the LLM configuration whose cached entries are dropped.
- Return type
None
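As a short sketch, clearing one LLM configuration's entries looks like the following; the llm_string literal is a hypothetical placeholder for the string an LLM derives from its parameters.

    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings

    cache = RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings(),
    )
    # Drop every cached entry for one LLM configuration; the llm_string
    # keyword selects the per-LLM index. The literal is a placeholder.
    cache.clear(llm_string="hypothetical-llm-config")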
- lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]] [source]
Look up based on prompt and llm_string.
- Parameters
prompt (str) – A string representation of the prompt.
llm_string (str) – A string representation of the LLM configuration.
- Return type
Optional[Sequence[Generation]]
- update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None [source]
Update cache based on prompt and llm_string.
- Parameters
prompt (str) – A string representation of the prompt.
llm_string (str) – A string representation of the LLM configuration.
return_val (Sequence[Generation]) – The generations to cache.
- Return type
None
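Finally, a hedged sketch of driving lookup and update by hand, without registering the cache via set_llm_cache. The llm_string literal is again a placeholder, and a running Redis Stack instance plus OpenAI credentials are assumed.

    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings
    from langchain_core.outputs import Generation

    cache = RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings(),
    )
    llm_string = "hypothetical-llm-config"  # placeholder for the real LLM string

    # Store a generation under the prompt's embedding.
    cache.update("What is the capital of France?", llm_string,
                 [Generation(text="Paris")])

    # A semantically close prompt returns the cached generations;
    # a miss returns None.
    print(cache.lookup("What's France's capital?", llm_string))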