langchain_community.cache.GPTCache¶

class langchain_community.cache.GPTCache(init_func: Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]] = None)[source]¶

Cache that uses GPTCache as a backend.

Initialize by passing in an init function (default: None).

Parameters
  • init_func (Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]]) – init GPTCache function (default: None)

Example:

.. code-block:: python

    # Initialize GPTCache with a custom init function
    import gptcache
    from gptcache.processor.pre import get_prompt
    from gptcache.manager.factory import manager_factory

    from langchain.globals import set_llm_cache
    from langchain_community.cache import GPTCache

    # Avoid multiple caches using the same file, which would cause
    # different llm model caches to affect each other
    def init_gptcache(cache_obj: gptcache.Cache, llm: str):
        cache_obj.init(
            pre_embedding_func=get_prompt,
            data_manager=manager_factory(
                manager="map", data_dir=f"map_cache_{llm}"
            ),
        )

    set_llm_cache(GPTCache(init_gptcache))
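Once registered with set_llm_cache, the cache is consulted transparently on every LLM call. A minimal usage sketch, assuming the langchain_openai package is installed and an OpenAI API key is configured (the model name here is illustrative):

.. code-block:: python

    from langchain_openai import OpenAI

    llm = OpenAI(model_name="gpt-3.5-turbo-instruct")

    # First call misses the cache, hits the API, and stores the result
    llm.invoke("Tell me a joke")
    # An identical second call is served from GPTCache without an API request
    llm.invoke("Tell me a joke")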

Methods

__init__([init_func])

Initialize by passing in an init function (default: None).

aclear(**kwargs)

Asynchronously clear the cache; accepts additional keyword arguments.

alookup(prompt, llm_string)

Asynchronously look up based on prompt and llm_string.

aupdate(prompt, llm_string, return_val)

Asynchronously update the cache based on prompt and llm_string.

clear(**kwargs)

Clear cache.

lookup(prompt, llm_string)

Look up the cache data.

update(prompt, llm_string, return_val)

Update cache.

__init__(init_func: Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]] = None)[source]¶

Initialize by passing in an init function (default: None).

Parameters
  • init_func (Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]]) – init GPTCache function (default: None)

Example:

.. code-block:: python

    # Initialize GPTCache with a custom init function
    import gptcache
    from gptcache.processor.pre import get_prompt
    from gptcache.manager.factory import manager_factory

    from langchain.globals import set_llm_cache
    from langchain_community.cache import GPTCache

    # Avoid multiple caches using the same file, which would cause
    # different llm model caches to affect each other
    def init_gptcache(cache_obj: gptcache.Cache, llm: str):
        cache_obj.init(
            pre_embedding_func=get_prompt,
            data_manager=manager_factory(
                manager="map", data_dir=f"map_cache_{llm}"
            ),
        )

    set_llm_cache(GPTCache(init_gptcache))

async aclear(**kwargs: Any) → None¶

Asynchronously clear the cache; accepts additional keyword arguments.

Parameters

kwargs (Any) –

Return type

None

async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]¶

Asynchronously look up based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

Optional[Sequence[Generation]]

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None¶

Asynchronously update the cache based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

  • return_val (Sequence[Generation]) –

Return type

None
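A minimal sketch of calling the async methods directly, assuming init_gptcache from the example above; the llm_string value is a placeholder for the serialized LLM configuration that LangChain normally supplies internally:

.. code-block:: python

    import asyncio

    from langchain_community.cache import GPTCache
    from langchain_core.outputs import Generation

    async def demo() -> None:
        cache = GPTCache(init_gptcache)  # init_gptcache as defined above
        llm_string = "fake-llm-config"  # placeholder; normally derived from the LLM
        # Store a generation, then read it back from the cache
        await cache.aupdate("Tell me a joke", llm_string, [Generation(text="...")])
        print(await cache.alookup("Tell me a joke", llm_string))

    asyncio.run(demo())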

clear(**kwargs: Any) → None[source]¶

Clear cache.

Parameters

kwargs (Any) –

Return type

None

lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶

Look up the cache data. First, retrieve the corresponding cache object using the llm_string parameter, and then retrieve the data from the cache based on the prompt.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

Optional[Sequence[Generation]]

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶

Update cache. First, retrieve the corresponding cache object using the llm_string parameter, and then store the prompt and return_val in the cache object.

Parameters
  • prompt (str) –

  • llm_string (str) –

  • return_val (Sequence[Generation]) –

Return type

None
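Taken together, update and lookup form a simple synchronous round trip. A minimal sketch, again assuming init_gptcache from the example above and a placeholder llm_string:

.. code-block:: python

    from langchain_community.cache import GPTCache
    from langchain_core.outputs import Generation

    cache = GPTCache(init_gptcache)  # init_gptcache as defined above
    llm_string = "fake-llm-config"  # placeholder; normally derived from the LLM

    # Store a generation for this (prompt, llm_string) pair, then read it back
    cache.update("Tell me a joke", llm_string, [Generation(text="Why did the ...")])
    print(cache.lookup("Tell me a joke", llm_string))

    cache.clear()  # clear the underlying gptcache instances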

Examples using GPTCache¶