langchain_community.cache.GPTCache¶

class langchain_community.cache.GPTCache(init_func: Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]] = None)[source]¶

Cache that uses GPTCache as a backend.

Initialize by passing in an init function (default: None).

Parameters
  • init_func (Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]]) – function that initializes GPTCache; it receives the gptcache.Cache object and, optionally, the llm string (default: None)

Example:

.. code-block:: python

    # Initialize GPTCache with a custom init function
    import gptcache
    from gptcache.processor.pre import get_prompt
    from gptcache.manager.factory import manager_factory
    from langchain.globals import set_llm_cache

    # Avoid multiple caches using the same file, which would cause
    # different LLM model caches to affect each other
    def init_gptcache(cache_obj: gptcache.Cache, llm: str):
        cache_obj.init(
            pre_embedding_func=get_prompt,
            data_manager=manager_factory(
                manager="map",
                data_dir=f"map_cache_{llm}",
            ),
        )

    set_llm_cache(GPTCache(init_gptcache))
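For context, a minimal sketch of the cache in effect once set_llm_cache has run; the langchain_openai.OpenAI wrapper and model name are assumptions for illustration, not part of this class:

.. code-block:: python

    from langchain_openai import OpenAI  # assumed model wrapper, not part of this API

    llm = OpenAI(model_name="gpt-3.5-turbo-instruct")  # assumed model name
    llm.invoke("Tell me a joke")  # first call reaches the model and populates GPTCache
    llm.invoke("Tell me a joke")  # identical call is served from the cache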

Methods

__init__([init_func])

Initialize by passing in an init function (default: None).

clear(**kwargs)

Clear cache.

lookup(prompt, llm_string)

Look up the cache data.

update(prompt, llm_string, return_val)

Update cache.

__init__(init_func: Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]] = None)[source]¶

Initialize by passing in an init function (default: None).

Parameters
  • init_func (Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]]) – function that initializes GPTCache; it receives the gptcache.Cache object and, optionally, the llm string (default: None)

Example:

.. code-block:: python

    # Initialize GPTCache with a custom init function
    import gptcache
    from gptcache.processor.pre import get_prompt
    from gptcache.manager.factory import manager_factory
    from langchain.globals import set_llm_cache

    # Avoid multiple caches using the same file, which would cause
    # different LLM model caches to affect each other
    def init_gptcache(cache_obj: gptcache.Cache, llm: str):
        cache_obj.init(
            pre_embedding_func=get_prompt,
            data_manager=manager_factory(
                manager="map",
                data_dir=f"map_cache_{llm}",
            ),
        )

    set_llm_cache(GPTCache(init_gptcache))

clear(**kwargs: Any) → None[source]¶

Clear cache.

lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶

Look up the cache data. First, retrieve the corresponding cache object using the llm_string parameter, and then retrieve the data from the cache based on the prompt.
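A minimal sketch of calling lookup directly; the llm_string value here is illustrative (LangChain normally derives it from the model's parameters and calls this method for you):

.. code-block:: python

    from langchain_community.cache import GPTCache

    cache = GPTCache(init_gptcache)  # init_gptcache as defined above

    # Returns the cached Generation sequence for this (prompt, llm_string)
    # pair, or None on a cache miss.
    hits = cache.lookup("Tell me a joke", llm_string="example-llm-params")
    if hits is not None:
        print(hits[0].text)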

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶

Update cache. First, retrieve the corresponding cache object using the llm_string parameter, and then store the prompt and return_val in the cache object.
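Likewise, a sketch of storing a result with update; the llm_string and Generation text are illustrative, and cache is the GPTCache instance from the lookup sketch above:

.. code-block:: python

    from langchain_core.outputs import Generation

    # Store a result; a later lookup with the same prompt and llm_string
    # returns this Generation list.
    cache.update(
        "Tell me a joke",
        llm_string="example-llm-params",
        return_val=[Generation(text="Why did the chicken cross the road?")],
    )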
